DCM: Dual-Expert Consistency Model for Efficient and High-Quality Video Generation
Paper: 2506.03123
```python
import torch
from diffusers import DiffusionPipeline

# Switch "cuda" to "mps" on Apple devices.
pipe = DiffusionPipeline.from_pretrained("cszy98/DCM", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

This repository hosts the Dual-Expert Consistency Model (DCM) presented in the paper *Dual-Expert Consistency Model for Efficient and High-Quality Video Generation*. Applying consistency models to video diffusion often causes temporal inconsistency and loss of detail; DCM addresses this with a dual-expert approach, achieving state-of-the-art visual quality with significantly fewer sampling steps.
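To make the dual-expert idea concrete, here is a minimal, hypothetical sketch of expert routing during sampling: one expert handles the high-noise (early) steps and another the low-noise (late) steps. The function and parameter names (`semantic_expert`, `detail_expert`, `boundary`) are illustrative assumptions, not the authors' API.

```python
def dual_expert_sample(semantic_expert, detail_expert, x_T, timesteps, boundary):
    """Hypothetical sketch of dual-expert sampling.

    Routes each denoising step to one of two experts by timestep:
    steps at or above `boundary` (high noise) go to `semantic_expert`,
    the rest (low noise) go to `detail_expert`.
    """
    x = x_T
    for t in timesteps:  # ordered from high noise to low noise
        expert = semantic_expert if t >= boundary else detail_expert
        x = expert(x, t)
    return x
```

The split point between the two experts is a design choice; the paper's actual distillation and expert assignment are more involved than this routing loop.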
For more information, please refer to the project's GitHub repository.