We introduce DC-AR, a novel masked autoregressive (AR) text-to-image generation framework that delivers superior image generation quality with exceptional computational efficiency. Due to the limitations of their tokenizers, prior masked AR models have lagged behind diffusion models in quality or efficiency. We overcome this limitation by introducing DC-HT, a deep compression hybrid tokenizer for AR models that achieves a 32x spatial compression ratio while maintaining high reconstruction fidelity and cross-resolution generalization ability. Building upon DC-HT, we extend MaskGIT and create a new hybrid masked autoregressive image generation framework that first produces the structural elements through discrete tokens and then applies refinements via residual tokens. DC-AR achieves state-of-the-art results with a gFID of 5.49 on MJHQ-30K and an overall score of 0.69 on GenEval, while offering 1.5-7.9x higher throughput and 2.0-3.5x lower latency compared to prior leading diffusion and masked autoregressive models.
Online demo: https://dc-ar.hanlab.ai
DC-AR is a masked autoregressive (AR) text-to-image generation framework that delivers superior quality with exceptional efficiency. It offers 1.5–7.9x faster throughput and 2.0–3.5x lower latency than other leading models while achieving state-of-the-art results with a gFID of 5.49 on MJHQ-30K and an overall score of 0.69 on GenEval.
The success of DC-AR comes from two major aspects:
1. DC-HT: a deep compression hybrid tokenizer for AR models that achieves a 32x spatial compression ratio.
2. DC-AR: a hybrid masked AR generation framework that first produces the structural elements through discrete tokens and then applies refinements via residual tokens.
Due to the limitations of their tokenizers, prior masked AR models have lagged behind diffusion models in quality or efficiency. To overcome this challenge, we build DC-HT, a deep compression hybrid tokenizer for AR models that achieves a 32x spatial compression ratio while maintaining high reconstruction fidelity and cross-resolution generalization ability.
The 32x compression ratio is the key to the efficiency boost: it yields 4x fewer tokens and a substantial speedup compared to a conventional 16x AR tokenizer. To mitigate the information loss introduced by such a high compression ratio and ensure good reconstruction performance, we adopt the hybrid tokenization technique from HART. However, training a hybrid tokenizer directly is challenging, so we decompose the training process into three adaptation stages to secure DC-HT's reconstruction performance, as shown in the following figure.
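To make the role of the two token types concrete, here is a minimal PyTorch sketch of hybrid tokenization under simplified assumptions. The names `encoder` and `codebook` are hypothetical placeholders, and the sketch does not reflect the actual DC-HT architecture or its three-stage adaptation training.

```python
import torch
import torch.nn as nn

def hybrid_tokenize(image, encoder, codebook: nn.Embedding):
    """Illustrative hybrid tokenization: discrete tokens plus continuous residuals.

    `encoder` and `codebook` are hypothetical placeholders, not the DC-HT modules.
    """
    # 32x spatial compression: a 512x512 image maps to a 16x16 latent grid
    # (256 tokens), versus 32x32 = 1024 tokens for a conventional 16x tokenizer.
    latent = encoder(image)                                    # (B, C, H/32, W/32)
    B, C, H, W = latent.shape
    flat = latent.permute(0, 2, 3, 1).reshape(-1, C)           # (B*H*W, C)

    # Nearest-codebook-entry lookup gives the discrete tokens (coarse structure).
    dists = torch.cdist(flat, codebook.weight)                 # (B*H*W, K)
    indices = dists.argmin(dim=-1)
    quantized = codebook(indices).reshape(B, H, W, C).permute(0, 3, 1, 2)

    # Residual tokens keep what quantization loses, in continuous form.
    residual = latent - quantized
    return indices.reshape(B, H, W), residual
```

At decoding time, the quantized and residual latents are summed before being passed to the decoder, so the residual path recovers fine details that the discrete codebook alone would drop.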
To coordinate with our hybrid tokenizer, we design a hybrid generation framework that utilizes both discrete tokens and residual tokens. Compared to prior AR methods that are either discrete-only (MaskGIT) or continuous-only (MAR), our hybrid generation process first generates all discrete tokens and then generates the residual tokens; we then sum the discrete and residual tokens to obtain the final image. During training, we mask a random subset of discrete tokens and train the transformer model to predict these masked tokens, using a cross-entropy loss for the discrete part and a diffusion loss for the residual part.
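The generation order can be sketched as follows. This is a simplified illustration assuming a MaskGIT-style cosine unmasking schedule; `transformer`, `residual_head`, `decoder`, `codebook`, and the hyperparameters are hypothetical placeholders, not the released DC-AR sampler.

```python
import math
import torch

@torch.no_grad()
def hybrid_generate(transformer, residual_head, decoder, codebook,
                    text_emb, grid=16, steps=8, mask_id=0):
    """Sketch of the hybrid generation order: discrete tokens first
    (MaskGIT-style parallel decoding), then residual tokens, then decoding.
    All module names and hyperparameters are hypothetical placeholders."""
    B, N = text_emb.shape[0], grid * grid
    tokens = torch.full((B, N), mask_id, dtype=torch.long)
    unknown = torch.ones(B, N, dtype=torch.bool)

    # Phase 1: fill in all discrete tokens over a few parallel decoding steps.
    for step in range(steps):
        logits = transformer(tokens, text_emb)              # (B, N, vocab)
        conf, pred = logits.softmax(-1).max(-1)             # (B, N)
        conf = conf.masked_fill(~unknown, float("inf"))     # never re-mask committed tokens
        # Cosine schedule: number of positions allowed to stay masked after this step.
        n_mask = int(N * math.cos(math.pi / 2 * (step + 1) / steps))
        if n_mask > 0:
            thresh = conf.kthvalue(n_mask, dim=-1, keepdim=True).values
            commit = unknown & (conf > thresh)
        else:
            commit = unknown                                 # final step: commit everything
        tokens = torch.where(commit, pred, tokens)
        unknown = unknown & ~commit

    # Phase 2: predict residual tokens conditioned on the completed discrete tokens
    # (trained with a diffusion loss; the sampling procedure is omitted here).
    discrete_latent = codebook(tokens)                       # (B, N, C)
    residual = residual_head(discrete_latent, text_emb)      # (B, N, C)

    # Sum the discrete and residual latents and decode to pixels.
    latent = (discrete_latent + residual).transpose(1, 2).reshape(B, -1, grid, grid)
    return decoder(latent)
```

The exact number of steps, masking schedule, and residual sampling procedure are design choices of the method; the point of the sketch is only the ordering: discrete tokens set the structure, residual tokens refine it, and their sum is decoded into the image.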
DC-AR achieves leading performance in both visual quality and image-text alignment, while offering 1.5-7.9x higher throughput and 2.0-3.5x lower latency compared to other leading methods.
@article{wu2025dcar,
title={DC-AR: Efficient Masked Autoregressive Image Generation with Deep Compression Hybrid Tokenizer},
author={Wu, Yecheng and Chen, Junyu and Zhang, Zhuoyang and Xie, Enze and Yu, Jincheng and Chen, Junsong and Hu, Jinyi and Lu, Yao and Han, Song and Cai, Han},
journal={arXiv preprint arXiv:2410.10733},
year={2025}
}