Yuang Ai*, Jiaming Han*, Shaobin Zhuang*, Weijia Mao, Xuefeng Hu, Ziyan Yang, Zhenheng Yang, Huaibo Huang†, Xiangyu Yue†, Hao Chen*†‡
* Equal Contribution † Corresponding Author ‡ Project Lead
For visual generation, discrete autoregressive models often struggle with poor tokenizer reconstruction, difficulty sampling from large vocabularies, and slow token-by-token generation. We present BitDance, which addresses these challenges with a large-vocabulary binary tokenizer, a binary diffusion head for sampling in a large discrete space, and a next-patch diffusion paradigm that enables efficient multi-token prediction. BitDance is an open-source discrete autoregressive foundation model with 14B parameters, trained on large-scale multimodal tokens. While maintaining the standard language-modeling paradigm for text tokens, BitDance employs a next-patch diffusion paradigm for visual tokens to predict multiple tokens in parallel, up to 64 per step. This unified multimodal framework is simple, efficient, and scalable, and it generates high-resolution, photorealistic images efficiently.
- 2026.2.18: Special thanks to @Bili-Sakura for providing the diffusers version of all BitDance models!
- 2026.2.17: We release UniWeTok, a unified binary tokenizer with codebook size $2^{128}$ for unified multimodal large language models. Check out the UniWeTok README for more details!
- 2026.2.17: We release the website, demo, model, and paper for BitDance.
BitDance is a purely autoregressive multimodal generative model. It adopts a decoder-only architecture with three key components: a large-vocabulary binary tokenizer, a binary diffusion head, and a next-patch diffusion paradigm for efficient multi-token prediction. This allows BitDance to predict up to 64 visual tokens in parallel, significantly improving speed. After large-scale training, it surpasses open-source AR models on text-to-image benchmarks and achieves a speedup of over 30x compared to standard next-token prediction AR models.
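To make the decoding pattern concrete, the toy sketch below mimics one next-patch diffusion step: a single forward pass over the prefix conditions a diffusion-style head that refines all K tokens of the next patch in parallel, starting from noise. Everything here (the GRU backbone, the one-layer "denoiser", shapes, and step counts) is a schematic assumption for illustration, not BitDance's actual architecture or API.

```python
import torch
import torch.nn as nn

# Toy sketch of next-patch diffusion decoding (illustrative only; not the
# released BitDance model). One AR step emits a whole patch of K tokens.
D, K, BITS, STEPS = 64, 16, 8, 10   # hidden dim, tokens per patch, bits per token, denoise steps

backbone = nn.GRU(BITS, D, batch_first=True)   # stand-in for the decoder-only transformer
denoiser = nn.Linear(D + BITS + 1, BITS)       # stand-in for the binary diffusion head

@torch.no_grad()
def generate(num_patches: int) -> torch.Tensor:
    seq = torch.zeros(1, 1, BITS)              # placeholder start-of-sequence token
    for _ in range(num_patches):
        _, h = backbone(seq)                   # condition on the full prefix
        h = h[-1]                              # last hidden state, shape (1, D)
        x = torch.randn(1, K, BITS)            # start the next patch from noise
        for t in range(STEPS, 0, -1):          # iterative refinement, all K tokens in parallel
            ts = torch.full((1, K, 1), t / STEPS)
            cond = h[:, None, :].expand(1, K, D)
            x = denoiser(torch.cat([cond, x, ts], dim=-1))
        bits = (x > 0).float()                 # binarize: each token becomes a BITS-bit code
        seq = torch.cat([seq, bits], dim=1)    # append the whole patch in one AR step
    return seq[:, 1:]

print(generate(num_patches=4).shape)           # torch.Size([1, 64, 8]): 4 patches x 16 tokens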
1️⃣ Binary Visual Tokenizers
We release three binary tokenizers with different downsampling ratios and vocabulary sizes. All model weights and configs can be found at BitDance-Tokenizer.
| Vocabulary Size | Down Ratio | IN-256 PSNR | IN-256 SSIM | Weight | Config |
|---|---|---|---|---|---|
| $2^{32}$ | 16 | 24.90 | 0.72 | ae_d16c32 | ae_d16c32 |
| $2^{128}$ | 32 | 23.26 | 0.67 | ae_d32c128 | ae_d32c128 |
| $2^{256}$ | 32 | 25.29 | 0.74 | ae_d32c256 | ae_d32c256 |
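For intuition: a binary tokenizer quantizes each C-channel latent vector to a sign pattern, so every spatial position becomes a C-bit code and the effective vocabulary is $2^C$ (e.g. $2^{32}$ for ae_d16c32) without storing an explicit codebook. The snippet below is a minimal sketch of such sign quantization with a straight-through estimator; the function name and details are illustrative assumptions, not the released tokenizer code.

```python
import torch

def binary_quantize(z: torch.Tensor) -> torch.Tensor:
    """Map continuous latents z of shape (B, C, H, W) to codes in {-1, +1}.

    Each spatial position becomes a C-bit token, so the implicit vocabulary
    size is 2**C (e.g. C=32 gives 2**32 for ae_d16c32).
    """
    b = torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
    return z + (b - z).detach()  # straight-through gradient for training

z = torch.randn(1, 32, 16, 16)   # e.g. a 256px image at down ratio 16
codes = binary_quantize(z)
print(codes.unique())            # tensor([-1., 1.])
```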
2️⃣ T2I Models
We offer two models, BitDance-14B-64x and BitDance-14B-16x, which can predict 64 and 16 tokens in parallel at each step, respectively.
| Model | #Tokens per Step | Steps (1024px) | Supported Sizes | Huggingface |
|---|---|---|---|---|
| BitDance-14B-64x | 64 | 64 | 1024px | BitDance-14B-64x |
| BitDance-14B-16x | 16 | 256 | 512px & 1024px | BitDance-14B-16x |
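The step counts follow directly from the tokenizer's down ratio: at a down ratio of 16 (our assumption, based on the tokenizer table above), a 1024px image is a 64x64 grid of 4096 visual tokens, so predicting 64 tokens per step takes 64 AR steps and 16 per step takes 256. A quick sanity check:

```python
# Sanity check for the "Steps (1024px)" column, assuming a down ratio of 16.
down_ratio = 16
num_tokens = (1024 // down_ratio) ** 2            # 64 x 64 = 4096 visual tokens
for tokens_per_step in (64, 16):
    print(f"{tokens_per_step} tokens/step -> {num_tokens // tokens_per_step} steps")
# 64 tokens/step -> 64 steps
# 16 tokens/step -> 256 steps
```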
3️⃣ ImageNet-trained Models
To reproduce the experiments on ImageNet, check here for details.
| Model | Resolution | Params | Steps (256px) | FID | Huggingface |
|---|---|---|---|---|---|
| BitDance-B-1x | 256x256 | 242M | 256 | 1.68 | BitDance_B_1x.pt |
| BitDance-B-4x | 256x256 | 260M | 64 | 1.69 | BitDance_B_4x.pt |
| BitDance-B-16x | 256x256 | 260M | 16 | 1.91 | BitDance_B_16x.pt |
| BitDance-L-1x | 256x256 | 527M | 256 | 1.31 | BitDance_L_1x.pt |
| BitDance-H-1x | 256x256 | 1.0B | 256 | 1.24 | BitDance_H_1x.pt |
Thanks to @Bili-Sakura, all diffusers-version models can be found here.
1️⃣ Create Conda Environment and Install Packages
```bash
git clone https://github.com/shallowdream204/BitDance.git
cd BitDance
conda create -n bitdance python=3.11 -y
conda activate bitdance
pip install -r requirements.txt
pip install flash_attn==2.8.2 --no-build-isolation
```
2️⃣ Download Model Weights
Run the following commands to download both T2I models.
```bash
hf download shallowdream204/BitDance-14B-64x --local-dir models/BitDance-14B-64x --max-workers=16
hf download shallowdream204/BitDance-14B-16x --local-dir models/BitDance-14B-16x --max-workers=16
```
3️⃣ T2I Inference (check here for the supported image resolutions)
```python
# example_t2i.py
from modeling.t2i_pipeline import BitDanceT2IPipeline

model_path = 'models/BitDance-14B-64x'
# model_path = 'models/BitDance-14B-16x'
device = 'cuda'
pipe = BitDanceT2IPipeline(model_path=model_path, device=device)

prompt = "A close-up portrait in a cinematic photography style, capturing a girl-next-door look on a sunny daytime urban street. She wears a khaki sweater, with long, flowing hair gently draped over her shoulders. Her head is turned slightly, revealing soft facial features illuminated by realistic, delicate sunlight coming from the left. The sunlight subtly highlights individual strands of her hair. The image has a Canon film-like color tone, evoking a warm nostalgic atmosphere."
image = pipe.generate(
    prompt=prompt,
    height=1024,
    width=1024,
    num_sampling_steps=50,  # adjust to 25 steps for faster inference, but may slightly reduce quality
    guidance_scale=7.5,
    num_images=1,
    seed=42,
)[0]
image.save("example.png")
```
Check here for detailed instructions for the diffusers version.
Example inference script for BitDance-14B-64x:
```bash
hf download BiliSakura/BitDance-14B-64x-diffusers --local-dir BitDance-14B-64x-diffusers --max-workers=16
cd BitDance-14B-64x-diffusers
python test_bitdance.py
```
🔥 Try the Huggingface Space demo to start playing with BitDance: BitDance-Demo
You can also run the demo locally:
```bash
python app.py
```
We provide the scripts for evaluation on DPG Bench and GenEval. More benchmark evaluation scripts are coming soon.
Evaluation of BitDance-14B-64x Model
```bash
bash scripts/eval/eval_bitdance_14b_64x.sh
```
Evaluation of BitDance-14B-16x Model
```bash
bash scripts/eval/eval_bitdance_14b_16x.sh
```
Note that you still need to follow the instructions in DPG Bench and GenEval to evaluate the results.
We are organizing the code related to data loading. Training instructions for BitDance are coming soon.
BitDance-14B-64x is distilled from BitDance-14B-16x using a small amount of high-quality data. It achieves approximately a 3x inference speedup (it needs 64 rather than 256 autoregressive steps at 1024px) while maintaining the same generation quality. Here we present a side-by-side comparison between BitDance-14B-64x and BitDance-14B-16x.
| Model | Open Source | DPG-Bench | GenEval | OneIG-Bench (EN) | OneIG-Bench (ZH) | TIIF-Bench (short) | TIIF-Bench (long) |
|---|---|---|---|---|---|---|---|
| GPT Image 1 | ✗ | 85.15 | 0.84 | 0.533 | 0.474 | 89.15 | 88.29 |
| Seedream 3.0 | ✗ | 88.27 | 0.84 | 0.530 | 0.528 | 86.02 | 84.31 |
| Qwen-Image | ✓ | 88.32 | 0.87 | 0.539 | 0.548 | 86.14 | 86.83 |
| Z-Image | ✓ | 88.14 | 0.84 | 0.546 | 0.535 | 80.20 | 83.01 |
| Z-Image-Turbo | ✓ | 84.86 | 0.82 | 0.528 | 0.507 | 77.73 | 80.05 |
| FLUX.1 [Dev] | ✓ | 83.84 | 0.66 | 0.434 | - | 71.09 | 71.78 |
| BAGEL | ✓ | 85.07 | 0.88 | 0.361 | 0.370 | 71.50 | 71.70 |
| Infinity | ✓ | 83.46 | 0.73 | - | - | 62.07 | 62.32 |
| Janus-Pro | ✓ | 84.19 | 0.80 | 0.267 | 0.240 | 66.50 | 65.01 |
| Show-o2 | ✓ | 86.14 | 0.76 | 0.308 | - | 59.72 | 58.86 |
| NextStep-1 | ✓ | 85.28 | 0.73 | 0.418 | - | - | - |
| GLM-Image | ✓ | 84.78 | - | 0.528 | 0.511 | 81.01 | 81.02 |
| BitDance | ✓ | 88.28 | 0.86 | 0.532 | 0.512 | 79.64 | 78.12 |
BitDance is licensed under the Apache 2.0 license.
If you find our work useful for your research, please consider citing our paper:
```bibtex
@article{ai2026bitdance,
  title   = {BitDance: Scaling Autoregressive Generative Models with Binary Tokens},
  author  = {Ai, Yuang and Han, Jiaming and Zhuang, Shaobin and Hu, Xuefeng and Yang, Ziyan and Yang, Zhenheng and Huang, Huaibo and Yue, Xiangyu and Chen, Hao},
  journal = {arXiv preprint arXiv:2602.14041},
  year    = {2026}
}
```