Zheng Chen, Mingde Zhou, Jinpei Guo, Jiale Yuan, Yifei Ji, and Yulun Zhang, "Steering One-Step Diffusion Model with Fidelity-Rich Decoder for Fast Image Compression", AAAI, 2026
[project] [arXiv] [supplementary material] [dataset] [pretrained models]
- 2025-11-08: SODEC is accepted at AAAI 2026. 🎉🎉🎉
- 2025-08-07: This repo is released.
Abstract: Diffusion-based image compression has demonstrated impressive perceptual performance. However, it suffers from two critical drawbacks: (1) excessive decoding latency due to multi-step sampling, and (2) poor fidelity resulting from over-reliance on generative priors. To address these issues, we propose SODEC, a novel single-step diffusion image compression model. We argue that in image compression, a sufficiently informative latent renders multi-step refinement unnecessary. Based on this insight, we leverage a pre-trained VAE-based model to produce latents with rich information, and replace the iterative denoising process with a single-step decoding. Meanwhile, to improve fidelity, we introduce the fidelity guidance module, encouraging outputs that are faithful to the original image. Furthermore, we design the rate annealing training strategy to enable effective training under extremely low bitrates. Extensive experiments show that SODEC significantly outperforms existing methods, achieving superior rate–distortion–perception performance. Moreover, compared to previous diffusion-based compression models, SODEC improves decoding speed by more than 20×.
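The core idea above (a sufficiently informative latent makes iterative denoising unnecessary) can be illustrated with a toy NumPy sketch. This is not the authors' code; `encode`, `single_step_decode`, and `multi_step_decode` are hypothetical stand-ins for the learned VAE encoder and the two decoding strategies:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image):
    # Stand-in for the VAE-based encoder producing an information-rich latent.
    return 0.5 * image  # hypothetical; the real model uses a learned VAE

def single_step_decode(latent):
    # Single-step decoding: map the informative latent straight to an image,
    # replacing the iterative denoising process.
    return 2.0 * latent  # hypothetical inverse of the toy encoder

def multi_step_decode(latent, steps=50):
    # Conventional diffusion decoding: start from noise, refine iteratively.
    x = rng.standard_normal(latent.shape)
    for _ in range(steps):
        x = x + 0.1 * (2.0 * latent - x)  # toy denoising update toward target
    return x

image = rng.standard_normal((8, 8))
z = encode(image)
one_step = single_step_decode(z)   # one function evaluation
many_step = multi_step_decode(z)   # 50 evaluations to approach the same target
print(np.allclose(one_step, image))  # prints True
```

In this toy setting the single-step path recovers the image in one evaluation, while the iterative path needs many updates to approach it, which is the intuition behind the >20× decoding speedup claimed above.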
- Release testing and training code.
- Release pre-trained models.
- Provide WebUI.
- Provide HuggingFace demo.
- Datasets
- Models
- Training
- Testing
- Results
- Acknowledgements
We achieve impressive performance on image compression tasks.
Qualitative Results (click to expand)
- Results in Fig. 5 of the main paper
More Qualitative Results
- Rate-Distortion-Perception Results (Fig. 4 of the supplementary material)
- Visual Comparison Results (Fig. 5 of the supplementary material)
- Extended Qualitative Results (Fig. 6 of the supplementary material)
- Additional Results on DIV2K-val (Fig. 7 of the supplementary material)
- Additional Results on Kodak (Fig. 7 of the supplementary material)
If you find the code helpful in your research or work, please cite the following paper.
@inproceedings{chen2026steering,
  title={Steering One-Step Diffusion Model with Fidelity-Rich Decoder for Fast Image Compression},
  author={Chen, Zheng and Zhou, Mingde and Guo, Jinpei and Yuan, Jiale and Ji, Yifei and Zhang, Yulun},
  booktitle={AAAI},
  year={2026}
}