(CVPR 2026) Bridging Fidelity-Reality with Controllable One-Step Diffusion for Image Super-Resolution
[Paper] [Supplemental Material]
Hao Chen, Junyang Chen, Jinshan Pan, Jiangxin Dong
IMAG Lab, Nanjing University of Science and Technology
If CODSR is helpful for you, please help star the GitHub Repo. Thanks!
Welcome to visit our website (an information service platform dedicated to low-level vision): https://lowlevelcv.com/
An overview of our CODSR. (a) The region-adaptive generative prior activation method introduces gradient-driven adaptive noise to achieve the region-aware activation of generative priors. (b) The LQ-guided feature modulation module exploits the uncompressed LQ information to modulate the diffusion process for restoring faithful structural details. (c) The text-matching guidance strategy harnesses the region maps generated by Grounded-SAM2, which correspond to the textual descriptions, to constrain the text–image interaction regions within the cross-attention layers, thereby enabling effective textual guidance during generation.
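The text-matching guidance in (c) can be pictured with a toy sketch: a binary region map decides which image positions each text token is allowed to influence inside a cross-attention layer, by masking out disallowed positions before the softmax. This is an illustrative simplification of the idea, not the repository's implementation; the function name and the flat score/mask layout are our own assumptions.

```python
import math

def masked_cross_attention(scores, region_mask):
    """Region-constrained cross-attention weights (toy sketch).

    scores[i][j]     : similarity between image position i and text token j.
    region_mask[i][j]: 1 if token j may attend at position i, else 0
                       (assumed layout; each row must keep >= 1 allowed entry).
    Masked positions are set to -inf so they receive zero attention weight.
    """
    weights = []
    for row, mask_row in zip(scores, region_mask):
        masked = [s if m else float("-inf") for s, m in zip(row, mask_row)]
        mx = max(masked)  # finite because each row has an allowed entry
        exps = [math.exp(v - mx) for v in masked]
        z = sum(exps)
        weights.append([e / z for e in exps])
    return weights
```

For example, with equal scores but a mask that blocks the second token, all attention mass goes to the first token; with no blocking, the mass splits evenly.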
- FaithDiff (CVPR 2025) Paper | Code: Unleashing diffusion priors with feature alignment and joint VAE–LDM optimization for faithful SR.
- STCDiT (CVPR 2026) Paper | Code: A motion-aware VAE and an anchor-frame-guided DiT framework enable stable video restoration, even under complex camera motion.
- ✅ March 24, 2026. Our testing code and pre-trained model are now available!
- ✅ February 21, 2026. Our CODSR was accepted by CVPR 2026!
- ✅ December 14, 2025. Released the CODSR paper.
- Release the training code.
- Release the Gradio Demo and ComfyUI Integration.
Clone repo
git clone https://github.com/Chanson94/CODSR.git
cd CODSR
Install dependent packages
conda create -n CODSR python=3.10 -y
conda activate CODSR
pip install --upgrade pip
pip install -r requirements.txt
Download Models
- CODSR Pre-trained Model
- SD21 Base
- RAM
- DAPE
- Put them in the ./preset/models folder and update the corresponding paths in test.sh
sh test.sh
For easy comparison, we provide an inference-time testing script for CODSR.
sh test_inference_time.sh
For convenient comparison, we upload the benchmark results to Google Drive. These benchmarks were directly copied from StableSR, SUPIR, and FaithDiff. Additionally, we provide a script for computing IQA (Image Quality Assessment) metrics.
sh metrics.sh
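To illustrate what a full-reference IQA metric computes, here is a minimal PSNR function in pure Python operating on flat pixel lists. This is only a sketch for orientation; metrics.sh presumably relies on a dedicated IQA toolbox, and the function name and interface here are our own.

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two same-sized images.

    img_a, img_b: flat lists of pixel intensities (assumed layout).
    Returns inf for identical images, since the MSE is zero.
    """
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means the restored image is closer to the reference; perceptual metrics such as LPIPS or NIQE, commonly reported for diffusion-based SR, follow the same compare-and-score pattern but with learned or statistical features.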
If this work is helpful for your research, please consider citing the following BibTeX entry.
@inproceedings{codsr,
title={Bridging Fidelity-Reality with Controllable One-Step Diffusion for Image Super-Resolution},
author={Chen, Hao and Chen, Junyang and Pan, Jinshan and Dong, Jiangxin},
booktitle={CVPR},
year={2026}
}
If you have any questions, please feel free to reach out to us at chenhao_jxpyy@njust.edu.cn.
Our project is based on OSEDiff, CoMat, and Grounded SAM2. Thanks for their awesome work.
