(CVPR 2026) Bridging Fidelity-Reality with Controllable One-Step Diffusion for Image Super-Resolution

[Paper]   [Supplemental Material]

Hao Chen, Junyang Chen, Jinshan Pan, Jiangxin Dong
IMAG Lab, Nanjing University of Science and Technology

If CODSR is helpful for you, please help star the GitHub Repo. Thanks!

Welcome to visit our website (an information service platform dedicated to low-level vision) at https://lowlevelcv.com/


An overview of our CODSR. (a) The region-adaptive generative prior activation method introduces gradient-driven adaptive noise to achieve the region-aware activation of generative priors. (b) The LQ-guided feature modulation module exploits the uncompressed LQ information to modulate the diffusion process for restoring faithful structural details. (c) The text-matching guidance strategy harnesses the region maps generated by Grounded-SAM2, which correspond to the textual descriptions, to constrain the text–image interaction regions within the cross-attention layers, thereby enabling effective textual guidance during generation.
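As a toy illustration of the idea in (a), gradient-driven region-adaptive noise, the sketch below scales per-pixel noise by local gradient magnitude so that textured regions receive more noise than flat ones. This is our simplified reading for intuition only, not the paper's actual formulation; `gradient_magnitude` and `add_region_adaptive_noise` are hypothetical helpers.

```python
import random

def gradient_magnitude(img):
    """Forward-difference gradient magnitude for a 2-D grid of floats."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = img[y][x + 1] - img[y][x] if x + 1 < w else 0.0
            dy = img[y + 1][x] - img[y][x] if y + 1 < h else 0.0
            g[y][x] = (dx * dx + dy * dy) ** 0.5
    return g

def add_region_adaptive_noise(img, base_sigma=0.1, seed=0):
    """Scale per-pixel Gaussian noise by normalized gradient magnitude:
    textured regions (large gradients) get more noise to activate the
    generative prior, while flat regions stay close to the input."""
    rng = random.Random(seed)
    grad = gradient_magnitude(img)
    gmax = max(max(row) for row in grad) or 1.0  # avoid division by zero
    return [[p + rng.gauss(0.0, base_sigma * g / gmax)
             for p, g in zip(prow, grow)]
            for prow, grow in zip(img, grad)]
```

Note that a perfectly flat image has zero gradient everywhere, so it passes through unchanged, which matches the intuition that smooth regions should not be perturbed.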

😊 You may also want to check our related works:

  1. FaithDiff (CVPR 2025) Paper | Code
    Unleashing diffusion priors with feature alignment and joint VAE–LDM optimization for faithful SR.

  2. STCDiT (CVPR 2026) Paper | Code
    A framework with a motion-aware VAE and anchor-frame-guided DiT for stable video restoration, even under complex camera motion.

🚩 New Features/Updates

  • ✅ March 24, 2026. Our testing code and pre-trained model are now available!
  • ✅ February 21, 2026. CODSR was accepted to CVPR 2026!
  • ✅ December 14, 2025. Released the CODSR paper.

To do

  • Release the training code.
  • Release the Gradio Demo and ComfyUI Integration.

🔧 Dependencies and Installation

Clone repo

git clone https://github.com/Chanson94/CODSR.git
cd CODSR

Install dependent packages

conda create -n CODSR python=3.10 -y
conda activate CODSR
pip install --upgrade pip
pip install -r requirements.txt

Download Models

⚡ Quick Inference

sh test.sh

🚀 Calculate Inference Time

For easy comparison, we provide an inference-time testing script for CODSR.

sh test_inference_time.sh
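The script above presumably wraps the model call with the usual warm-up-then-average timing pattern. A minimal stdlib sketch of that pattern is shown below; `run_model` is a hypothetical stand-in for the actual CODSR inference call, not the repo's code.

```python
import time

def run_model(x):
    # Hypothetical stand-in for the one-step diffusion inference call.
    return [v * 2 for v in x]

def average_inference_time(fn, inp, warmup=3, iters=10):
    """Run a few warm-up passes (caches, lazy init), then average timed runs."""
    for _ in range(warmup):
        fn(inp)
    start = time.perf_counter()
    for _ in range(iters):
        fn(inp)
    return (time.perf_counter() - start) / iters

if __name__ == "__main__":
    t = average_inference_time(run_model, list(range(1024)))
    print(f"avg inference time: {t * 1e6:.1f} us")
```

With a real PyTorch model on GPU, also call `torch.cuda.synchronize()` before reading the clock, since CUDA kernels are launched asynchronously and wall-clock timing would otherwise undercount.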

📏 Benchmark Results

For convenient comparison, we upload the benchmark results to Google Drive. These benchmarks are copied directly from StableSR, SUPIR, and FaithDiff. We also provide a script for computing IQA (Image Quality Assessment) metrics.

sh metrics.sh
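The metrics script presumably computes standard IQA measures. As an illustration of one of the fidelity metrics commonly reported for SR, here is a self-contained PSNR computation over 8-bit pixel sequences; this is a generic sketch for intuition, not the repo's script.

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    if len(ref) != len(test):
        raise ValueError("images must have the same number of pixels")
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

if __name__ == "__main__":
    ref = [100, 120, 140, 160]
    noisy = [101, 119, 142, 158]
    print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```

Perceptual metrics such as LPIPS or NIQE require learned models and are typically computed with a dedicated IQA library rather than by hand.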

Citation

If this work is helpful for your research, please consider citing the following BibTeX entry.

@inproceedings{codsr,
  title={Bridging Fidelity-Reality with Controllable One-Step Diffusion for Image Super-Resolution},
  author={Chen, Hao and Chen, Junyang and Pan, Jinshan and Dong, Jiangxin},
  booktitle={CVPR},
  year={2026}
}

Contact

If you have any questions, please feel free to reach out to us at chenhao_jxpyy@njust.edu.cn.

Acknowledgments

Our project is built on OSEDiff, CoMat, and Grounded SAM2. Thanks for their awesome work.
