FideDiff: Efficient Diffusion Model for High-Fidelity Image Motion Deblurring

Xiaoyang Liu, Zhengyan Zhou, Zihang Xu, Jiezhang Cao, Zheng Chen, and Yulun Zhang, "FideDiff: Efficient Diffusion Model for High-Fidelity Image Motion Deblurring", arXiv, 2025.


[arXiv] [supplementary material]


Abstract: Recent advancements in image motion deblurring, driven by CNNs and transformers, have made significant progress. Large-scale pre-trained diffusion models, which encode rich real-world priors, have shown great promise for high-quality image restoration tasks such as deblurring, demonstrating stronger generative capabilities than CNN- and transformer-based methods. However, challenges such as prohibitive inference time and compromised fidelity still limit the full potential of diffusion models. To address this, we introduce FideDiff, a novel single-step diffusion model designed for high-fidelity deblurring. We reformulate motion deblurring as a diffusion-like process where each timestep represents a progressively blurred image, and we train a consistency model that aligns all timesteps to the same clean image. By reconstructing training data with matched blur trajectories, the model learns temporal consistency, enabling accurate one-step deblurring. We further enhance model performance by integrating Kernel ControlNet for blur kernel estimation and introducing adaptive timestep prediction. Our model achieves superior performance on full-reference metrics, surpassing previous diffusion-based methods and matching the performance of other state-of-the-art models. FideDiff offers a new direction for applying pre-trained diffusion models to high-fidelity image restoration tasks, establishing a robust baseline for further advancing diffusion models in real-world industrial applications.
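As a rough illustration of the single-step idea described above (not the authors' code; `model` and `predict_t` are hypothetical stand-ins for the consistency model and the adaptive timestep predictor), inference could be sketched as:

```python
import torch

@torch.no_grad()
def one_step_deblur(model, predict_t, blurry):
    """Sketch of single-step deblurring: first estimate how blurred the
    input is (adaptive timestep prediction), then map it to the clean
    image with a single forward pass of the consistency-trained model."""
    t = predict_t(blurry)      # estimated blur level / diffusion timestep
    return model(blurry, t)    # one network evaluation, no iterative sampling
```

The point of the sketch is that, unlike standard diffusion samplers, there is no multi-step denoising loop: one network evaluation produces the restored image.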


πŸ’‘ Methodology

In this work, we reformulate the forward/backward processes, build a strong foundation model, and augment it with a Kernel ControlNet for detail restoration. A consistency-training strategy based on matched blur trajectories further enables one-step sampling, as illustrated below:

(Figure: the diffusion-like forward/backward process over a matched blur trajectory)

The whole pipeline:

(Figure: the overall FideDiff pipeline)
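A minimal sketch of what such consistency training could look like (hypothetical names and losses; the paper's actual objectives and architecture may differ): every point on a matched blur trajectory is trained to reconstruct the same clean image, and predictions from different timesteps are pulled toward each other.

```python
import random
import torch
import torch.nn.functional as F

def consistency_step(model, x0, blur_traj, optimizer):
    """One hypothetical training step. `blur_traj` is a list of
    progressively blurred versions of the clean image `x0` (a matched
    blur trajectory); `model(x, t)` predicts the clean image."""
    t1, t2 = random.sample(range(len(blur_traj)), 2)
    pred1 = model(blur_traj[t1], t1)
    pred2 = model(blur_traj[t2], t2)
    # fidelity: every timestep should map back to the same clean image
    loss = F.l1_loss(pred1, x0) + F.l1_loss(pred2, x0)
    # temporal consistency: predictions across timesteps should agree
    loss = loss + F.mse_loss(pred1, pred2.detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because all timesteps target the same clean image, a model trained this way can be evaluated at any single timestep, which is what makes one-step sampling possible.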

πŸ”Ž Results

Quantitative Results on Full-Reference Metrics.


Visual Results on GoPro and RealBlur. For more visualizations, please refer to supplementary material.

πŸ“Ž Citation

@article{liu2025fidediffefficientdiffusionmodel,
  title={FideDiff: Efficient Diffusion Model for High-Fidelity Image Motion Deblurring},
  author={Liu, Xiaoyang and Zhou, Zhengyan and Xu, Zihang and Cao, Jiezhang and Chen, Zheng and Zhang, Yulun},
  journal={arXiv preprint arXiv:2510.01641},
  year={2025}
}

πŸ”— Contents

  • Datasets
  • Models
  • Training
  • Testing
  • Citation

About

[ICLR 2026] FideDiff: Efficient Diffusion Model for High-Fidelity Image Motion Deblurring
