📖 Learning Efficient Deep Discriminative Spatial and Temporal Networks for Video Deblurring


Jinshan Pan, Long Sun, Boming Xu, Jiangxin Dong, and Jinhui Tang
IMAG Lab, Nanjing University of Science and Technology


This repo is the official implementation of "Learning Efficient Deep Discriminative Spatial and Temporal Networks for Video Deblurring".

DSTNet+ is an extension of DSTNet.

📜 News

  • 2025.03.25: All pretrained models and visual results are available.
  • 2025.03.25: The paper can be found here.
  • 2025.03.14: This paper is accepted by TPAMI.
  • 2024.01.08: This repo is created.

🚀 Quick Start

1. Environment Setup

  • Python 3.9, PyTorch == 1.13
  • BasicSR 1.4.2
  • Platforms: Ubuntu 18.04, CUDA 11
git clone https://github.com/sunny2109/DSTNet-plus.git
cd DSTNet-plus
conda create -n dstnetplus python=3.9
conda activate dstnetplus
# Install dependent packages
pip install -r requirements.txt
# Install CuPy
# Please make sure that the installed CuPy version matches your existing CUDA installation!
pip install cupy-cuda11x
# Install BasicSR
python setup.py develop
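
A CuPy build that does not match the installed CUDA toolkit may fail at import time or only when its kernels are first launched. As a quick optional sanity check (not part of the official setup), you can confirm that PyTorch and CuPy report the same CUDA runtime:

# Optional sanity check: CuPy should match the CUDA runtime PyTorch was built against
python -c "import torch, cupy; print('torch CUDA:', torch.version.cuda); print('cupy CUDA runtime:', cupy.cuda.runtime.runtimeGetVersion())"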

2. Download datasets

The training and testing sets used in the paper can be downloaded from the links below (a note on the expected directory layout follows the table):

Training Set   Pretrained Model                      Visual Results
GoPro          Hugging Face | Github | Baidu Cloud   Hugging Face or Baidu Cloud
DVD            Hugging Face | Github | Baidu Cloud   Hugging Face or Baidu Cloud
BSD            Hugging Face | Github | Baidu Cloud   Hugging Face or Baidu Cloud
DAVIS-2017     Hugging Face | Github | Baidu Cloud   Hugging Face or Baidu Cloud
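
After downloading, unpack the data under ./datasets/ so the option files can locate it. The exact folder structure depends on the archives you download, so treat the layout below as an illustrative assumption and adjust it (together with the dataset paths in the YAML option files) as needed:

# Hypothetical layout — verify against the archives you actually download
# datasets/
# ├── GoPro/
# │   ├── train/   # paired blurry/sharp frames, one sub-folder per video sequence
# │   └── test/
# ├── DVD/
# ├── BSD/
# └── DAVIS-2017/
ls ./datasets   # confirm the folders are where the option files expect them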

3. Run the training code

# train DSTNetPlus on GoPro dataset
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/train_base_GoPro.yml --launcher pytorch

# train DSTNetPlus on DVD dataset
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/train_base_DVD.yml --launcher pytorch

# train DSTNetPlus on BSD dataset
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/train_base_BSD1ms.yml --launcher pytorch
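
The commands above assume an 8-GPU node; adjust CUDA_VISIBLE_DEVICES and --nproc_per_node to match your hardware. For quick debugging on a single GPU, the BasicSR training script can also be run without the distributed launcher (you may need to reduce the batch size in the option file to fit memory):

# Single-GPU (non-distributed) training on GoPro — a minimal sketch, not the paper's training setup
CUDA_VISIBLE_DEVICES=0 python basicsr/train.py -opt options/train/train_base_GoPro.yml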

4. Quick inference

  • Download the pretrained models.

Please download the pretrained models and put them in ./checkpoints/.

  • Download the testing dataset.

Please download the test dataset and put it in ./datasets/.

  • Run the following commands:
python basicsr/test.py -opt options/test/test_base_GoPro.yml
cd results
python merge_full.py
  • The test results will be saved in './results'. For testing on the other benchmarks, see the note below.
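
The option file above covers GoPro. Test configs for the other benchmarks, if provided, presumably follow the same naming pattern as the training configs; the file names below are assumptions, so check options/test/ for the actual names:

# Hypothetical option file names — verify against the contents of options/test/
python basicsr/test.py -opt options/test/test_base_DVD.yml
python basicsr/test.py -opt options/test/test_base_BSD1ms.yml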

👀 Results

We achieve state-of-the-art performance on a range of video deblurring benchmarks. Detailed results can be found in the paper. All visual results of DSTNetPlus can be downloaded here.

Click to expand
  • Model efficiency (PSNR vs. Runtime vs. Params)

  • Quantitative evaluations on the GoPro and DVD datasets

  • Quantitative evaluations on the BSD dataset

  • Quantitative evaluations on the Set8 dataset

  • Deblurred results on GoPro dataset

  • Deblurred results on DVD dataset

  • Deblurred results on Real-world blurry frames

📧 Contact

If you have any questions, please feel free to reach out to us at cs.longsun@gmail.com.

📎 Citation

If you find our work helpful for your research, please consider giving a star ⭐ and citation 📝

@article{DSTNetPlus,
  title={Learning Efficient Deep Discriminative Spatial and Temporal Networks for Video Deblurring},
  author={Pan, Jinshan and Sun, Long and Xu, Boming and Dong, Jiangxin and Tang, Jinhui},
  journal={TPAMI},
  year={2025}
}
