# SDSFusion: A Semantic-Aware Infrared and Visible Image Fusion Network for Degraded Scenes

This is the official code for "SDSFusion: A Semantic-Aware Infrared and Visible Image Fusion Network for Degraded Scenes".
## Environment

Create the conda environment from the provided file:

```
conda env create -f SDSFusion.yaml
```
## Train

First, download the training and evaluation datasets. Training set: train-google drive. Evaluation set: eval-google drive.
Second, set the `test_only` option to `False` in the two `option.py` files and in `main.py`, e.g.,

```python
parser.add_argument('--test_only', action='store_true', default=False, help='set this option to test the model')
```
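As a minimal, self-contained sketch of how this flag behaves (standard `argparse`, mirroring the line above): because the action is `store_true`, the `default` decides the mode whenever the flag is absent from the command line.

```python
import argparse

# Sketch of the test_only flag from option.py / main.py.
parser = argparse.ArgumentParser()
parser.add_argument('--test_only', action='store_true', default=False,
                    help='set this option to test the model')

# With default=False and no flag passed, training mode is selected.
args = parser.parse_args([])
print(args.test_only)  # False

# Passing --test_only (or editing default=True) switches to test mode.
args = parser.parse_args(['--test_only'])
print(args.test_only)  # True
```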
Third, run `main_train.py` for enhancement and `main.py` for fusion.
During training, the corresponding `train_model` folder stores the training weights.
## Test

The checkpoints are stored in `pretrain` and can be downloaded from: ckpt-google drive.
Set `test_only` to `True`, e.g.,

```python
parser.add_argument('--test_only', action='store_true', default=True, help='set this option to test the model')
```
To get the coarse enhancement results, run `main_test.py` in `enhance_stage1`; the outputs are stored in `./datasets/test/LLVIP/vi_en-s1`.
To get the fine enhancement results, run `main_test.py` in `enhance_stage2`; the outputs are stored in `./datasets/test/LLVIP/vi_en-s2`.
To get the fused results, set the `stage` option (`stage1`/`stage2`) in `main.py` under `fusion`; the fusion results are placed in `./datasets/test/LLVIP/If-s1` or `./datasets/test/LLVIP/If-s2`, e.g.,

```python
parser.add_argument('--stage', type=str, default='stage1')  # or 'stage2'
```
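The mapping from the `stage` option to the output directory can be sketched as follows. This is an illustration, not the repository's actual code; only the directory names are taken from the paths above.

```python
import argparse

# Illustration: how --stage selects the fusion output directory.
parser = argparse.ArgumentParser()
parser.add_argument('--stage', type=str, default='stage1')  # or 'stage2'
args = parser.parse_args([])

# 'stage1' results go to If-s1, 'stage2' results to If-s2.
suffix = 's1' if args.stage == 'stage1' else 's2'
out_dir = f'./datasets/test/LLVIP/If-{suffix}'
print(out_dir)  # ./datasets/test/LLVIP/If-s1
```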
## Requirements

```
numpy=1.15.0
opencv-python=4.1.0.25
python=3.7.0
torch=1.8.0
torchvision=0.9.0
```
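An optional sanity check (not part of the repository) that the interpreter matches the Python pin above:

```python
import sys

# The pins above target Python 3.7; warn early on older interpreters.
major, minor = sys.version_info[:2]
print(f'Python {major}.{minor}')
if (major, minor) < (3, 7):
    raise RuntimeError('SDSFusion targets Python 3.7+; please upgrade.')
```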
## Citation

```bibtex
@article{11014600,
  title={SDSFusion: A Semantic-Aware Infrared and Visible Image Fusion Network for Degraded Scenes},
  author={Chen, Jun and Yang, Liling and Yu, Wei and Gong, Wenping and Cai, Zhanchuan and Ma, Jiayi},
  journal={IEEE Transactions on Image Processing},
  volume={34},
  pages={3139-3153},
  year={2025}
}
```