The official implementation of the paper "Why and How: Knowledge-Guided Learning for Cross-Spectral Image Patch Matching" [Paper/arXiv]
- Original datasets
- VIS-NIR patch dataset [Link1] [Link2]
- VIS-LWIR patch dataset [Link1] [Link2]
- OS patch dataset [Link1] [Link2]
- The datasets we created from the original datasets (can be used directly in our demo)
- Download the dataset:
  1. Click "download datasets".
  2. Unzip the downloaded archive into the root directory of the project.
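The unzip step can also be scripted. Below is a minimal sketch using only the Python standard library, assuming the download is a ZIP archive (the archive filename in the example is hypothetical):

```python
import zipfile
from pathlib import Path

def extract_dataset(archive_path, dest="."):
    """Extract the downloaded dataset archive into `dest`.

    Returns the sorted top-level entries of the archive, so you can
    confirm what was placed in the project root.
    """
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest)
        return sorted({Path(name).parts[0] for name in zf.namelist()})

# Example (hypothetical archive name):
# print(extract_dataset("KGL-Net_datasets.zip"))
```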
- Create an Anaconda virtual environment.

```shell
conda create -n KGL-Net python=3.8
conda activate KGL-Net
```
- Configure the running environment. (If a dependency version error appears while installing `numpy==1.21.1`, just ignore it; it will not affect subsequent operations.)

```shell
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
pip install segmentation_models_pytorch -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install PyWavelets -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install scikit-image -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install albumentations==1.3.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install scikit-learn matplotlib thop h5py SimpleITK medpy yacs torchinfo
pip install opencv-python -i https://pypi.doubanio.com/simple
pip install imgaug -i https://pypi.doubanio.com/simple
pip install numpy==1.21.1 -i https://pypi.doubanio.com/simple
```
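As a quick sanity check that the environment resolved correctly, you can try importing the installed packages. This sketch assumes the usual import names (e.g. `cv2` for `opencv-python`, `skimage` for `scikit-image`); adjust the list to your setup:

```python
import importlib

def check_imports(packages):
    """Return {import_name: version or 'MISSING'} for each package."""
    status = {}
    for pkg in packages:
        try:
            mod = importlib.import_module(pkg)
            status[pkg] = getattr(mod, "__version__", "installed")
        except ImportError:
            status[pkg] = "MISSING"
    return status

if __name__ == "__main__":
    # Import names assumed from the pip installs above.
    for pkg, ver in check_imports(
        ["torch", "torchvision", "segmentation_models_pytorch",
         "cv2", "skimage", "albumentations", "numpy"]).items():
        print(f"{pkg}: {ver}")
```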
- Training the model.

  The default dataset is the OS patch dataset. You can modify the default settings directly in the code, or use the following command.

```shell
python train_KGL-Net.py --train_set='os_train' --epoch_max=200 --dim_desc=128 --lr_scheduler='None'
# 'os_train': OS patch dataset; 'country': VIS-NIR patch dataset; 'lwir_train': VIS-LWIR patch dataset
```
- Testing the model.

  The default dataset is the OS patch dataset. You can modify the default settings directly in the code, or use the following command.

```shell
python test_KGL-Net.py --train_set='os_train' --train_out_fold_name='train_KGL-Net_HyNet_os_train_epochs_200_sz_64_pt_256_pat_2_dim_128_alpha_2_margin_1_2_drop_0_3_lr_0_005_Adam_None_aug'
# Usage: python test_KGL-Net.py --train_set='***' --train_out_fold_name='***'
# '***' denotes a placeholder; --train_out_fold_name is the folder name where the generated model is located.
```
- Quantitative Results on the VIS-NIR patch dataset (VIS-NIR):
- Quantitative Results on the VIS-LWIR patch dataset (VIS-LWIR):
- Quantitative Results on the OS patch dataset (VIS-SAR):
- Qualitative Results:
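Quantitative results on these patch-matching benchmarks are conventionally reported as FPR95 (false positive rate at 95% true-positive recall). The sketch below is a generic, dependency-free illustration of that metric, not necessarily this repo's exact evaluation code:

```python
def fpr95(labels, distances):
    """False positive rate at 95% true-positive recall.

    labels:    1 for matching pairs, 0 for non-matching pairs.
    distances: descriptor distances (smaller = more similar).
    """
    pairs = sorted(zip(distances, labels))  # ascending distance
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    tp = fp = 0
    for _, lab in pairs:
        if lab == 1:
            tp += 1
        else:
            fp += 1
        if tp >= 0.95 * n_pos:  # 95% of positives recovered
            return fp / n_neg
    return 1.0
```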
If you find this repo helpful, please give us a 🤩star🤩. Please consider citing KGL-Net if it benefits your project.
BibTeX reference is as follows:
```bibtex
@misc{yu2024howknowledgeguidedlearningcrossspectral,
      title={Why and How: Knowledge-Guided Learning for Cross-Spectral Image Patch Matching},
      author={Chuang Yu and Yunpeng Liu and Jinmiao Zhao and Xiangyu Yue},
      year={2024},
      eprint={2412.11161},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.11161},
}
```
Plain-text reference is as follows:
Chuang Yu, Yunpeng Liu, Jinmiao Zhao, and Xiangyu Yue. Why and How: Knowledge-Guided Learning for Cross-Spectral Image Patch Matching. arXiv preprint arXiv:2412.11161, 2024.
- My homepage: [YuChuang]




