
The official complete code for the paper "Why and How: Knowledge-Guided Learning for Cross-Spectral Image Patch Matching" [Paper/arXiv]

Recently, cross-spectral image patch matching based on feature relation learning has attracted extensive attention. However, existing methods have gradually run into performance bottlenecks. To address this challenge, we make the first attempt to explore a stable and efficient bridge between descriptor learning and metric learning, and construct a knowledge-guided learning network (KGL-Net), which achieves substantial performance improvements without resorting to complex network structures.
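For intuition, descriptor learning trains an embedding whose distances separate matching from non-matching patches, while metric learning trains a network that directly scores feature pairs. The following PyTorch sketch shows the general shape of such a two-branch design; it is a minimal illustration under our own assumptions, not the actual KGL-Net architecture (see the paper and code for that):

    # Schematic two-branch design: a descriptor branch and a metric branch
    # sharing one backbone. A generic illustration only, not the actual
    # KGL-Net architecture.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoBranchMatcher(nn.Module):
        def __init__(self, dim_desc: int = 128):
            super().__init__()
            self.backbone = nn.Sequential(          # shared feature extractor
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, dim_desc),
            )
            self.metric = nn.Sequential(            # metric branch: scores a feature pair
                nn.Linear(2 * dim_desc, 256), nn.ReLU(),
                nn.Linear(256, 1),
            )

        def forward(self, patch_a, patch_b):
            fa, fb = self.backbone(patch_a), self.backbone(patch_b)
            # Descriptor branch: L2-normalized embeddings compared by distance.
            dist = (F.normalize(fa, dim=1) - F.normalize(fb, dim=1)).pow(2).sum(dim=1)
            # Metric branch: a learned matching score on the concatenated features.
            score = self.metric(torch.cat([fa, fb], dim=1)).squeeze(-1)
            return dist, score

Trained jointly, each branch can act as a teacher for the other; a bridge of this kind, made stable and efficient, is what the paper builds.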

KGL-Net

To the best of our knowledge, our KGL-Net is the first to implement hard negative sample mining for metric networks, which brings significant performance improvements.
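As a rough sketch of what in-batch hard negative sample mining for a metric network can look like: score every cross-spectral combination in a batch, then treat the highest-scoring non-matching pair for each anchor as its hard negative. The metric head, batch layout, and mining rule below are illustrative assumptions; the actual implementation is in train_KGL-Net.py.

    # Minimal sketch of in-batch hard negative mining for a metric network.
    # The metric head and mining rule are illustrative assumptions.
    import torch
    import torch.nn as nn

    B, D = 8, 128                                # batch size, feature dimension
    metric_head = nn.Sequential(                 # hypothetical metric network
        nn.Linear(2 * D, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )

    feat_vis = torch.randn(B, D)                 # features of the VIS patches
    feat_nir = torch.randn(B, D)                 # features of the paired NIR patches

    # Score every (i, j) combination: the diagonal holds the matching pairs.
    grid = torch.cat([feat_vis.unsqueeze(1).expand(B, B, D),
                      feat_nir.unsqueeze(0).expand(B, B, D)], dim=-1)
    scores = metric_head(grid).squeeze(-1)       # (B, B) matching scores

    # Hard negatives: the non-matching pair that looks most like a match.
    masked = scores - torch.eye(B) * 1e9         # exclude positives on the diagonal
    hard_neg_idx = masked.argmax(dim=1)          # (B,) hardest negative per anchor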

Datasets

  1. Original datasets
  2. The datasets we created from the original datasets (can be used directly in our demo)

How to use our code

  1. Download the dataset.

        Click download datasets

        Unzip the downloaded compressed package to the root directory of the project.

  2. Create an Anaconda virtual environment.

     conda create -n KGL-Net python=3.8 
     conda activate KGL-Net 
    
  3. Configure the running environment. (If a dependency version error appears when installing "numpy==1.21.1", simply ignore it; it will not affect subsequent operations.)

     pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
     pip install segmentation_models_pytorch -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install PyWavelets -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install scikit-image -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install albumentations==1.3.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
     pip install scikit-learn matplotlib thop h5py SimpleITK medpy yacs torchinfo
     pip install opencv-python -i https://pypi.doubanio.com/simple
     pip install imgaug -i https://pypi.doubanio.com/simple
     pip install numpy==1.21.1 -i https://pypi.doubanio.com/simple
    
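     Optionally, you can sanity-check the environment before training. This one-liner is our addition, not part of the repo; it only prints the installed versions and CUDA availability.

     python -c "import torch, torchvision, numpy; print(torch.__version__, torchvision.__version__, numpy.__version__, torch.cuda.is_available())"
     #### expected: 1.13.1+cu116 0.14.1+cu116 1.21.1 True ("True" requires a GPU visible to the CUDA 11.6 build)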
  4. Training the model.

    The default dataset is the OS patch dataset. You can modify the default settings directly in the code, or use the following command.

    python train_KGL-Net.py --train_set='os_train'  --epoch_max=200  --dim_desc=128  --lr_scheduler='None'
    #### 'os_train': OS patch dataset,   'country': VIS-NIR patch dataset,   'lwir_train': VIS-LWIR patch dataset
    
  5. Testing the model.

    The default dataset is the OS patch dataset. You can modify the default settings directly in the code, or use the following command.

    python test_KGL-Net.py  --train_set='os_train'  --train_out_fold_name='train_KGL-Net_HyNet_os_train_epochs_200_sz_64_pt_256_pat_2_dim_128_alpha_2_margin_1_2_drop_0_3_lr_0_005_Adam_None_aug'
    #### python test_KGL-Net.py  --train_set='×××'  --train_out_fold_name='***'    # '***' denotes the folder name where the generated model is located. 
    

Results

  • Quantitative Results on the VIS-NIR patch dataset (VIS-NIR):

Results on the VIS-NIR patch dataset

  • Quantitative Results on the VIS-LWIR patch dataset (VIS-LWIR):

Results on the VIS-LWIR patch dataset

  • Quantitative Results on the OS patch dataset (VIS-SAR):

Results on the OS patch dataset

  • Qualitative Results:

Visualization results

In each cross-spectral scene, the rows from top to bottom show patch pairs that are: correctly judged as matching, correctly judged as non-matching, misjudged as matching, and misjudged as non-matching.

Citation

If you find this repo helpful, please give us a 🤩star🤩. Please consider citing KGL-Net if it benefits your project.

BibTeX reference is as follows:

@misc{yu2024howknowledgeguidedlearningcrossspectral,
      title={Why and How: Knowledge-Guided Learning for Cross-Spectral Image Patch Matching}, 
      author={Chuang Yu and Yunpeng Liu and Jinmiao Zhao and Xiangyu Yue},
      year={2024},
      eprint={2412.11161},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.11161}, 
}

Plain-text reference is as follows:

Chuang Yu, Yunpeng Liu, Jinmiao Zhao, and Xiangyu Yue. Why and How: Knowledge-Guided Learning for Cross-Spectral Image Patch Matching. arXiv preprint arXiv:2412.11161, 2024.

Other link

  1. My homepage: [YuChuang]
