The official implementation for "PHTrack: Prompting for Hyperspectral Video Tracking"

Abstract

Hyperspectral (HS) video captures continuous spectral information of objects, enhancing material identification in tracking tasks. It is expected to overcome the inherent limitations of RGB and multi-modal tracking, such as finite spectral cues and cumbersome modality alignment. However, HS tracking faces challenges such as data anxiety, band gaps, and huge data volumes. In this study, inspired by prompt learning in language models, we propose the Prompting for Hyperspectral Video Tracking (PHTrack) framework. PHTrack learns prompts to adapt foundation models, mitigating data anxiety and enhancing performance and efficiency. First, the modality prompter (MOP) is proposed to capture rich spectral cues and bridge band gaps for improved model adaptation and knowledge enhancement. Additionally, the distillation prompter (DIP) is developed to refine cross-modal features. PHTrack follows feature-level fusion, managing huge data volumes more effectively than traditional decision-level fusion schemes. Extensive experiments validate the proposed framework, offering valuable insights for future research. The code and data will be available at https://github.com/YZCU/PHTrack.

Install

git clone https://github.com/YZCU/PHTrack.git

Environment

  • CUDA 11.8
  • Python 3.9.18
  • PyTorch 2.0.0
  • Torchvision 0.15.0
  • numpy 1.25.0
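
A minimal setup sketch matching the versions above. This assumes conda plus pip with the CUDA 11.8 wheel index; the environment name phtrack is illustrative:

conda create -n phtrack python=3.9.18
conda activate phtrack
pip install torch==2.0.0 torchvision==0.15.0 --index-url https://download.pytorch.org/whl/cu118
pip install numpy==1.25.0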

Usage

  • Download the RGB and hyperspectral training/test datasets: GOT-10K, DET, LaSOT, COCO, YOUTUBEBB (code: v7s6), VID, and HOTC.
  • Download the pretrained model (code: abcd) and place it in pretrained_models/.
  • Train PHTrack on top of the foundation model (code: abcd); a command-line sketch follows this list.
  • The well-trained PHTrack model (code: abcd) will be released.
  • Trained models are saved to tools/snapshot.
  • Test the model; results are saved to tools/results/OTB100.
  • For evaluation, download the evaluation benchmark toolkit and vlfeat for more precise performance evaluation.
  • Refer to HOTC for the evaluation protocol.
  • To evaluate the PHTrack tracker, run \tracker_benchmark_v1.0\perfPlot.m.
  • Tracking results for hotc20test are provided in PHTrack\tracking_results\hotc20test; additional evaluation results are under PHTrack\tracking_results.
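
A hypothetical command-line sketch of the train/test/evaluate flow above. The entry points train.py and test.py are assumptions inferred from the tools/snapshot and tools/results paths mentioned in this list; check the repository for the actual script names and arguments.

# Assumed entry points -- verify against the repository before running
cd PHTrack/tools
python train.py    # train PHTrack; snapshots are saved to tools/snapshot
python test.py     # run tracking; results are saved to tools/results/OTB100
# Evaluation runs in MATLAB via the benchmark toolkit:
matlab -r "run('tracker_benchmark_v1.0/perfPlot.m')"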

🏃 Keep updating 🏃: more detailed tracking results for PHTrack have been released.


For more comprehensive results and code, please refer to the upcoming manuscript.

Contact

If you have any questions or suggestions, feel free to contact me.
Email: yuzeng_chen@whu.edu.cn

Citation

If you find our work helpful in your research, kindly consider citing it. We appreciate your support.

@ARTICLE{10680554,
  author={Chen, Yuzeng and Tang, Yuqi and Su, Xin and Li, Jie and Xiao, Yi and He, Jiang and Yuan, Qiangqiang},
  journal={IEEE Transactions on Geoscience and Remote Sensing}, 
  title={PHTrack: Prompting for Hyperspectral Video Tracking}, 
  year={2024},
  volume={},
  number={},
  pages={1-1},
  keywords={Feature extraction;Video tracking;Photonic band gap;Adaptation models;Visualization;Correlation;Anxiety disorders;Hyperspectral video tracking;Prompt learning;Self-expression model;Material information},
  doi={10.1109/TGRS.2024.3461316}}
