This work has been accepted by IEEE Signal Processing Letters, 2025. Paper
⭐ We carefully design a dedicated meta-prompt learning solution for multi-modal tracking, injecting sequence-specific evidence through online adaptation.
⭐ We propose a unified multi-modal tracker that requires no task priors (i.e., no notification of the task type) in either the training or test phase.
⭐ This work incurs only limited efficiency degradation while delivering consistent performance improvements.
⭐ Visualisation
🔋The main files are lib/train/actor/vipt.py (update trigger), lib/models/ThreeMT/ostrack_meta_ptompt.py (inner update), and lib/train/trainers/ltr_trainer.py (backward pass, outer update).
🔽 Please follow [ViPT](https://github.com/jiawen-zhu/ViPT) to create your workspace (conda environment and download the pretrained OSTrack).
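The inner/outer (meta) update structure mentioned above can be illustrated with a minimal, self-contained sketch. This is an assumption-laden toy example in plain Python, not the repository's actual API: the functions `inner_update` and `outer_update`, the scalar "prompt", and the quadratic per-sequence loss are all hypothetical stand-ins used only to show the bi-level pattern (adapt the prompt to each sequence, then meta-update the shared initialization).

```python
# Toy sketch of a meta-prompt bi-level update (NOT the repo's real code).
# The "prompt" is a single float; each sequence's loss is (prompt - target)^2,
# a hypothetical stand-in for a sequence-specific tracking loss.

def inner_update(prompt, grad_fn, lr=0.1, steps=3):
    """Inner loop: adapt the prompt online to one sequence."""
    for _ in range(steps):
        prompt = prompt - lr * grad_fn(prompt)
    return prompt

def outer_update(meta_prompt, targets, lr=0.05):
    """Outer loop: meta-update the shared prompt initialization
    using the post-adaptation loss gradients averaged over sequences."""
    grads = []
    for t in targets:
        # Gradient of (p - t)^2 w.r.t. p is 2 * (p - t)
        adapted = inner_update(meta_prompt, lambda p, t=t: 2 * (p - t))
        grads.append(2 * (adapted - t))  # outer gradient at the adapted prompt
    return meta_prompt - lr * sum(grads) / len(grads)

# Shared prompt initialization, meta-trained over three toy "sequences".
meta_prompt = 0.0
for _ in range(50):
    meta_prompt = outer_update(meta_prompt, targets=[1.0, -1.0, 0.5])
```

Under these toy quadratic losses the meta-learned initialization settles near the mean of the per-sequence optima, which is the intended behavior: the inner loop handles sequence-specific adaptation while the outer loop learns a good starting point.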



