The official PyTorch implementation of the CDAAT tracker.

[Figure: the overall tracking pipeline]

[Figure: the Colour-Depth Aware Attention module]
Install the environment using Anaconda:

```shell
conda create -n cdaat python=3.7
conda activate cdaat
sudo apt-get install libturbojpeg
bash install.sh
```
- Clone our repository to your local project directory.
- Download the training datasets (LaSOT, GOT-10k, TrackingNet, COCO2017, RGBD1K and DepthTrack).
- Prepare the test datasets (CDTB, RGBD1K and DepthTrack).
- Edit the paths in `lib/test/evaluation/local.py` and `lib/train/admin/local.py` to the proper absolute paths.
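For reference, the evaluation settings in `lib/test/evaluation/local.py` typically look like the sketch below. All paths are placeholders, and the attribute names (`depthtrack_path`, `cdtb_path`, etc.) are assumptions based on the `local.py` generated in your own checkout, so adapt them to what that file actually contains:

```python
# Illustrative sketch of lib/test/evaluation/local.py -- all paths are
# placeholders; attribute names follow the generated file in your checkout.
from lib.test.evaluation.environment import EnvSettings

def local_env_settings():
    settings = EnvSettings()
    # Test dataset roots (absolute paths).
    settings.depthtrack_path = '/absolute/path/to/DepthTrack'
    settings.cdtb_path = '/absolute/path/to/CDTB'
    # Where tracking results and trained networks are stored.
    settings.results_path = '/absolute/path/to/save/results'
    settings.network_path = '/absolute/path/to/checkpoints'
    return settings
```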
The training process contains two stages:
- 1st stage: train an RGB-only model (4 GPUs):

  ```shell
  export PYTHONPATH=/path/to/CDAAT:$PYTHONPATH
  python -m torch.distributed.launch --nproc_per_node=4 ./lib/train/run_training.py --config baseline --save_dir /path/to/save/checkpoints
  ```

  or on a single GPU:

  ```shell
  python ./lib/train/run_training.py --config baseline --save_dir /path/to/save/checkpoints
  ```
- 2nd stage: train an RGB-D model (4 GPUs).

  You can download the 1st-stage pretrained model, then set its path in `./experiments/cdaatrack/cdaatrack.yaml` (`MODEL.PRETRAINED`) and run:

  ```shell
  python -m torch.distributed.launch --nproc_per_node=4 ./lib/train/run_training.py --config cdaatrack --save_dir /path/to/save/checkpoints
  ```
Make sure you have prepared the trained model: train it yourself or download it from Google Drive. Edit `./lib/test/evaluation/local.py` and `./lib/test/parameter/cdaatrack.py` to set the test set path and the pretrained model path, then run:

```shell
python ./tracking/test.py
```
You can download the raw results from Google Drive and evaluate them using the VOT toolkit.
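The DepthTrack/VOT-RGBD benchmarks report a precision-recall based F-score. The snippet below is a simplified, self-contained sketch of that idea (the function names and the single-threshold treatment are our own simplifications, not the toolkit's exact implementation), useful as a sanity check on raw results:

```python
# Simplified F-score sketch in the spirit of the DepthTrack / VOT long-term
# protocol. Not the toolkit's exact implementation: the real evaluation
# sweeps prediction-confidence thresholds; here a prediction of None means
# "target reported absent".

def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def f_score(predictions, ground_truth):
    """predictions: per-frame boxes or None (target reported absent);
    ground_truth: per-frame boxes or None (target actually absent)."""
    overlaps = [iou(p, g) if p is not None and g is not None else 0.0
                for p, g in zip(predictions, ground_truth)]
    reported = [p is not None for p in predictions]
    visible = [g is not None for g in ground_truth]
    # Precision: average overlap over frames where the tracker reports a target.
    pr = sum(o for o, r in zip(overlaps, reported) if r) / max(1, sum(reported))
    # Recall: average overlap over frames where the target is actually visible.
    re = sum(o for o, v in zip(overlaps, visible) if v) / max(1, sum(visible))
    return 2 * pr * re / (pr + re) if pr + re > 0 else 0.0
```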
Our implementation is based on the following projects. We really appreciate their wonderful open-source work!
If you have any questions or concerns, please feel free to contact us.
