- The implementation of "STAR: A Unified Spatiotemporal Fusion Framework for Satellite Video Object Tracking".
- IEEE Transactions on Geoscience and Remote Sensing, 2025.
🏃 Updates 🏃:
- Trained model of STAR has been released.
- Pre-trained model of STAR has been released.
- Training and testing codes of STAR have been released.
- Tracking results of STAR have been released.
| Benchmark | STAR (PR / SR / NPR) |
|---|---|
| SatSOT | 0.639 / 0.537 / – |
| SV248S | 0.774 / 0.523 / – |
| OOTB | 0.846 / 0.678 / 0.826 |
- Visual analysis, arranged from top to bottom: car_01 (SatSOT), car_61 (SatSOT), 03_000036 (SV248S), 04_000003 (SV248S), 05_000035 (SV248S), ship_10 (OOTB), and train_1 (OOTB).

- Overlap curves and tracking samples of STAR in diverse scenarios (red: our STAR; blue: ground truth).

- Clone the repository: `git clone https://github.com/YZCU/STAR.git`
- CUDA 11.8
- Python 3.9.18
- PyTorch 2.0.0
- Torchvision 0.15.0
- numpy 1.25.0
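
One possible way to set up the environment above is sketched below; the conda environment name `star` and the exact install commands are our assumptions, not part of the repo's documentation:

```shell
# Hypothetical setup sketch; env name "star" and package sources are assumptions.
conda create -n star python=3.9.18 -y
conda activate star
# CUDA 11.8 builds of PyTorch 2.0.0 / Torchvision 0.15.0
pip install torch==2.0.0 torchvision==0.15.0 --index-url https://download.pytorch.org/whl/cu118
pip install numpy==1.25.0
```
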
- Training: please download the satellite video training and testing sets: SatSOT-train, SatSOT, SV248S, and OOTB.
- Fast training: download the pre-trained model of STAR and put it into `pretrained_models`.
- Run `tracking/train.py` to train STAR.
- The well-trained STAR model is saved to `output/train/yzcu/yzcu-ep150-full-256/yzcu_ep0060.pth.tar`.
- We have also released the well-trained STAR tracking models.
- Testing: run `tracking/test.py`; results are saved in `output/results/yzcu/yzcu-ep150-full-256`.
- Evaluating: please download the evaluation benchmark Toolkit and vlfeat for more accurate evaluation.
- Refer to the Object Tracking Benchmark for detailed evaluations.
- To evaluate the STAR tracker, run `tracker_benchmark_v1.0\perfPlot.m`.
- If you have any questions or suggestions, feel free to contact me.
- Email: yzchen1006@163.com
❤️ ❤️ We sincerely appreciate the insightful feedback provided by Editors and Reviewers. ❤️ ❤️