
Towards Robust Unsupervised Attention Prediction in Autonomous Driving



Mengshi Qi¹*, Xiaoyang Bi¹, Xianlin Zhang¹, Huadong Ma¹
¹Beijing University of Posts and Telecommunications

This repository is an extension of our ICCV 2023 conference paper, "Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding." We propose a robust unsupervised framework that eliminates the need for costly traffic-scene attention annotations. By leveraging an Uncertainty Mining Branch and a Domain-Specific Prior Enhancement Block, our method bridges the gap between natural and driving scenes. Furthermore, we introduce RoboMixup and the DriverAttention-C benchmark to address the challenges of corruption and central bias in real-world autonomous driving.
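RoboMixup's exact formulation is given in the paper; as a rough, hedged sketch of the general mixup-style idea it builds on (the Beta-distributed mixing coefficient and the clean/corrupted pairing below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def mixup_frames(clean, corrupted, alpha=0.4, rng=None):
    """Blend a clean frame with a corrupted counterpart, mixup-style.

    `alpha` parameterizes the Beta distribution that the mixing
    coefficient is drawn from; 0.4 is an illustrative default,
    not a value taken from the paper.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    return lam * clean + (1.0 - lam) * corrupted

# Example: mix a clean RGB frame with a noisy copy of itself.
frame = np.random.rand(224, 224, 3).astype(np.float32)
noisy = np.clip(frame + np.random.normal(0, 0.1, frame.shape), 0, 1)
mixed = mixup_frames(frame, noisy)
```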


📢 Release

  • 2025-12-24 🚀 We released DriverAttention-C, a comprehensive benchmark with 126k+ frames for robustness evaluation.
  • 2025-01-15 📝 Our extended work is available on arXiv.
  • 2023-08-07 🎉 Original work accepted by ICCV 2023! Check the iccv branch for the conference code.

📊 DriverAttention-C Benchmark

To systematically evaluate robustness, we introduce DriverAttention-C, comprising over 126k frames across synthetic and real-world scenarios. It features 49k+ manually re-annotated frames to ensure ground truth validity under adverse conditions.

| Data Type | Subset | Images | Corruption Categories | Manual Annotations |
| --- | --- | --- | --- | --- |
| Synthetic | BDD-A-C, DR(eye)VE-C, DADA-C | 115,332 | Noise, Blur, Digital, Weather | 38,444 |
| Real-world | DriverAttention-Snow-C | 10,743 | Authentic Snowy Scenes | 10,743 |
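The synthetic subsets follow the common-corruption taxonomy popularized by ImageNet-C. As a hedged illustration of how such corruptions are typically synthesized (not the exact pipeline used to build DriverAttention-C), Gaussian noise at graded severities might look like this:

```python
import numpy as np

def gaussian_noise(frame, severity=3):
    """Apply Gaussian noise at an ImageNet-C-style severity level (1-5).

    The severity-to-sigma mapping is illustrative; the values used to
    generate DriverAttention-C may differ.
    """
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    noisy = frame + np.random.normal(0.0, sigma, frame.shape)
    return np.clip(noisy, 0.0, 1.0).astype(np.float32)

# frame: float32 RGB image scaled to [0, 1]
frame = np.random.rand(224, 224, 3).astype(np.float32)
corrupted = gaussian_noise(frame, severity=5)
```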

Dataset Preparation

The datasets and ground truth labels can be downloaded via:


📈 Results

Extensive experiments demonstrate that our unsupervised method matches or surpasses state-of-the-art fully supervised approaches, reducing corruption-induced degradation by 7.2% and central bias by 11.2% in terms of KLD.


🛠️ Run

Training (Robustness & Central Bias)

```bash
# Corruption Robustness Training
python train_robo_cor.py --name exp_name --data-path path/to/data --topK 8 --mix_dir temp_dir

# Mitigating Central Bias Training
python train_longtail.py --name rcpreg --data-path path/to/data --batch-size 4
```

Evaluation

```bash
# Calculate KLD and CC
python test_cor.py --data-path path/to/data --save_model model_name
```

Note: For SIM, AUC-Borji, AUC-Judd, and NSS, please follow the implementation provided in SaliencyMamba metrics.
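`test_cor.py` computes KLD and CC internally. For reference, a minimal standalone version of the two metrics on predicted and ground-truth saliency maps is sketched below; these are the standard definitions, not code taken from this repository:

```python
import numpy as np

EPS = 1e-8

def kld(pred, gt):
    """KL divergence from the predicted to the ground-truth saliency
    distribution (both maps normalized to sum to 1); lower is better."""
    p = pred / (pred.sum() + EPS)
    g = gt / (gt.sum() + EPS)
    return float(np.sum(g * np.log(g / (p + EPS) + EPS)))

def cc(pred, gt):
    """Pearson correlation coefficient between the two maps; higher is better."""
    p = (pred - pred.mean()) / (pred.std() + EPS)
    g = (gt - gt.mean()) / (gt.std() + EPS)
    return float((p * g).mean())

# Example on random maps:
pred, gt = np.random.rand(224, 224), np.random.rand(224, 224)
print(kld(pred, gt), cc(pred, gt))
```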

💡 Decision-Making Application

We demonstrate the importance of attention prediction in autonomous driving decision-making.

  1. Prepare data following BDD-OIA.
  2. Train the decision model using attention ROIs (see the sketch below):

```bash
python train_decision.py --name test_ --atten_model {infer_dir} --data-path path/to/data
```
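How ROIs are derived from the predicted attention maps is defined by the training script; as a rough sketch under stated assumptions (the normalization, threshold, and connected-component grouping are illustrative, not this repo's logic), one way to turn an attention map into ROI boxes:

```python
import numpy as np
from scipy import ndimage

def attention_rois(att_map, thresh=0.5):
    """Extract (x0, y0, x1, y1) boxes around high-attention regions.

    Normalizes the map to [0, 1], thresholds it, and returns one box
    per connected component. The 0.5 threshold is an assumption.
    """
    norm = (att_map - att_map.min()) / (att_map.max() - att_map.min() + 1e-8)
    labels, _ = ndimage.label(norm > thresh)
    boxes = []
    for ys, xs in ndimage.find_objects(labels):
        boxes.append((xs.start, ys.start, xs.stop, ys.stop))
    return boxes

# Example on a random attention map:
print(attention_rois(np.random.rand(224, 224)))
```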

🙏 Acknowledgement

We would like to thank the authors of SaliencyMamba for their contribution to the community. Part of the evaluation-metric code is adapted from SaliencyMamba metrics.

Citation

```bibtex
@article{qi2025towards,
  title={Towards Robust Unsupervised Attention Prediction in Autonomous Driving},
  author={Qi, Mengshi and Bi, Xiaoyang and Ma, Huadong},
  journal={arXiv preprint arXiv:2501.15045},
  year={2025}
}

@inproceedings{zhu2023unsupervised,
  title={Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding},
  author={Zhu, Pengfei and Qi, Mengshi and Li, Xia and Li, Weijian and Ma, Huadong},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={8558--8568},
  year={2023}
}
```
