This repository is an extension of our ICCV conference paper, "Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding." We propose a robust unsupervised framework that eliminates the need for expensive attention annotations on traffic scenes. By leveraging an Uncertainty Mining Branch and a Domain-Specific Prior Enhancement Block, our method bridges the gap between natural and driving scenes. Furthermore, we introduce RoboMixup and the DriverAttention-C benchmark to address the challenges of corruption and central bias in real-world autonomous driving.
- 2025-12-24 🚀 We released DriverAttention-C, a comprehensive benchmark with 126k+ frames for robustness evaluation.
- 2025-01-15 📝 Our extended work is available on arXiv.
- 2023-08-07 🎉 Original work accepted by ICCV 2023! Check the `iccv` branch for the conference code.
To systematically evaluate robustness, we introduce DriverAttention-C, comprising over 126k frames across synthetic and real-world scenarios. It features 49k+ manually re-annotated frames to ensure ground truth validity under adverse conditions.
| Data Type | Subset | Images | Corruption Categories | Manual Annotations |
|---|---|---|---|---|
| Synthetic | BDD-A-C, DR(eye)VE-C, DADA-C | 115,332 | Noise, Blur, Digital, Weather | 38,444 |
| Real-world | DriverAttention-Snow-C | 10,743 | Authentic Snowy Scenes | 10,743 |
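The four synthetic groups (Noise, Blur, Digital, Weather) mirror the standard ImageNet-C corruption taxonomy. As a rough illustration of how such corruptions can be reproduced locally, the snippet below applies one named corruption with the community `imagecorruptions` package; this is our own sketch for orientation only, not the pipeline used to build DriverAttention-C.

```python
# Minimal sketch: apply one synthetic corruption to a driving frame.
# Assumes the `imagecorruptions` package (pip install imagecorruptions);
# this illustrates the corruption categories and is not the benchmark's
# official generation code.
import numpy as np
from PIL import Image
from imagecorruptions import corrupt

# Load a frame as an HxWx3 uint8 array (path is a placeholder).
frame = np.asarray(Image.open("example_frame.jpg").convert("RGB"))

# corruption_name examples: 'gaussian_noise', 'motion_blur',
# 'jpeg_compression', 'snow'; severity ranges from 1 to 5.
corrupted = corrupt(frame, corruption_name="snow", severity=3)

Image.fromarray(corrupted).save("example_frame_snow_s3.jpg")
```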
The datasets and ground truth labels can be downloaded via:
- Synthetic subsets: Images/Camera Effects | Adverse Weather
- Real-world (Snow): Images | Ground Truth
Extensive experiments demonstrate that our unsupervised method matches or surpasses state-of-the-art fully supervised approaches, reducing corruption degradation by 7.2% and mitigating central bias by 11.2% in terms of KLD.
```bash
# Corruption Robustness Training
python train_robo_cor.py --name exp_name --data-path path/to/data --topK 8 --mix_dir temp_dir

# Mitigating Central Bias Training
python train_longtail.py --name rcpreg --data-path path/to/data --batch-size 4

# Calculate KLD and CC
python test_cor.py --data-path path/to/data --save_model model_name
```

Note: for SIM, AUC-Borji, AUC-Judd, and NSS, please follow the implementation provided in the SaliencyMamba metrics.
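For orientation, KLD and CC follow their standard saliency-benchmark definitions. The sketch below is our own minimal NumPy version, assuming non-negative H×W prediction and ground-truth maps; `test_cor.py` may differ in detail.

```python
# Minimal sketch of the two reported metrics, assuming `pred` and `gt` are
# non-negative HxW saliency maps. This mirrors the standard definitions and
# is not necessarily identical to the implementation in test_cor.py.
import numpy as np

def kld(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Kullback-Leibler divergence KL(gt || pred); lower is better."""
    p = pred / (pred.sum() + eps)   # normalize both maps to probability distributions
    g = gt / (gt.sum() + eps)
    return float(np.sum(g * np.log(eps + g / (p + eps))))

def cc(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Pearson correlation coefficient between the two maps; higher is better."""
    p = (pred - pred.mean()) / (pred.std() + eps)
    g = (gt - gt.mean()) / (gt.std() + eps)
    return float(np.mean(p * g))

# Example usage with random maps standing in for model output and ground truth.
rng = np.random.default_rng(0)
pred, gt = rng.random((288, 512)), rng.random((288, 512))
print(f"KLD: {kld(pred, gt):.4f}  CC: {cc(pred, gt):.4f}")
```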
We demonstrate the importance of attention prediction in autonomous driving decision-making.
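As a purely conceptual sketch of how a predicted attention map can feed a downstream decision model, the snippet below gates backbone features with the attention map and pools them for a small action classifier. The module name, feature shapes, and the four-way action output are our assumptions for illustration, not the architecture implemented in `train_decision.py`.

```python
# Conceptual sketch: use a predicted attention map to weight image features
# before an action-decision head. Names, shapes, and the 4-way action output
# (e.g. forward / stop / left / right in BDD-OIA-style settings) are
# assumptions, not the repository's actual decision-model architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGatedDecisionHead(nn.Module):
    def __init__(self, feat_dim: int = 512, num_actions: int = 4):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_actions)

    def forward(self, feats: torch.Tensor, attention: torch.Tensor) -> torch.Tensor:
        # feats:     (B, C, Hf, Wf) backbone feature map
        # attention: (B, 1, H, W) predicted driver-attention map
        attn = F.interpolate(attention, size=feats.shape[-2:],
                             mode="bilinear", align_corners=False)
        attn = attn / (attn.sum(dim=(2, 3), keepdim=True) + 1e-7)  # normalize weights
        pooled = (feats * attn).sum(dim=(2, 3))                    # attention-weighted pooling
        return self.classifier(pooled)                             # action logits

# Example with random tensors standing in for backbone features and attention.
head = AttentionGatedDecisionHead()
logits = head(torch.randn(2, 512, 18, 32), torch.rand(2, 1, 288, 512))
print(logits.shape)  # torch.Size([2, 4])
```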
- Prepare data following BDD-OIA.
- Train the decision model utilizing attention ROIs:
```bash
python train_decision.py --name test_ --atten_model {infer_dir} --data-path path/to/data
```

We would like to thank the authors of SaliencyMamba for their contribution to the community. Part of the evaluation metrics code is integrated from the SaliencyMamba metrics.
```bibtex
@article{qi2025towards,
  title={Towards Robust Unsupervised Attention Prediction in Autonomous Driving},
  author={Qi, Mengshi and Bi, Xiaoyang and Ma, Huadong},
  journal={arXiv preprint arXiv:2501.15045},
  year={2025}
}

@inproceedings{zhu2023unsupervised,
  title={Unsupervised self-driving attention prediction via uncertainty mining and knowledge embedding},
  author={Zhu, Pengfei and Qi, Mengshi and Li, Xia and Li, Weijian and Ma, Huadong},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={8558--8568},
  year={2023}
}
```



