
UniDriveVLA

UniDriveVLA: Unifying Understanding, Perception, and Action Planning for Autonomous Driving

Yongkang Li1,2, Lijun Zhou2, Sixu Yan1, Bencheng Liao1, Tianyi Yan2,3, Kaixin Xiong2, Long Chen2, Hongwei Xie2, Bing Wang2, Guang Chen2, Hangjun Ye2, Wenyu Liu1, Haiyang Sun†2, Xinggang Wang✉1

1Huazhong University of Science and Technology   2Xiaomi EV   3SKL-IOTSC, University of Macau

(†) Project Leader.   (✉) Corresponding Author.

April 3, 2026



News

  • 🔥 [2026-04-03] We release the paper, training/inference code, and model weights of UniDriveVLA!

Updates

  • Release paper
  • Release code and training scripts
  • Release model weights on HuggingFace
  • Release nuScenes and Bench2Drive evaluation frameworks
  • Release model on Navsim

Abstract

Vision-Language-Action (VLA) models have recently emerged in autonomous driving, with the promise of leveraging rich world knowledge to improve the cognitive capabilities of driving systems. However, adapting such models for driving tasks currently faces a critical dilemma between spatial perception and semantic reasoning. Consequently, existing VLA systems are forced into suboptimal compromises: directly adopting 2D Vision-Language Models yields limited spatial perception, whereas enhancing them with 3D spatial representations often impairs the native reasoning capacity of VLMs. We argue that this dilemma largely stems from the coupled optimization of spatial perception and semantic reasoning within shared model parameters. To overcome this, we propose UniDriveVLA, a Unified Driving Vision-Language-Action model based on Mixture-of-Transformers that addresses the perception–reasoning conflict via expert decoupling. Specifically, it comprises three experts for driving understanding, scene perception, and action planning, which are coordinated through masked joint attention. In addition, we combine a sparse perception paradigm with a three-stage progressive training strategy to improve spatial perception while maintaining semantic reasoning capability. Extensive experiments show that UniDriveVLA achieves state-of-the-art performance in open-loop evaluation on nuScenes and closed-loop evaluation on Bench2Drive. Moreover, it demonstrates strong performance across a broad range of perception, prediction, and understanding tasks, including 3D detection, online mapping, motion forecasting, and driving-oriented VQA, highlighting its broad applicability as a unified model for autonomous driving.


Architecture

UniDriveVLA adopts a Mixture-of-Transformers architecture with three specialized experts:

  • Understanding Expert: Leverages a pre-trained 2D VLM (Qwen3-VL) for semantic scene comprehension and driving QA
  • Perception Expert: Introduces a sparse perception paradigm that extracts spatial priors from 2D VLM features, supporting detection, mapping, occupancy, and motion forecasting
  • Planning Expert: Fuses VLM semantic features and spatial perception features to generate safe, precise trajectories
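The paper's exact layer design is not reproduced here, but the coordination idea can be sketched: in a Mixture-of-Transformers layer, each expert keeps its own projection weights, while tokens from all experts attend jointly under a mask that controls which expert may read from which. A minimal NumPy sketch, where the token counts and the particular mask pattern (planning reads everything; understanding and perception stay within themselves) are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_joint_attention(tokens, weights, mask):
    """Mixture-of-Transformers-style attention sketch: each expert projects
    its own tokens with private Q/K/V weights, then all tokens attend
    jointly under a shared boolean mask."""
    q = np.concatenate([t @ w["q"] for t, w in zip(tokens, weights)])
    k = np.concatenate([t @ w["k"] for t, w in zip(tokens, weights)])
    v = np.concatenate([t @ w["v"] for t, w in zip(tokens, weights)])
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)  # block disallowed cross-expert reads
    return softmax(scores) @ v

rng = np.random.default_rng(0)
d = 16
# Hypothetical token counts for the three experts.
counts = {"understanding": 4, "perception": 6, "planning": 2}
tokens = [rng.normal(size=(c, d)) for c in counts.values()]
weights = [{k: rng.normal(size=(d, d)) * 0.1 for k in "qkv"} for _ in counts]
total = sum(counts.values())
# Illustrative mask (one possible scheme, not the paper's):
mask = np.zeros((total, total), dtype=bool)
mask[:4, :4] = True        # understanding -> understanding only
mask[4:10, 4:10] = True    # perception -> perception only
mask[10:, :] = True        # planning -> all experts
out = masked_joint_attention(tokens, weights, mask)
print(out.shape)  # (12, 16)
```

The point of the masking is that expert parameters stay decoupled (addressing the perception–reasoning conflict) while the planning stream can still read both semantic and spatial tokens in one attention pass.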

Getting Started


Checkpoints

nuScenes Open-Loop Results (ST-P3 metrics, without ego status)

| Method | Backbone | L2@1s (m) ↓ | L2@2s (m) ↓ | L2@3s (m) ↓ | Avg. L2 (m) ↓ | Col@1s (%) ↓ | Col@2s (%) ↓ | Col@3s (%) ↓ | Avg. Col (%) ↓ | Config | Weights |
|---|---|---|---|---|---|---|---|---|---|---|---|
| UniDriveVLA-Base | Qwen3-VL-2B | 0.28 | 0.51 | 0.82 | 0.54 | 0.08 | 0.13 | 0.31 | 0.17 | config | HuggingFace |
| UniDriveVLA-Large | Qwen3-VL-8B | 0.27 | 0.49 | 0.77 | 0.51 | 0.03 | 0.10 | 0.21 | 0.11 | config | HuggingFace |
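For readers comparing numbers across papers: under the ST-P3 protocol referenced above, the L2 error at horizon t is commonly computed as the mean over all predicted waypoints up to t, rather than the single displacement at exactly t. A small sketch of that averaging, using toy trajectories and an assumed 2 Hz waypoint rate:

```python
import numpy as np

def st_p3_l2(pred, gt, hz=2):
    """ST-P3-style L2: the metric at horizon s seconds is the mean L2
    distance (m) over all waypoints up to s, not just the waypoint at s.
    pred, gt: (T, 2) BEV waypoints sampled at `hz` points per second."""
    dists = np.linalg.norm(pred - gt, axis=-1)  # per-waypoint L2 in meters
    return {f"L2@{s}s": dists[: s * hz].mean() for s in (1, 2, 3)}

# Toy 3-second trajectories at 2 Hz (6 waypoints), in meters.
gt = np.array([[0, 1], [0, 2], [0, 3], [0, 4], [0, 5], [0, 6]], float)
pred = gt + np.array([[0.1, 0], [0.1, 0], [0.2, 0],
                      [0.2, 0], [0.3, 0], [0.3, 0]])
print(st_p3_l2(pred, gt))  # L2@1s=0.1, L2@2s=0.15, L2@3s=0.2
```

This is why ST-P3-style numbers are typically lower than protocols that report the displacement at the exact timestep; the "without ego status" note means no ego-state shortcut is fed to the planner.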

Bench2Drive Closed-Loop Results

Closed-loop and multi-ability testing results in CARLA Bench2Drive. DS, Success Rate, Efficiency, and Comfort are the closed-loop metrics; the remaining columns are multi-ability success rates in %.

| Method | DS ↑ | Success Rate ↑ | Efficiency ↑ | Comfort ↑ | Merging ↑ | Overtaking ↑ | Emerg. Brake ↑ | Give Way ↑ | Traf. Sign ↑ | Mean ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| UniDriveVLA (weights) | 78.37 | 51.82 | 198.86 | 11.78 | 38.75 | 80.00 | 50.00 | 30.00 | 58.95 | 51.53 |
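For context on the DS column: Bench2Drive builds on the CARLA leaderboard's Driving Score, where each route's score is its route completion scaled by an infraction penalty, the product of one coefficient per infraction incurred. A sketch under that assumption; the coefficients below follow the CARLA leaderboard and Bench2Drive's exact values may differ:

```python
# Coefficients as in the CARLA leaderboard; illustrative, not Bench2Drive's
# authoritative values.
PENALTY = {
    "collision_pedestrian": 0.50,
    "collision_vehicle": 0.60,
    "collision_static": 0.65,
    "red_light": 0.70,
    "stop_sign": 0.80,
}

def driving_score(route_completion, infractions):
    """route_completion in [0, 100]; infractions: list of infraction names.
    DS = route completion x product of per-infraction penalty coefficients."""
    penalty = 1.0
    for inf in infractions:
        penalty *= PENALTY[inf]
    return route_completion * penalty

# A route finished at 90% with one vehicle collision and one red-light
# violation: 90 x 0.60 x 0.70
print(driving_score(90.0, ["collision_vehicle", "red_light"]))  # approx. 37.8
```

The aggregate DS reported in the table is the mean over all evaluation routes, so a single severe infraction on one route drags the average down noticeably.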

Perception Results on nuScenes val

| Method | Det NDS ↑ | Det mAP ↑ | Map mAP ↑ | Weights |
|---|---|---|---|---|
| UniDriveVLA-Base | 0.434 | 0.397 | 0.520 | HuggingFace |
| UniDriveVLA-Large | 0.460 | 0.407 | 0.535 | HuggingFace |

All Model Weights (Stage 1 & 2 are intermediate checkpoints for progressive training; Stage 3 is the final model)

| Model | HuggingFace |
|---|---|
| UniDriveVLA-Base (nuScenes) Stage 1 | owl10/UniDriveVLA_Nusc_Base_Stage1 |
| UniDriveVLA-Base (nuScenes) Stage 2 | owl10/UniDriveVLA_Nusc_Base_Stage2 |
| UniDriveVLA-Base (nuScenes) Stage 3 | owl10/UniDriveVLA_Nusc_Base_Stage3 |
| UniDriveVLA-Large (nuScenes) Stage 1 | owl10/UniDriveVLA_Nusc_Large_Stage1 |
| UniDriveVLA-Large (nuScenes) Stage 2 | owl10/UniDriveVLA_Nusc_Large_Stage2 |
| UniDriveVLA-Large (nuScenes) Stage 3 | owl10/UniDriveVLA_Nusc_Large_Stage3 |
| UniDriveVLA-Base (Bench2Drive) Stage 1 | owl10/UniDriveVLA_B2D_Base_Stage1 |
| UniDriveVLA-Base (Bench2Drive) Stage 2 | owl10/UniDriveVLA_B2D_Base_Stage2 |
| UniDriveVLA-Base (Bench2Drive) Stage 3 | owl10/UniDriveVLA_B2D_Base_Stage3 |

Contact

If you have any questions, please contact Yongkang Li via email (liyk@hust.edu.cn).


Acknowledgement

UniDriveVLA is built upon the following outstanding open-source works:

  • Openpi — VLA training framework
  • InternVLA-A1 — VLA model for robotic manipulation
  • HiP-AD — Hierarchical planning for autonomous driving
  • SparseDrive — Sparse 3D perception framework for autonomous driving
  • Bench2Drive — Closed-loop evaluation in CARLA

Citation

If you find UniDriveVLA useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry:

```bibtex
@article{li2026unidrivevla,
  title={UniDriveVLA: Unifying Understanding, Perception, and Action Planning for Autonomous Driving},
  author={Li, Yongkang and Zhou, Lijun and Yan, Sixu and Liao, Bencheng and Yan, Tianyi and Xiong, Kaixin and Chen, Long and Xie, Hongwei and Wang, Bing and Chen, Guang and Ye, Hangjun and Sun, Haiyang and Liu, Wenyu and Wang, Xinggang},
  journal={arXiv preprint arXiv:2604.02190},
  year={2026}
}
```
