Website | Docs | Dataset | Model | NAVSIM Model | Supplementary | Paper
An open-source end-to-end driving stack for CARLA, achieving state-of-the-art closed-loop performance across all major Leaderboard 2.0 benchmarks.
Table of Contents
- Updates
- Quick Start
- Training for CARLA Leaderboard
- Evaluation on CARLA Leaderboard
- Project Structure
- Beyond CARLA: Cross-Benchmark Deployment
- Further Documentation
- Acknowledgements
- Citation
- License
- [COMING SOON] Cross-benchmark datasets and training tools. Datasets and documentation for NAVSIM and Waymo training are coming soon.
- [2026/01/13] CARLA dataset and training documentation released. We publicly release a CARLA dataset generated with the same pipeline as described in the paper. Note that due to subsequent refactoring and code cleanup, the released dataset differs from the original dataset used in our experiments. Validation is ongoing.
- [2026/01/05] Removed stop-sign heuristic. We removed explicit stop-sign handling to evaluate the policy in a fully end-to-end setting. This may slightly reduce closed-loop performance compared to earlier runs.
- [2026/01/05] RoutePlanner bug fix. Fixed an index error that caused the driving policy to crash at the end of routes in Town13. Driving scores have been updated accordingly.
- [2025/12/24] Initial release. Paper, checkpoints, expert driver, and inference code are now available.
Clone the repository and map the project root to your environment:

```bash
git clone https://github.com/autonomousvision/lead.git
cd lead
```

Set up environment variables:

```bash
echo -e "export LEAD_PROJECT_ROOT=$(pwd)" >> ~/.bashrc # Set project root variable
echo "source $(pwd)/scripts/main.sh" >> ~/.bashrc # Persist more environment variables
source ~/.bashrc # Reload config
```

Please verify that ~/.bashrc reflects these paths correctly.
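A quick way to do that check from the shell (using the variable name defined above):

```bash
# Confirm the project root was persisted and is visible in the current shell
grep LEAD_PROJECT_ROOT ~/.bashrc
echo "$LEAD_PROJECT_ROOT" # should print the repository root after `source ~/.bashrc`
```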
We use Miniconda, conda-lock, and uv:

```bash
# Install conda-lock and create conda environment
pip install conda-lock && conda-lock install -n lead conda-lock.yml

# Activate conda environment
conda activate lead

# Install dependencies and setup git hooks
pip install uv && uv pip install -r requirements.txt && uv pip install -e .

# Install other tools needed for development
conda install -c conda-forge ffmpeg parallel tree gcc zip unzip

# Optional: Activate git hooks
pre-commit install
```

While the dependencies are installing, we recommend setting up CARLA in parallel:
```bash
bash scripts/setup_carla.sh # Download and setup CARLA at 3rd_party/CARLA_0915
```
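Once the installs finish, a minimal sanity check along these lines verifies both the Python environment and the CARLA download (this assumes the package is importable as `lead` after the editable install; adjust if the package name differs):

```bash
# Verify the editable install is importable inside the conda environment
conda run -n lead python -c "import lead; print(lead.__file__)"

# Verify CARLA was unpacked where scripts/setup_carla.sh places it
ls 3rd_party/CARLA_0915
```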
Pre-trained checkpoints are hosted on HuggingFace for reproducibility. These checkpoints follow the TFv6 architecture but differ in their sensor configurations, vision backbones, or dataset composition.

| Description | Bench2Drive | Longest6 v2 | Town13 | Checkpoint |
|---|---|---|---|---|
| Full TransFuser V6 | 95.2 | 62 | 5.24 | Link |
| ResNet34 backbone with 60M parameters | 94.7 | 57 | 5.01 | Link |
| Rear camera as additional input | 95.1 | 53 | TBD | Link |
| Radar sensor removed | 94.7 | 52 | TBD | Link |
| Vision only driving | 91.6 | 43 | TBD | Link |
| Removed Town13 from training set | 93.1 | 52 | 3.52 | Link |
Table 1: Performance of pre-trained checkpoints. We report Driving Score, for which higher is better.
To download a single checkpoint for testing:

```bash
bash scripts/download_one_checkpoint.sh
```

Or download all checkpoints at once with git lfs:

```bash
git clone https://huggingface.co/ln2697/tfv6 outputs/checkpoints
cd outputs/checkpoints
git lfs pull
```
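Since git lfs clones can silently leave pointer files behind, it is worth checking that the checkpoint payloads were actually fetched:

```bash
# Run inside outputs/checkpoints after `git lfs pull`
git lfs ls-files | head # tracked checkpoint files should be listed
du -sh .                # total size should be far beyond pointer-file size
```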
To initiate closed-loop evaluation and verify the setup, execute the following:

```bash
# Start driving environment
bash scripts/start_carla.sh

# Start policy on one route
bash scripts/eval_bench2drive.sh
```

Driving logs will be saved to outputs/local_evaluation with the following structure:
```
outputs/local_evaluation/1_town15_construction
├── 1_town15_construction_debug.mp4
├── 1_town15_construction_demo.mp4
├── 1_town15_construction_input.mp4
├── checkpoint_endpoint.json
├── debug_images
├── demo_images
├── input_images
├── input_log
├── infractions.json
├── metric_info.json
└── qualitative_results.mp4
```
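If you just want the aggregate score without opening the dashboard, something like the following works, assuming checkpoint_endpoint.json follows the CARLA Leaderboard 2.0 record format (the JSON paths below are an assumption; adjust them to the actual file contents):

```bash
# Hypothetical: extract the composed driving score per route record with jq.
# Key names follow the CARLA Leaderboard 2.0 result schema and may differ.
jq '._checkpoint.records[].scores.score_composed' \
  outputs/local_evaluation/1_town15_construction/checkpoint_endpoint.json
```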
Launch the interactive infraction dashboard to analyze driving failures:

```bash
python lead/infraction_webapp/app.py
```

Navigate to http://localhost:5000 to access the dashboard.
Video 2: Interactive infraction analysis tool for model evaluation.
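A quick way to confirm the dashboard is serving before switching to the browser (port as above):

```bash
# Expect HTTP 200 once the web app is up on the default port
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5000
```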
Tip
- Disable video recording in `config_closed_loop` by turning off `produce_demo_video` and `produce_debug_video`.
- If memory is limited, modify the file prefixes to load only the first checkpoint seed. By default, the pipeline loads all three seeds as an ensemble.
Verify the expert policy and data acquisition pipeline by executing a test run on a sample route:
```bash
# Start CARLA if not done already
bash scripts/start_carla.sh

# Run expert on one route
bash scripts/run_expert.sh
```

The collected data will be stored at data/expert_debug and should have the following structure:
```
data/expert_debug/
├── data
│   └── debug_routes
│       └── Town15_Rep-1_1_town15_construction_route0_01_15_12_48_52
│           ├── bboxes
│           ├── camera_pc
│           ├── camera_pc_perturbated
│           ├── depth
│           ├── depth_perturbated
│           ├── hdmap
│           ├── hdmap_perturbated
│           ├── instance
│           ├── instance_perturbated
│           ├── lidar
│           ├── metas
│           ├── radar
│           ├── radar_perturbated
│           ├── results.json
│           ├── rgb
│           ├── rgb_perturbated
│           ├── semantics
│           └── semantics_perturbated
└── results
    └── 1_town15_construction_result.json
```

The Jupyter notebooks provide some example scripts to visualize the collected data:
Figure 2: Plotting with visualization notebooks.
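Outside the notebooks, a quick shell-level look at a collected route can catch obvious gaps (the directory layout follows the tree above; your route folder will carry a different timestamp):

```bash
# Count collected RGB frames and pretty-print the route's result summary
route_dir=$(ls -d data/expert_debug/data/debug_routes/*/ | head -n 1)
ls "$route_dir/rgb" | wc -l
python -m json.tool "$route_dir/results.json"
```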
For more detailed documentation, take a look at the documentation page.
Download the CARLA dataset from HuggingFace using git lfs:

```bash
git clone https://huggingface.co/datasets/ln2697/lead_carla data/carla_leaderboard2/zip
cd data/carla_leaderboard2/zip
git lfs pull
```

Or download a single route for testing:
```bash
bash scripts/download_one_route.sh
```

Unzip the downloaded routes:
```bash
bash scripts/unzip_routes.sh
```
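The `tree` utility installed earlier gives a quick look at whether the routes unpacked as expected (the directory layout is an assumption based on the download path above):

```bash
# Inspect the top two levels of the extracted dataset
tree -L 2 data/carla_leaderboard2 | head -n 20
```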
Build a persistent cache for faster data loading during training:

```bash
python scripts/build_cache.py
```

Start pretraining:
```bash
bash scripts/pretrain.sh
```

For multi-GPU training with Distributed Data Parallel:
```bash
bash scripts/pretrain_ddp.sh
```

Training logs and checkpoints will be saved to outputs/local_training/pretrain.
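To follow training locally, point TensorBoard at the log directory named above:

```bash
# Serve the pretraining logs; open http://localhost:6006 in a browser
tensorboard --logdir outputs/local_training/pretrain --port 6006
```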
Fine-tune the pretrained model with the planning decoder enabled:

```bash
bash scripts/posttrain.sh
```

For multi-GPU training:

```bash
bash scripts/posttrain_ddp.sh
```

Post-training checkpoints will be saved to outputs/local_training/posttrain. We also include TensorBoard and WandB logging.
Figure 3: WandB logging with debug plot.
For distributed training on SLURM, see this documentation page. For a complete SLURM workflow covering pre-training, post-training, and evaluation, see this example.
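As a rough sketch of what a submission looks like (the script name and resource flags below are assumptions; use the job scripts shipped in slurm/ instead):

```bash
# Hypothetical SLURM submission; adapt to the scripts in slurm/
sbatch --gres=gpu:4 --cpus-per-task=16 slurm/pretrain.sh
```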
For more detailed documentation, take a look at the evaluation documentation.
Start the CARLA simulator before running evaluation:

```bash
bash scripts/start_carla.sh
```

Run closed-loop evaluation on the Bench2Drive benchmark:

```bash
bash scripts/eval_bench2drive.sh
```

Run closed-loop evaluation on the Longest6 v2 benchmark:

```bash
bash scripts/eval_longest6.sh
```

Run closed-loop evaluation on the Town13 benchmark:

```bash
bash scripts/eval_town13.sh
```

Results will be saved to outputs/local_evaluation/ with videos, infractions, and metrics.
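To run all three benchmarks back to back, the scripts above can simply be chained:

```bash
# Sequentially evaluate on Bench2Drive, Longest6 v2, and Town13
for bench in bench2drive longest6 town13; do
  bash "scripts/eval_${bench}.sh"
done
```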
If CARLA becomes unresponsive, clean up zombie processes:

```bash
bash scripts/clean_carla.sh
```

For distributed evaluation across multiple routes and benchmarks, see the SLURM evaluation documentation. For large-scale evaluation, we also provide a WandB logger.
Figure 4: Example online WandB logging during evaluation.
The project is organized into several key directories:
- `lead` - Main Python package containing model architecture, training, inference, and expert driver
- `3rd_party` - Third-party dependencies (CARLA, benchmarks, evaluation tools)
- `data` - Route definitions for training and evaluation; sensor data will also be stored here
- `scripts` - Utility scripts for data processing, training, and evaluation
- `outputs` - Model checkpoints, evaluation results, and visualizations
- `notebooks` - Jupyter notebooks for data inspection and analysis
- `slurm` - SLURM job scripts for large-scale experiments
For a detailed breakdown of the codebase organization, see the project structure documentation.
The LEAD pipeline and TFv6 models are deployed as reference implementations and benchmark entries across multiple autonomous driving simulators and evaluation suites:
- Waymo Vision-based End-to-End Driving Challenge (DiffusionLTF): A strong baseline entry for the inaugural end-to-end driving challenge hosted by Waymo, achieving 2nd place on the final leaderboard.
- NAVSIM v1 (LTFv6): Latent TransFuser v6 is an updated reference baseline for the `navtest` split, improving PDMS by +3 points over the Latent TransFuser baseline; it is used to evaluate navigation and control under diverse driving conditions.
- NAVSIM v2 (LTFv6): The same Latent TransFuser v6 improves EPDMS by +6 points over the Latent TransFuser baseline, targeting distribution shift and scenario complexity.
- NVIDIA AlpaSim Simulator (TransFuserModel): Adapting NAVSIM's Latent TransFuser v6 checkpoints, AlpaSim also features an official TransFuser driver, serving as a baseline policy for closed-loop simulation.
For more detailed instructions, see the full documentation. In particular:
Special thanks to carla_garage for the foundational codebase. We also thank the creators of the numerous open-source projects we use:
Other helpful repositories:
Long Nguyen led development of the project. Kashyap Chitta, Bernhard Jaeger, and Andreas Geiger contributed through technical discussion and advisory feedback. Daniel Dauner provided guidance with NAVSIM.
If you find this work useful, please consider giving this repository a star ⭐ and citing our work in your research:
```bibtex
@article{Nguyen2025ARXIV,
  title={LEAD: Minimizing Learner-Expert Asymmetry in End-to-End Driving},
  author={Nguyen, Long and Fauth, Micha and Jaeger, Bernhard and Dauner, Daniel and Igl, Maximilian and Geiger, Andreas and Chitta, Kashyap},
  journal={arXiv preprint arXiv:2512.20563},
  year={2025}
}
```

This project is released under the MIT License. See LICENSE for details.
