[Website] [Paper] [Checkpoints]
Clone with submodules:

```bash
git clone --recurse-submodules git@github.com:lihzha/lap.git
```

If you already cloned the repository without submodules:

```bash
git submodule update --init --recursive
```

This project uses uv for Python dependency management. After installing uv, set up the environment with:

```bash
GIT_LFS_SKIP_SMUDGE=1 uv sync
GIT_LFS_SKIP_SMUDGE=1 uv pip install -e .
```

Example inference script: scripts/real_robot/droid_main.py
Download the LAP checkpoint from lihzha/LAP-3B and place it at:

```
./checkpoint/lap
```

Start the policy server:

```bash
JAX_PLATFORMS=cuda uv run --group cuda scripts/serve_policy.py policy:checkpoint --env=LAP
```

- Install the latest DROID package on both the control laptop and the NUC.
- Activate the DROID conda environment on the control laptop.
- Install the OpenPI client used to connect to the policy server:

  ```bash
  cd third_party/openpi/packages/openpi-client && pip install -e .
  ```

- Install tyro for CLI parsing:

  ```bash
  pip install tyro
  ```
Then run:

```bash
python scripts/real_robot/droid_main.py \
  --external_camera=right \
  --left_camera_id=<left_camera_id> \
  --right_camera_id=<right_camera_id> \
  --wrist_camera_id=<wrist_camera_id>
```

To add support for another robot, use scripts/real_robot/franka_main.py as a reference.
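As a rough sketch of what the inference script does, the robot-side client builds an observation dict from the cameras and proprioception, then queries the policy server through the OpenPI websocket client. The key names and image size below are illustrative assumptions, not the repository's exact schema — see scripts/real_robot/droid_main.py for the real format:

```python
# Hedged sketch of the robot-side client. The observation keys and the
# 224x224 image size are assumptions for illustration only.
import numpy as np


def make_observation(prompt: str) -> dict:
    """Build a dummy DROID-style observation (camera images + joint state)."""
    return {
        "observation/exterior_image_1_left": np.zeros((224, 224, 3), dtype=np.uint8),
        "observation/wrist_image_left": np.zeros((224, 224, 3), dtype=np.uint8),
        "observation/joint_position": np.zeros(7, dtype=np.float32),
        "observation/gripper_position": np.zeros(1, dtype=np.float32),
        "prompt": prompt,
    }


# With the policy server running (see the serve_policy.py command above),
# the query would look roughly like:
#
#   from openpi_client import websocket_client_policy
#   client = websocket_client_policy.WebsocketClientPolicy(host="localhost", port=8000)
#   actions = client.infer(make_observation("pick up the cup"))["actions"]
```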
Download the LIBERO checkpoint from lihzha/LAP-3B-Libero and place it at:

```
./checkpoint/lap_libero
```

Then follow scripts/libero/README.md.
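Both checkpoints above are Hugging Face Hub repositories, so one way to fetch them is with huggingface_hub's `snapshot_download` (a sketch, assuming the `huggingface_hub` package is installed; the repo IDs and target paths are taken from the instructions above):

```python
# Sketch: fetch a LAP checkpoint from the Hugging Face Hub into the
# local directory the serving scripts expect.
CHECKPOINTS = {
    "lihzha/LAP-3B": "./checkpoint/lap",                # real-robot (DROID) checkpoint
    "lihzha/LAP-3B-Libero": "./checkpoint/lap_libero",  # LIBERO checkpoint
}


def download_checkpoint(repo_id: str) -> str:
    """Download one checkpoint repo into its expected local directory."""
    from huggingface_hub import snapshot_download

    local_dir = CHECKPOINTS[repo_id]
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
    return local_dir
```

Usage: `download_checkpoint("lihzha/LAP-3B-Libero")` populates `./checkpoint/lap_libero`.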
Training is supported on both GPUs and TPUs.

Train on LIBERO with GPUs:

```bash
JAX_PLATFORMS=cuda uv run --group cuda scripts/train.py lap_libero --exp-name=lap_libero --data.rlds_data_dir=<your_data_dir>
```

Train on LIBERO with TPUs:

```bash
uv run scripts/train.py lap_libero --exp-name=lap_libero --data.rlds_data_dir=<your_data_dir>
```

Expected dataset layout:
```
<your_data_dir>/
  libero_10_no_noops/
  libero_goal_no_noops/
  libero_object_no_noops/
  libero_spatial_no_noops/
```
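A quick way to sanity-check the layout before launching a training run is to verify the expected subdirectories exist. A minimal sketch (directory names taken from the tree above):

```python
# Sketch: check that the expected LIBERO RLDS dataset layout is present.
from pathlib import Path

EXPECTED = [
    "libero_10_no_noops",
    "libero_goal_no_noops",
    "libero_object_no_noops",
    "libero_spatial_no_noops",
]


def missing_datasets(data_dir: str) -> list:
    """Return the expected dataset subdirectories that are absent."""
    root = Path(data_dir)
    return [name for name in EXPECTED if not (root / name).is_dir()]
```

`missing_datasets("<your_data_dir>")` returns an empty list when the layout is complete.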
LIBERO RLDS source dataset: openvla/modified_libero_rlds
For custom datasets:
- Arrange datasets in the same directory structure pattern.
- Define your data mixture in src/lap/datasets/utils/mixtures.py.
- Train with:
  ```bash
  JAX_PLATFORMS=cuda uv run --group cuda scripts/train.py lap --exp-name=lap_custom --data.rlds_data_dir=<your_data_dir> --data.data-mix=<your_datamix_name>
  ```

This repository is built on OpenPI.
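Conceptually, the data mixture defined in src/lap/datasets/utils/mixtures.py pairs dataset names with sampling weights. The snippet below is purely illustrative — the dataset name `my_robot_dataset` is hypothetical and the real mixture format is whatever mixtures.py defines — it only shows how per-dataset weights become sampling probabilities:

```python
# Purely illustrative mixture: (dataset_name, weight) pairs.
# "my_robot_dataset" is a hypothetical custom dataset.
MY_MIX = [
    ("libero_10_no_noops", 1.0),
    ("my_robot_dataset", 2.0),  # weighted 2x relative to the first dataset
]


def sampling_probs(mix):
    """Normalize mixture weights into per-dataset sampling probabilities."""
    total = sum(weight for _, weight in mix)
    return {name: weight / total for name, weight in mix}
```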
If this codebase helps your research, please cite:
```bibtex
@misc{zha2026laplanguageactionpretrainingenables,
  title={LAP: Language-Action Pre-Training Enables Zero-shot Cross-Embodiment Transfer},
  author={Lihan Zha and Asher J. Hancock and Mingtong Zhang and Tenny Yin and Yixuan Huang and Dhruv Shah and Allen Z. Ren and Anirudha Majumdar},
  year={2026},
  eprint={2602.10556},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2602.10556},
}
```