- **Massive & Unified Data**: It integrates over 6 million open-source trajectories to form the largest unified dataset for robotic manipulation, providing a strong foundation for generalization.
- **Innovative Action Paradigm**: It pioneers Action Manifold Learning (AML), which directly predicts clean actions instead of noise, resulting in a more efficient and stable model.
- **Modular 3D Perception**: It supports plug-and-play modules to enhance 3D spatial understanding, improving execution precision for complex tasks.
[2026-2-27] 🥳🥳 ABot-M0's weights and inference code have been released, and the latest result of ABot-M0 on RoboTwin2.0 has been updated to 86.1. The full content will be released soon. 🎉🎉
[2026-2-11] 🥳🥳 ABot-M0's technical report has been released. Weights and code are coming soon. 🎉🎉
Create the required environment through the following steps:

```shell
# Clone the repos
git clone https://github.com/amap-cvlab/ABot-Manipulation.git
git clone https://github.com/facebookresearch/vggt.git
cd ABot-Manipulation

# Create the conda environment
conda create -n ABot python=3.10 -y
conda activate ABot

# Install requirements
pip install -r requirements.txt

# Install FlashAttention2
pip install flash-attn --no-build-isolation

# Install vggt (replace path_to_vggt with the path of your local vggt clone)
pip install -e path_to_vggt

# Install ABot
pip install -e .
```
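After installation, a quick sanity check can confirm that the key dependencies are importable from the `ABot` environment. This is a minimal, stdlib-only sketch; the package names checked here (`torch`, `flash_attn`, `vggt`) are assumptions inferred from the install steps above:

```python
import importlib.util

def check_packages(names):
    """Return a dict mapping each package name to whether it is importable."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

if __name__ == "__main__":
    # Package names are illustrative; adjust to match your actual installs.
    status = check_packages(["torch", "flash_attn", "vggt"])
    for name, ok in status.items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

If any package prints `MISSING`, re-run the corresponding `pip install` step above before evaluating.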
| Model Name | Huggingface Repository | Description |
|---|---|---|
| ABot-LIBERO | 🤗 ABot-M0-LIBERO | ABot trained solely on LIBERO for evaluation on LIBERO and zero-shot generalization to LIBERO-Plus. |
| ABot-RoboCasa-GR1-Tabletop | 🤗 ABot-M0-Robocasa | ABot trained on RoboCasa-GR1-Tabletop for evaluation. |
| ABot-Robotwin2 | 🤗 ABot-M0-RoboTwin2 | ABot trained on Robotwin2 Clean and Randomized for evaluation. |
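To fetch a checkpoint from the table above, the Hugging Face Hub client can download the full repository snapshot. The sketch below is illustrative: the `amap-cvlab` organization name is an assumption (inferred from the GitHub URL), and the actual repo ids should be taken from the Hugging Face links in the table:

```python
from pathlib import Path

# Hypothetical mapping from model names in the table to Hugging Face repo names.
MODELS = {
    "ABot-LIBERO": "ABot-M0-LIBERO",
    "ABot-RoboCasa-GR1-Tabletop": "ABot-M0-Robocasa",
    "ABot-Robotwin2": "ABot-M0-RoboTwin2",
}

def resolve(model_name, org="amap-cvlab", root="checkpoints"):
    """Return (repo_id, local_dir) for a model; the org name is an assumption."""
    repo = MODELS[model_name]
    return f"{org}/{repo}", Path(root) / repo

if __name__ == "__main__":
    repo_id, local_dir = resolve("ABot-LIBERO")
    print(repo_id, local_dir)
    # To actually download the weights (requires `pip install huggingface_hub`):
    #   from huggingface_hub import snapshot_download
    #   snapshot_download(repo_id=repo_id, local_dir=local_dir)
```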
Please refer to the guidance in the examples folder to evaluate the benchmarks.
| Model | LIBERO | LIBERO-PLUS | RoboCasa-GR1-Tabletop | RoboTwin2.0 |
|---|---|---|---|---|
| ABot-M0 | 98.6 | 80.5 | 58.3 | 86.1 |
If you find ABot useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry:
```bibtex
@article{yang2026abot,
  title={ABot-M0: VLA Foundation Model for Robotic Manipulation with Action Manifold Learning},
  author={Yang, Yandan and Zeng, Shuang and Lin, Tong and Chang, Xinyuan and Qi, Dekang and Xiao, Junjin and Liu, Haoyun and Chen, Ronghan and Chen, Yuzhi and Huo, Dongjie and others},
  journal={arXiv preprint arXiv:2602.11236},
  year={2026}
}
```
This project builds upon starVLA, Qwen3-VL, vggt, JiT, LeRobot and Isaac-GR00T. We thank these teams for their open-source contributions.

