MeshMimic: Geometry-Aware Humanoid Motion Learning through 3D Scene Reconstruction
¹X-Humanoid, ²HKUST(GZ), ³HKU, ⁴Tsinghua, ⁵CUHK, ⁶SJTU, ⁷ANU
* Equal Contribution. † Corresponding Author.
[Figure: MeshMimic pipeline]

MeshMimic. In-the-wild monocular videos yield long-horizon motions over complex terrains
for contact-consistent motion–terrain interaction learning.

Abstract

Humanoid motion control has witnessed significant breakthroughs in recent years, with deep reinforcement learning (RL) emerging as a primary catalyst for achieving complex, human-like behaviors. However, the high dimensionality and intricate dynamics of humanoid robots make manual motion design impractical, leading to a heavy reliance on expensive motion capture (MoCap) data. These datasets are not only costly to acquire but also frequently lack the geometric context of the surrounding physical environment. Consequently, existing motion synthesis frameworks often decouple motion from scene, resulting in physical inconsistencies such as contact slippage or mesh penetration during terrain-aware tasks. In this work, we present MeshMimic, a framework that bridges 3D scene reconstruction and embodied intelligence to enable humanoid robots to learn coupled motion-terrain interactions directly from video. By leveraging state-of-the-art 3D vision models, our framework segments and reconstructs both human trajectories and the underlying 3D geometry of terrains and objects. We introduce an optimization algorithm based on kinematic consistency to extract high-quality motion data from noisy visual reconstructions, alongside a contact-invariant retargeting method that transfers human-environment interaction features to the humanoid agent. Experimental results demonstrate that MeshMimic achieves robust, highly dynamic performance across diverse and challenging terrains. Our approach shows that a low-cost pipeline built on a single consumer-grade monocular camera is sufficient to train complex physical interaction skills, enabling scalable learning in unstructured environments.

Real-Sim-Real Results

Each result has three parts: on the left, video captured with a consumer monocular RGB camera without MoCap assistance; in the middle, the reconstructed scene with the SMPL-X human; on the right, the deployed result on the real robot. Most videos show contact-rich interactions with the environment.

Real-Sim Results

These real-sim results are reconstructed from in-the-wild monocular videos, with the original video on the left and the reconstruction on the right. Most examples involve long-horizon, contact-rich interactions in challenging real-world environments.

BibTeX

@misc{zhang2026meshmimic,
  title         = {MeshMimic: Geometry-Aware Humanoid Motion Learning through 3D Scene Reconstruction},
  author        = {Zhang, Qiang and Ma, Jiahao and Liu, Peiran and Shi, Shuai and Su, Zeran and Wang, Zifan and Sun, Jingkai and Cui, Wei and Yu, Jialin and Han, Gang and Zhao, Wen and Sun, Pihai and Yin, Kangning and Wang, Jiaxu and Cao, Jiahang and Zhang, Lingfeng and Cheng, Hao and Hao, Xiaoshuai and Ji, Yiding and Liang, Junwei and Tang, Jian and Xu, Renjing and Guo, Yijie},
  year          = {2026},
  eprint        = {2602.15733},
  archivePrefix = {arXiv},
  primaryClass  = {cs.RO},
  url           = {https://arxiv.org/abs/2602.15733}
}