
PixARMesh: Autoregressive Mesh-Native Single-View Scene Reconstruction

Xiang Zhang*,1,2,† · Sohyun Yoo*,1 · Hongrui Wu*,1,‡ · Chuan Li2 · Jianwen Xie2 · Zhuowen Tu1

1UC San Diego · 2Lambda, Inc.

CVPR 2026

* Equal contribution

† Work partially done during internship at Lambda.

‡ H. Wu contributed during internship at UC San Diego.

PixARMesh Teaser

PixARMesh is a mesh-native autoregressive framework for single-view 3D scene reconstruction.
Instead of reconstructing scenes through intermediate volumetric or implicit representations, PixARMesh models each instance directly with a native mesh representation. Object poses and meshes are predicted in a single unified autoregressive sequence.

This repository contains the official implementation for PixARMesh (CVPR 2026).


🛠️ Environment Setup

We recommend using our pre-built Docker image:

docker pull zx1239856/trl-runner:0.2.0

Alternatively, you can build the environment manually using the provided Dockerfile.

Key requirements:

  • Install dependencies from requirements.txt and requirements-no-iso.txt

🗂️ Dataset Preparation

Download the packed dataset from HuggingFace:

https://huggingface.co/datasets/zx1239856/3d-front-ar-packed

Training Only

Flatten the dataset to ensure uniform instance sampling across scenes:

python -m scripts.flatten_dataset

This will generate:

datasets/3d-front-ar-packed-flattened

Flattening prevents instances from scenes with many objects from being under-sampled during training.
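To see why flattening matters, here is a toy illustration (not the project's actual sampler) comparing scene-first sampling with flattened instance sampling. The scene names and instance counts are made up for the example:

```python
import random

random.seed(0)

# Toy dataset: scene A has 10 object instances, scene B has 2.
scenes = {"A": [f"A{i}" for i in range(10)], "B": [f"B{i}" for i in range(2)]}

# Scene-first sampling: pick a scene uniformly, then an instance within it.
# Each instance in the crowded scene A is individually under-sampled.
def sample_scene_first():
    scene = random.choice(list(scenes))
    return random.choice(scenes[scene])

# Flattened sampling: pool all instances, then pick one uniformly.
flat = [inst for insts in scenes.values() for inst in insts]

def sample_flat():
    return random.choice(flat)

N = 100_000
scene_first = sum(sample_scene_first().startswith("A") for _ in range(N)) / N
flattened = sum(sample_flat().startswith("A") for _ in range(N)) / N

print(f"P(instance from scene A), scene-first: {scene_first:.2f}")  # ~0.50
print(f"P(instance from scene A), flattened:   {flattened:.2f}")    # ~0.83 (10/12)
```

Under scene-first sampling, an instance in scene A is drawn with probability 1/20 versus 1/4 for one in scene B; flattening gives every instance the same 1/12 chance.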

Inference / Evaluation Only

Download the following items and unzip them inside the datasets/ directory:

🧠 Training

launch.py is a wrapper around accelerate launch that automatically configures the environment.

PixARMesh uses two-stage training:

  1. Layout prediction
  2. Full autoregressive sequence training

Stage 1 - Layout Prediction

python launch.py train.py --config-name=edgerunner_3d_front_global_obj_pose_w_img_ctx_layout_only

Stage 2 - Full Training

Set model.local_path to the checkpoint path produced by Stage 1.

python launch.py train.py --config-name=edgerunner_3d_front_global_obj_pose_w_img_ctx model.local_path=outputs/edgerunner-3d-front-global-obj-pose-w-img-ctx-layout-only/1/checkpoints/final
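The --config-name and key=value arguments above follow Hydra-style override syntax. As a rough sketch of how a dotted override lands in a nested config (illustrative only; the project's real config loader may differ, and the trainer key here is invented):

```python
# Illustrative: how a Hydra-style "a.b.c=value" override updates a nested
# config dict. Not the project's actual config machinery.
def apply_override(config: dict, override: str) -> dict:
    key, value = override.split("=", 1)  # split on the first '=' only
    parts = key.split(".")
    node = config
    for part in parts[:-1]:             # walk/create intermediate nodes
        node = node.setdefault(part, {})
    node[parts[-1]] = value             # set the leaf value
    return config

config = {"model": {"local_path": None}, "trainer": {"lr": 1e-4}}
apply_override(
    config,
    "model.local_path=outputs/edgerunner-3d-front-global-obj-pose-w-img-ctx-layout-only/1/checkpoints/final",
)
print(config["model"]["local_path"])
```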

📊 Evaluation

Distributed inference is supported via Accelerate.

You may either:

  • Use the pretrained model from HuggingFace
  • Provide a path to a local checkpoint

Object-Level

  1. Inference
accelerate launch --module scripts.infer --model-type edgerunner --run-type obj --checkpoint zx1239856/PixARMesh-EdgeRunner --output outputs/inference
  2. Evaluation
accelerate launch --module scripts.eval_obj --pred-dir outputs/inference/obj/edgerunner/gt_layout_gt_mask_pred_depth --save-dir outputs/evaluations-obj/edgerunner

Scene-Level

  1. Inference
accelerate launch --module scripts.infer --model-type edgerunner --run-type scene --checkpoint zx1239856/PixARMesh-EdgeRunner --output outputs/inference
  2. Compose Scene Meshes
python -m scripts.compose_scene --pred-dir outputs/inference/scene/edgerunner/pred_layout_pred_mask_pred_depth
  3. Evaluation
accelerate launch --module scripts.eval_scene --pred-dir outputs/inference/scene/edgerunner/pred_layout_pred_mask_pred_depth/scenes --save-dir outputs/evaluation-scene/edgerunner
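The three scene-level steps can be chained in a small driver script. The sketch below only assembles the argument lists (paths and flags copied from the steps above) and leaves actual execution commented out, since the runs are long and GPU-bound:

```python
# Sketch of a driver for the three scene-level steps. Commands are
# assembled as argument lists; uncomment subprocess.run to execute.
import subprocess  # noqa: F401  (used only if the run calls are enabled)

PRED_DIR = "outputs/inference/scene/edgerunner/pred_layout_pred_mask_pred_depth"

commands = [
    # 1. Distributed inference
    ["accelerate", "launch", "--module", "scripts.infer",
     "--model-type", "edgerunner", "--run-type", "scene",
     "--checkpoint", "zx1239856/PixARMesh-EdgeRunner",
     "--output", "outputs/inference"],
    # 2. Compose scene meshes
    ["python", "-m", "scripts.compose_scene", "--pred-dir", PRED_DIR],
    # 3. Scene-level evaluation
    ["accelerate", "launch", "--module", "scripts.eval_scene",
     "--pred-dir", f"{PRED_DIR}/scenes",
     "--save-dir", "outputs/evaluation-scene/edgerunner"],
]

for cmd in commands:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # enable to run for real
```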

🏷️ License

This repository is released under the CC-BY-SA 4.0 License.

🙏 Acknowledgements

PixARMesh builds upon several excellent open-source projects:

Core libraries and frameworks:

We also use physically-based renderings from the 3D-FRONT scenes provided by InstPIFu, along with additional processed assets from DepR.

📝 Citation

If you find PixARMesh useful in your research, please consider citing:

@article{zhang2026pixarmesh,
  title={PixARMesh: Autoregressive Mesh-Native Single-View Scene Reconstruction},
  author={Zhang, Xiang and Yoo, Sohyun and Wu, Hongrui and Li, Chuan and Xie, Jianwen and Tu, Zhuowen},
  journal={arXiv preprint arXiv:2603.05888},
  year={2026}
}
