4RC (pronounced "ARC") enables unified and complete 4D reconstruction via conditional querying from monocular videos in a single feed-forward pass.
🎇 For more visual results, check out our project page.
Introducing 4RC
We present 4RC, a unified feed-forward framework for 4D reconstruction from monocular videos. Unlike existing methods that typically decouple motion from geometry or produce limited 4D attributes, such as sparse trajectories or two-view scene flow, 4RC learns a holistic 4D representation that jointly captures dense scene geometry and motion dynamics. At its core, 4RC introduces a novel encode-once, query-anywhere and anytime paradigm: a transformer backbone encodes the entire video into a compact spatio-temporal latent space, from which a conditional decoder can efficiently query 3D geometry and motion for any query frame at any target timestamp. To facilitate learning, we represent per-view 4D attributes in a minimally factorized form, decomposing them into base geometry and time-dependent relative motion. Extensive experiments demonstrate that 4RC outperforms prior and concurrent methods across a wide range of 4D reconstruction tasks.
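To make the paradigm concrete, here is a minimal sketch of that interface. This is not the released model code: the class and method names (`ARCSketch`, `encode`, `query`) are hypothetical and the encoder/decoder internals are elided; it only mirrors the encode-once, query-anywhere-and-anytime flow and the base-plus-relative-motion factorization described above.

```python
import torch
import torch.nn as nn

class ARCSketch(nn.Module):
    """Hypothetical sketch of the encode-once, query-anytime interface.

    `encoder` and `decoder` stand in for the transformer backbone and
    conditional decoder described above; their internals are elided.
    """

    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def encode(self, video: torch.Tensor) -> torch.Tensor:
        # Run once per clip: compress the whole video into a compact
        # spatio-temporal latent that all subsequent queries share.
        return self.encoder(video)

    def query(self, latent: torch.Tensor, frame_idx: int, t: float):
        # Query any frame at any target timestamp against the cached
        # latent. Per-view 4D attributes come out minimally factorized:
        # base geometry plus time-dependent relative motion.
        base_geometry, relative_motion = self.decoder(latent, frame_idx, t)
        return base_geometry + relative_motion
```

The point of the factorization is that the base geometry is predicted once per view, while only the motion term varies with the queried timestamp.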
- [2026/04/13] Our inference code and weights are released!
- Clone Repo

  ```bash
  git clone https://github.com/Luo-Yihang/4RC
  cd 4RC
  ```

- Create Conda Environment

  ```bash
  conda create -n 4rc python=3.11 cmake=3.14.0 -y
  conda activate 4rc
  ```

- Install Python Dependencies

  Important: Install Torch based on your CUDA version. For example, for Torch 2.8.0 + CUDA 12.6:

  ```bash
  # Install Torch
  pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu126
  # Install other dependencies
  pip install -r requirements.txt
  # Install 4RC as a package
  pip install -e .
  ```
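To sanity-check that the installed Torch wheel actually matches your CUDA runtime before moving on, you can run a quick check with standard PyTorch calls (nothing 4RC-specific):

```python
import torch

# Confirm the installed wheel and that it sees your GPU.
print(torch.__version__)            # e.g. 2.8.0+cu126
print(torch.cuda.is_available())    # should be True on a CUDA machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```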
You can now try 4RC with the following code. The checkpoint will be downloaded automatically from Hugging Face.
```python
import torch

from arc.models.arc.arc import Arc
from arc.dust3r.inference_multiview import inference
from arc.dust3r.utils.image import load_images

device = "cuda" if torch.cuda.is_available() else "cpu"

model = Arc.from_pretrained("Luo-Yihang/4RC").to(device)
model.eval()

example_dir = "examples/robot_arm"
images = load_images(example_dir, size=512, patch_size=14, verbose=True)

with torch.no_grad():
    predictions, profiling = inference(
        images,
        model,
        device,
        dtype="bf16-mixed",
        profiling=True,
        verbose=True,
        use_center_as_anchor=False,
    )
```
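The exact contents of `predictions` depend on the checkpoint and release, so rather than guessing key names, the generic walk below (plain Python/PyTorch, continuing from the snippet above, with no assumptions about 4RC's output schema) simply reports what came back:

```python
import torch

def describe(obj, prefix="predictions"):
    # Recursively print tensor shapes in whatever nested structure
    # inference() returned; makes no assumptions about 4RC's key names.
    if isinstance(obj, torch.Tensor):
        print(f"{prefix}: shape={tuple(obj.shape)}, dtype={obj.dtype}")
    elif isinstance(obj, dict):
        for k, v in obj.items():
            describe(v, f"{prefix}[{k!r}]")
    elif isinstance(obj, (list, tuple)):
        for i, v in enumerate(obj):
            describe(v, f"{prefix}[{i}]")
    else:
        print(f"{prefix}: {type(obj).__name__}")

describe(predictions)
```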
Launch the interactive Gradio demo:

```bash
python app.py
```

For the command-line workflow without the Gradio UI, use the two-step pipeline:
Step 1: Run inference and save to .npz:
```bash
python inference.py --input ./examples/robot_arm --save result.npz
```

[Optional]
- Use `--refine_track_visualization` to enable VLA + SAM2 to auto-segment dynamic objects and filter their trajectories for better visualization.
- Use `--checkpoint_dir Luo-Yihang/4RC_geofinetune` to use the checkpoint finetuned on more geometry datasets for even better geometry prediction. Both options can be combined, as shown below.
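For example, combining both documented flags in a single run (same example input as above):

```bash
python inference.py --input ./examples/robot_arm --save result.npz \
    --refine_track_visualization \
    --checkpoint_dir Luo-Yihang/4RC_geofinetune
```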
Step 2: Visualize with viser directly from .npz:
```bash
python arc/viz/viser_visualizer_track.py --npz_path result.npz --port 8020
```

Open http://localhost:8020 in your browser to interact with the 3D visualization.
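If you want to inspect the saved arrays without launching the viewer, the `.npz` file opens with plain NumPy; the snippet below just enumerates whatever arrays `inference.py` stored (their names are not assumed here):

```python
import numpy as np

# List every array saved in the results file, with shape and dtype.
data = np.load("result.npz")
for name in data.files:
    print(name, data[name].shape, data[name].dtype)
```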
```
4RC/
├── arc/
│   ├── models/
│   │   └── arc/
│   ├── dust3r/
│   ├── croco/
│   └── viz/
├── assets/
├── examples/
├── app.py
├── inference.py
├── requirements.txt
├── setup.py
└── README.md
```
🐎 Pushing the bandwidth limit!
- Release evaluation code.
- Release training code.
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@article{luo20264rc,
  title   = {4RC: 4D Reconstruction via Conditional Querying Anytime and Anywhere},
  author  = {Yihang Luo and Shangchen Zhou and Yushi Lan and Xingang Pan and Chen Change Loy},
  journal = {arXiv preprint arXiv:2602.10094},
  year    = {2026}
}
```

We recognize several concurrent works on 4D reconstruction and encourage you to check them out:
St4RTrack | TraceAnything | V-DPM | Any4D | D4RT
4RC is built on the shoulders of several outstanding open-source projects. Many thanks to:
DA3 | VGGT | Fast3R | DUSt3R | Viser
If you have any questions, please feel free to reach us at luo_yihang@outlook.com.