Building the data foundation of physical intelligence.
ropedia / capture / structure / deliver / deploy
10M+ real-world interactions, 2.5M downloads on Hugging Face. Ropedia builds the data infrastructure for physical AI — capturing, structuring, and delivering machine-usable data for robotics and world models.
Xperience 10M
The largest in-the-wild multi-modal 4D human experience dataset for embodied AI — open for the research community.
downloads 2.5M total · growing
modalities RGB · depth · IMU · MoCap · pose
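Each Xperience sample bundles the five modalities above into one time-stamped record; a minimal sketch in Python (field names and shapes are illustrative assumptions, not the published dataset schema):

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Frame:
    """One multi-modal sample. Field names and shapes are illustrative
    assumptions, not the actual Xperience 10M format."""
    timestamp_us: int   # capture time, microseconds
    rgb: np.ndarray     # (H, W, 3) uint8 image
    depth: np.ndarray   # (H, W) float32 depth map, meters
    imu: np.ndarray     # (6,) accel xyz + gyro xyz
    mocap: np.ndarray   # (J, 3) MoCap marker positions
    pose: np.ndarray    # (4, 4) camera-to-world transform

    def modalities(self) -> list[str]:
        """Names of the sensor channels each frame carries."""
        return ["rgb", "depth", "imu", "mocap", "pose"]
```

A downstream consumer can then iterate frames uniformly regardless of which capture rig produced them.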
// pipeline ──────────────────────────────────────────
[capture]──→[sync]──→[annotate]──→[4D-pack]──→[index]
    │         │          │            │          │
 in-wild     50µs     spatial       ~1PB       open
 deploy      sync     models        stored     access
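The five stages above compose left to right; a minimal sketch, with each stage body stubbed out (names and record fields are illustrative — the real stages do sensor I/O, 50 µs time sync, model-based annotation, 4D packing, and indexing):

```python
from functools import reduce

# Each stage maps a list of records to a list of records. These bodies are
# placeholders that tag the record so the data flow is visible end to end.
def capture(recs):  return [dict(r, captured=True) for r in recs]
def sync(recs):     return [dict(r, synced=True) for r in recs]
def annotate(recs): return [dict(r, labels=["hand", "object"]) for r in recs]
def pack_4d(recs):  return [dict(r, packed=True) for r in recs]
def index(recs):    return [dict(r, indexed=True) for r in recs]

PIPELINE = [capture, sync, annotate, pack_4d, index]

def run(recs):
    """Apply the stages left to right, as in the diagram above."""
    return reduce(lambda acc, stage: stage(acc), PIPELINE, recs)
```

Keeping every stage the same shape (records in, records out) is what lets new capture hardware or annotation models slot in without touching the rest of the pipeline.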
// HOMIE — EGO-CENTRIC IN-THE-WILD CAPTURE
One device.
Internet-scale capture.
Head-mounted, ego-centric capture for in-the-wild deployment. Multi-modal spatial and interaction sensing with auto-annotation via spatial foundation models. Lightweight, all-day, works anywhere.
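Multi-modal sensing only pays off if the streams line up in time; a minimal nearest-timestamp alignment sketch in NumPy (an assumed post-hoc helper, not HOMIE's on-device sync, whose budget the pipeline states as 50 µs):

```python
import numpy as np


def align_streams(rgb_ts, imu_ts, tol_us=50):
    """Pair each RGB timestamp with its nearest IMU timestamp (both sorted,
    in microseconds), keeping only pairs within tol_us of each other.
    Returns an (N, 2) array of (rgb_index, imu_index) pairs."""
    idx = np.searchsorted(imu_ts, rgb_ts)
    idx = np.clip(idx, 1, len(imu_ts) - 1)
    left, right = imu_ts[idx - 1], imu_ts[idx]
    # Pick whichever neighbor is closer, then enforce the tolerance.
    nearest = np.where(rgb_ts - left <= right - rgb_ts, idx - 1, idx)
    keep = np.abs(imu_ts[nearest] - rgb_ts) <= tol_us
    return np.stack([np.nonzero(keep)[0], nearest[keep]], axis=1)
```

Frames whose nearest IMU reading drifts past the tolerance are simply dropped rather than interpolated, which keeps the surviving pairs trustworthy.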
// R-DOME — HAND-OBJECT INTERACTION TRACKING
Capture context at the source.
Precision tracking for dexterous manipulation data. High-density multi-view capture with programmable lighting — at a fraction of the cost of comparable lab setups.
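One standard way multi-view capture recovers 3D hand-object tracks is linear (DLT) triangulation; a minimal two-view sketch of the textbook method, not Ropedia's actual solver:

```python
import numpy as np


def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: (3, 4) camera projection matrices; x1, x2: 2D observations
    of the same point in each view."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous point is the right singular vector with the smallest
    # singular value, i.e. the (near-)null space of A.
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]  # dehomogenize
```

With a high-density dome, the same linear system simply gains two rows per extra camera, so more views tighten the estimate rather than complicating the solver.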
// INFRA
Build the future of physical intelligence.
✓ Embodied AI · ✓ Robotic Systems · ✓ World Models