AMAP-ML/Omni-WorldBench
Omni-WorldBench: Towards a Comprehensive Interaction-Centric Evaluation for World Models

🔥 Updates

  • [2026/03] Omni-WorldBench is available on arXiv!

📣 Overview

Video-based world models have emerged along two dominant paradigms: video generation and 3D reconstruction. However, existing evaluation benchmarks either focus narrowly on visual fidelity and text–video alignment for generative models, or rely on static 3D reconstruction metrics that fundamentally neglect temporal dynamics. We argue that the future of world modeling lies in 4D generation, which jointly models spatial structure and temporal evolution. In this paradigm, the core capability is interactive response: the ability to faithfully reflect how interaction actions drive state transitions across space and time. Yet no existing benchmark systematically evaluates this critical dimension. To address this gap, we propose Omni-WorldBench, a comprehensive benchmark specifically designed to evaluate the interactive response capabilities of world models in 4D settings. Omni-WorldBench comprises two key components: Omni-WorldSuite, a systematic prompt suite spanning diverse interaction levels and scene types; and Omni-Metrics, an agent-based evaluation framework that quantifies world modeling capabilities by measuring the causal impact of interaction actions on both final outcomes and intermediate state evolution trajectories. We conduct extensive evaluations of 18 representative world models across multiple paradigms. Our analysis reveals critical limitations of current world models in interactive response, providing actionable insights for future research. Omni-WorldBench will be publicly released to foster progress in interactive 4D world modeling.

📊 Evaluation Results

Gallery

Prompt: A baseball player, standing on the field, throws a baseball as high and as far as he can with all his might.

Wan2.2-1.mp4
Wan2.1-1.mp4
Cosmoso-1.mp4
OpenSora-1.mp4
CogVideo-1.mp4
HunyuanVideo-1.mp4

Prompt: A robotic arm with a black gripper hovers above a white tabletop displaying various packaged foods (including a plastic bag of bread and a bag of snacks). The background shows shelves and a metal grid structure; the robotic arm then grasps a bag of potato chips and places it into the blue shopping basket on the right.

Wan2.2-2.mp4
Wan2.1-2.mp4
Cosmoso-2.mp4
OpenSora-2.mp4
CogVideo-2.mp4
HunyuanVideo-2.mp4

Prompt: Camera View Trajectory-Forward: The camera glides steadily forward along an ancient library corridor, flanked by towering bookshelves.

HunyuanWorld-1.mp4
Gen3c-1.mp4
FantasyWorld-1.mp4
HunyuanGameCraft-1.mp4
Lingbot-1.mp4
ViewCrafter-1.mp4

Prompt: Camera View Trajectory-Pan Left: Turning left in the center of the basketball court, looking at the surrounding spectator seats.

HunyuanWorld-2.mp4
Gen3c-2.mp4
FantasyWorld-2.mp4
HunyuanGameCraft-2.mp4
Lingbot-2.mp4
ViewCrafter-2.mp4

Quantitative Results

Omni-WorldBench Leaderboard

✏️ Citation

If you find our repository useful for your research, please consider citing our paper:

@article{wu2026omniworldbenchcomprehensiveinteractioncentricevaluation,
title={Omni-WorldBench: Towards a Comprehensive Interaction-Centric Evaluation for World Models},
author={Meiqi Wu and Zhixin Cai and Fufangchen Zhao and Xiaokun Feng and Rujing Dang and Bingze Song and Ruitian Tian and Jiashu Zhu and Jiachen Lei and Hao Dou and Jing Tang and Lei Sun and Jiahong Wu and Xiangxiang Chu and Zeming Liu and Kaiqi Huang},
journal={arXiv preprint arXiv:2603.22212},
year={2026}
}
