Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents.
300 human-verified tasks | 2,159 rubrics | 9 categories | Completion · Safety · Robustness.
Browse the full leaderboard and individual task cases at claw-eval.github.io.
Evaluation Logic (Updated March 2026):
- Primary Metric: Pass^3. To eliminate "lucky runs," a model must pass a task consistently across three independent trials ($N=3$) to earn success credit.
- Strict Pass Criterion: Under the Pass^3 methodology, a task is marked as passed only if the model meets the success criteria in all three runs.
- Reproducibility: We are committed to end-to-end reproducibility. Our codebase is currently being audited to ensure all benchmark results on the leaderboard can be verified by the community.
- Handling API Instability: In the event of execution errors caused by network or API fluctuations, we manually re-trigger the evaluation to ensure exactly 3 trajectories are successfully generated.
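The Pass^3 rule above can be sketched as a small scoring helper. This is an illustrative sketch only, not the harness's actual API; `pass_cubed` and `benchmark_score` are hypothetical names.

```python
def pass_cubed(trial_outcomes: list[bool], n: int = 3) -> bool:
    """Strict Pass^3: a task earns credit only if all n trials succeed."""
    if len(trial_outcomes) != n:
        # Failed API/network runs are re-triggered so exactly n trajectories exist.
        raise ValueError(f"expected exactly {n} trials, got {len(trial_outcomes)}")
    return all(trial_outcomes)


def benchmark_score(results: dict[str, list[bool]]) -> float:
    """Fraction of tasks that pass under Pass^3."""
    return sum(pass_cubed(trials) for trials in results.values()) / len(results)


# One lucky run is not enough: task_002 fails despite passing twice.
demo = {
    "task_001": [True, True, True],   # passes
    "task_002": [True, False, True],  # fails under Pass^3
}
```

Under the older pass@1-style scoring, task_002 above would have counted; under Pass^3 it does not, which is exactly the "lucky run" being eliminated.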
- v1.1.0 — 300 human-verified tasks in 9 categories: agents perceive, reason, create, and deliver.
- v1.0.0 — Built on reproducible real-world complexity.
- v0.0.0 — From chatbot to real world. (2026.3)
300 tasks across 3 splits and 9 categories, each task with human-verified rubrics.
| Split | Count | Description |
|---|---|---|
| general | 161 | Core agent tasks across communication, finance, ops, productivity, etc. |
| multimodal | 101 | Perception and creation — webpage generation, video QA, document extraction, etc. |
| multi_turn | 38 | Conversational tasks with simulated user personas for clarification and advice |
Agents are graded on three dimensions through full-trajectory auditing:
- Completion — did the agent finish the task?
- Safety — did it avoid harmful or unauthorized actions?
- Robustness — does it pass consistently across multiple trials?
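One way to picture how the dimensions interact (an illustrative sketch; the real full-trajectory auditing is more fine-grained than two booleans): completion and safety are judged per trajectory, while robustness aggregates across the trials.

```python
from dataclasses import dataclass


@dataclass
class TrajectoryAudit:
    completion: bool  # did the agent finish the task?
    safety: bool      # did it avoid harmful or unauthorized actions?


def robust_pass(trials: list[TrajectoryAudit]) -> bool:
    """Robustness: credit only if every trial is both complete and safe."""
    return all(t.completion and t.safety for t in trials)


# A single unsafe trial sinks the task, even though all trials completed.
trials = [
    TrajectoryAudit(completion=True, safety=True),
    TrajectoryAudit(completion=True, safety=False),
    TrajectoryAudit(completion=True, safety=True),
]
```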
Available on Hugging Face: claw-eval/Claw-Eval
| Field | Type | Description |
|---|---|---|
| task_id | string | Unique task identifier |
| query | string | Task instruction / description |
| fixture | list[string] | Fixture files required (available in data/fixtures.tar.gz) |
| language | string | en or zh |
| category | string | Task domain |
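For reference, a record with the schema above can be modeled as a simple Python structure. The field names come from the table; the example values are hypothetical, not taken from the dataset.

```python
from dataclasses import dataclass


@dataclass
class ClawEvalTask:
    task_id: str        # unique task identifier
    query: str          # task instruction / description
    fixture: list[str]  # fixture files, shipped in data/fixtures.tar.gz
    language: str       # "en" or "zh"
    category: str       # task domain


# Hypothetical record for illustration only.
example = ClawEvalTask(
    task_id="demo_0001",
    query="Summarize the attached quarterly report into three bullet points.",
    fixture=["demo_0001/report.pdf"],
    language="en",
    category="productivity",
)
```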
We recommend using uv for fast, reliable dependency management:

```shell
pip install uv
uv venv --python 3.11
source .venv/bin/activate
```

Prepare your keys and set up the environments with one command:
```shell
export OPENROUTER_API_KEY=sk-or-...
export SERP_DEV_KEY=...  # required for tasks that need real web search
bash scripts/test_sandbox.sh
```

Note on video fixtures: due to file size limits, this GitHub repository does not include the video files for video-related tasks. The complete fixtures (including all videos) are available on Hugging Face: claw-eval/Claw-Eval.
Note on grading: we use gemini-3-flash as the grader for general and multimodal tasks, and claude-opus-4.6 as both the grader and the user agent for multi_turn tasks.
Let's rock 🚀
```shell
# For each split, use the matching config:
# config_general.yaml / config_multimodal.yaml / config_user_agent.yaml
claw-eval batch --config model_configs/claude_opus_46.yaml --sandbox --trials 3 --parallel 16
```

- More real-world, multimodal tasks in complex productivity environments
- Comprehensive, fine-grained scoring logic with deep state verification
- Enhanced sandbox isolation and full-trace tracking for transparent, scalable evaluation
We welcome any kind of contribution. Let us know if you have any suggestions!
Our test cases are built on the work of the community. We draw from and adapt tasks contributed by OpenClaw, PinchBench, OfficeQA, OneMillion-Bench, Finance Agent, and Terminal-Bench 2.0.
Bowen Ye (PKU), Rang Li (PKU), Qibin Yang (PKU), Zhihui Xie (HKU), Yuanxin Liu (PKU), Linli Yao (PKU), Hanglong Lyu (PKU), Lei Li (HKU, project lead)
Tong Yang (PKU), Zhifang Sui (PKU), Lingpeng Kong (HKU), Qi Liu (HKU)
If you use Claw-Eval in your research, please cite:
```bibtex
@article{ye2026claw,
  title={Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents},
  author={Ye, Bowen and Li, Rang and Yang, Qibin and Liu, Yuanxin and Yao, Linli and Lv, Hanglong and Xie, Zhihui and An, Chenxin and Li, Lei and Kong, Lingpeng and others},
  journal={arXiv preprint arXiv:2604.06132},
  year={2026}
}
```

This project is released under the MIT License.