
Action-Sketcher: From Reasoning to Action via Visual Sketches for Long-Horizon Robotic Manipulation

Make intent visible. Make action reliable.

arXiv   Project Homepage   Benchmark   Weights

🔥 Overview

Action-Sketcher operates in a See-Think-Sketch-Act loop: a foundation model first performs temporal and spatial reasoning to decompose a high-level instruction (e.g., "Clean the objects on the table") into a subtask and a corresponding Visual Sketch. This sketch, composed of primitives such as points, boxes, and arrows, serves as an explicit, human-readable plan that guides a low-level policy to generate robust action sequences. This methodology enables three key capabilities: long-horizon planning through task decomposition, explicit spatial reasoning by grounding instructions in scene geometry, and seamless human-in-the-loop adaptability via direct sketch correction and intent supervision.
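To make the sketch representation concrete, here is a minimal, hypothetical encoding of those primitives in Python. All class and field names are illustrative assumptions on our part; the repository's actual data structures have not been released yet and may differ.

```python
# Illustrative sketch of the Visual Sketch primitives described above
# (points, boxes, arrows). NOTE: these names are assumptions, not the
# repository's actual API.
from dataclasses import dataclass, field


@dataclass
class Point:
    """A 2D image-plane point, e.g., a grasp target."""
    x: float
    y: float


@dataclass
class Box:
    """An axis-aligned bounding box grounding an object in the scene."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float


@dataclass
class Arrow:
    """A directed motion cue from a start point to an end point."""
    start: Point
    end: Point


@dataclass
class VisualSketch:
    """One subtask's plan: primitives overlaid on the current observation."""
    subtask: str  # e.g., "pick up the red cup"
    points: list[Point] = field(default_factory=list)
    boxes: list[Box] = field(default_factory=list)
    arrows: list[Arrow] = field(default_factory=list)
```

Because the sketch is just a small set of typed primitives, a human can read it at a glance and edit it directly, which is what enables the human-in-the-loop correction described above.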

🗞️ News

  • 2026-01-05: ✨ Code, dataset, and weights are coming soon! Stay tuned for updates.
  • 2026-01-05: 🔥 We released the project page of Action-Sketcher.

🎯 TODO

  • Release the model checkpoints and inference code (about 3 weeks).
  • Release the full dataset and training code (about 1 month).
  • Release the dataset generation pipeline and GUI tools (about 1 month or more).

🤖 Method

The Action-Sketcher framework is model-agnostic and can be integrated with any vision-language-action (VLA) model through an event-driven loop that (i) summarizes the next subtask, (ii) emits a compact Visual Sketch (points, boxes, arrows, relations) that externalizes spatial intent, and (iii) synthesizes an action chunk conditioned on that sketch and the robot state. This explicit intermediate representation supports targeted supervision, on-the-fly correction, and reliable long-horizon execution within a single-model architecture.
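The following is a minimal sketch of how such an event-driven loop could be wired up, written under stated assumptions: the `model`, `policy`, and `robot` objects and every method on them (`plan_subtask`, `draw_sketch`, `generate_actions`, ...) are hypothetical placeholders, since the official interfaces have not been released.

```python
# A minimal, assumed rendering of the See-Think-Sketch-Act loop described
# above. All object and method names here are hypothetical placeholders.
def see_think_sketch_act(model, policy, robot, instruction, max_steps=50):
    """Run the event-driven loop until the model reports task completion."""
    for _ in range(max_steps):
        # See: capture the current observation and proprioceptive state.
        image = robot.get_camera_image()
        state = robot.get_state()

        # Think: summarize the next subtask from the instruction and scene.
        subtask = model.plan_subtask(instruction, image)
        if subtask is None:  # the model signals that the task is complete
            break

        # Sketch: emit a compact Visual Sketch (points, boxes, arrows)
        # that externalizes spatial intent. A human could inspect and
        # correct the sketch at this point before execution.
        sketch = model.draw_sketch(subtask, image)

        # Act: synthesize an action chunk conditioned on the sketch and
        # the robot state, then execute it.
        for action in policy.generate_actions(image, sketch, state):
            robot.execute(action)
```

The sketch step is the natural intervention point: because the plan is externalized before any action is taken, corrections can be applied to the sketch rather than to opaque model internals.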

(Figure: method overview.)

✨ Experiments

(Figure: experimental results.)

📑 Citation

If you find our work helpful, feel free to cite it:

```bibtex
@article{tan2026action,
  title={Action-Sketcher: From Reasoning to Action via Visual Sketches for Long-Horizon Robotic Manipulation},
  author={Tan, Huajie and Co, Peterson and Xu, Yijie and Rong, Shanyu and Ji, Yuheng and Chi, Cheng and others},
  journal={arXiv preprint arXiv:2601.01618},
  year={2026}
}
```
