[Feature]: Invoke tools directly without LLM round-trip #37131

@ChrisJaunes

Description

Summary

Add support for invoking OpenClaw tools (exec, read, write, message, MCP skills, etc.) directly from a chat message or slash command, bypassing the LLM inference step entirely.

Problem to solve

Currently, every tool call in OpenClaw must go through an LLM turn: the user sends a message → the model decides to call a tool → the tool executes → the model summarizes the result. This round-trip is unnecessary for deterministic, well-defined operations where the intent is unambiguous.
Common pain points:

  • Running a known shell command ("show me disk usage") burns tokens just to trigger a one-liner
  • Calling a specific MCP skill tool with fixed inputs requires a full LLM turn
  • Scheduled cron jobs using systemEvent can inject text, but can't directly fire a tool
  • Power users who know exactly which tool they want are forced to pay the latency and cost of an LLM turn that adds no value

Proposed solution

Introduce a direct tool invocation path — a way to call a tool with explicit inputs, skipping model inference entirely. Possible surfaces:

  1. Slash command syntax:
    /tool exec command="df -h"
    /tool read path="/etc/hosts"
    /tool message target="alice" message="hello"
  2. CLI flag:
    openclaw agent --tool exec --args '{"command": "df -h"}'
  3. Cron payload kind:
    {
      "kind": "toolCall",
      "tool": "exec",
      "args": { "command": "df -h" }
    }

The result could be returned directly to the session as a synthetic tool result (injected into conversation history), or delivered to the channel as a formatted reply.

Use cases

  • Run shell commands on a schedule without burning tokens on LLM inference
  • Directly invoke a specific MCP skill tool (e.g., weather lookup, KM search) with known parameters
  • Build lightweight automations where the tool call is deterministic and intent is explicit
  • Reduce latency and API cost for high-frequency or time-sensitive operations
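
For the slash-command surface, the parsing step could look roughly like the sketch below. All names (`parseToolCommand`, the `ToolCall` shape) are hypothetical, not existing OpenClaw API; it only illustrates turning `/tool exec command="df -h"` into a tool name plus an args record.

```typescript
// Hypothetical parser for the proposed /tool syntax. Illustrative only:
// a real implementation would need escaping rules and input validation.
type ToolCall = { tool: string; args: Record<string, string> };

function parseToolCommand(input: string): ToolCall | null {
  // Match "/tool <name> <rest>"; anything else is not a direct invocation.
  const m = input.match(/^\/tool\s+(\S+)\s*(.*)$/);
  if (!m) return null;
  const [, tool, rest] = m;
  const args: Record<string, string> = {};
  // Collect key="value" pairs; quoted values may contain spaces.
  for (const pair of rest.matchAll(/(\w+)="([^"]*)"/g)) {
    args[pair[1]] = pair[2];
  }
  return { tool, args };
}
```

With this shape, `/tool read path="/etc/hosts"` parses to `{ tool: "read", args: { path: "/etc/hosts" } }`, and non-`/tool` messages fall through to the normal LLM path by returning `null`.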

Alternatives considered

  • Thin system prompt + forced tool call: Works but still requires a full LLM turn and wastes tokens on a trivial decision
  • Cron agentTurn with explicit instructions: Closest current workaround, but spins up a full isolated session for what should be a single function call
  • Direct shell / API calls outside OpenClaw: Bypasses the tool ecosystem entirely and loses observability, auth handling, and channel delivery
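
In contrast to those workarounds, the direct path amounts to routing the `toolCall` payload straight to a registered tool handler. A minimal sketch, assuming a hypothetical registry; the handler, payload, and function names are illustrative, not OpenClaw's actual internals:

```typescript
// Sketch of a direct-dispatch path: no model turn anywhere in this flow.
type ToolHandler = (args: Record<string, unknown>) => string;

const registry = new Map<string, ToolHandler>();

// Stub handler: a real "exec" tool would shell out and capture output.
registry.set("exec", (args) => `ran: ${String(args.command)}`);

interface ToolCallPayload {
  kind: "toolCall";
  tool: string;
  args: Record<string, unknown>;
}

// Looks up the tool and invokes it with explicit args, skipping inference.
function dispatch(payload: ToolCallPayload): string {
  const handler = registry.get(payload.tool);
  if (!handler) throw new Error(`unknown tool: ${payload.tool}`);
  return handler(payload.args);
}
```

Because dispatch goes through the same registry as model-initiated calls, observability, auth handling, and channel delivery would be preserved, which is exactly what direct shell calls outside OpenClaw lose.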

Impact

  • Affected users: Power users, automation builders, cron-heavy deployments
  • Frequency: Every time a user wants a deterministic action that doesn't need AI reasoning
  • Consequence: Unnecessary token cost, added latency, and cognitive overhead of writing "please run X" instead of just running X

Evidence/examples

No response

Additional information

No response

Metadata

Assignees

No one assigned

    Labels

    enhancement (New feature or request)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
