## Problem
LLMs frequently wrap output in markdown code fences (triple backticks). When an LLM outputs a `MEDIA:` token inside a code block, `splitMediaFromOutput()` correctly skips it (by design: you don't want code examples treated as media). However, this creates a silent failure mode that's very hard to debug:
````
Here's the screenshot:

```
MEDIA:/home/user/media/screenshot.png
```
````
The user sees the raw `MEDIA:` path as plain text instead of receiving the file. There's no error and no warning; the media just doesn't get delivered.
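The silent skip can be illustrated with a minimal sketch of the fence-skipping behavior described above. This is a hypothetical reimplementation for illustration, not OpenClaw's actual `splitMediaFromOutput()` code:

```typescript
// Hypothetical sketch: split MEDIA: tokens out of LLM output,
// deliberately ignoring any token that sits inside a code fence.
function splitMediaFromOutputSketch(text: string): { media: string[]; text: string } {
  const media: string[] = [];
  const kept: string[] = [];
  let inFence = false;
  for (const line of text.split("\n")) {
    if (line.trim().startsWith("```")) {
      inFence = !inFence; // toggle fence state on each ``` line
      kept.push(line);
      continue;
    }
    if (!inFence && line.trim().startsWith("MEDIA:")) {
      media.push(line.trim().slice("MEDIA:".length)); // extracted for delivery
      continue;
    }
    // Inside a fence, a MEDIA: line passes through as plain text:
    // no extraction, no error, no warning.
    kept.push(line);
  }
  return { media, text: kept.join("\n") };
}
```

With this logic, `splitMediaFromOutputSketch("```\nMEDIA:/tmp/shot.png\n```")` returns an empty `media` array and echoes the path back as text, which is exactly the failure mode reported here.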
## Why This Is a Problem
- **LLMs naturally format paths in code blocks.** Even with explicit instructions not to, models like GPT-4, Claude, GLM, and others frequently wrap file paths in backticks or code fences. This is especially likely when workspace docs show `MEDIA:` examples inside code fences (as the default TOOLS.md template does).
- **Silent failure.** The user has no indication that media delivery failed; it just looks like the agent sent a path as text. There's no log warning either.
- **New installs have no way to know this.** Nothing in the default workspace files or documentation warns about this behavior. Users discover it only after debugging why media isn't being delivered.
## Suggested Fixes (any or all)
- **Extract `MEDIA:` tokens from code fences too.** A `MEDIA:` token is unlikely to appear in legitimate code examples. If it's on its own line inside a code block and points to a valid file, it's almost certainly intended as media.
- **Log a warning when a `MEDIA:` token is detected inside a code fence.** Something like: "Skipped MEDIA token inside code fence; if this was intentional, remove the code formatting." This would at least make the failure visible.
- **Add a note to the default workspace templates** (TOOLS.md or similar) warning that `MEDIA:` tokens must not be wrapped in code fences or backticks.
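The first two fixes could be combined in one pass. A rough sketch, assuming a line-oriented parser; the function name `extractMediaIncludingFences` and the warning text are illustrative, not OpenClaw's actual API (a real implementation would also verify the path points to an existing file before extracting):

```typescript
// Hypothetical sketch: extract MEDIA: tokens even inside code fences,
// logging a warning when one is found there so the behavior is visible.
function extractMediaIncludingFences(text: string): { media: string[]; text: string } {
  const media: string[] = [];
  const kept: string[] = [];
  let inFence = false;
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (trimmed.startsWith("```")) {
      inFence = !inFence;
      kept.push(line);
      continue;
    }
    if (trimmed.startsWith("MEDIA:")) {
      if (inFence) {
        // Make the previously silent case loud (illustrative wording).
        console.warn(`MEDIA token found inside a code fence; extracting anyway: ${trimmed}`);
      }
      media.push(trimmed.slice("MEDIA:".length));
      continue;
    }
    kept.push(line);
  }
  return { media, text: kept.join("\n") };
}
```

Note that extracting the token can leave an empty fence pair behind in the remaining text; a polished version might also strip fences that end up containing nothing.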
## Environment

- OpenClaw v2026.3.8
- Observed with GLM-4.7-Flash via `openai-completions`, but applies to any LLM