Bug type
Behavior bug (incorrect output/state without crash)
Summary
A provider error (a Jinja template exception) is misclassified as context overflow, producing misleading error messages and a failed auto-compaction loop.
Steps to reproduce
- Configure LM Studio as provider with a Qwen3.x model (e.g. unsloth/qwen3.5-27b or lmstudio-community version)
- Connect OpenClaw via Telegram channel
- Send /new to start a fresh session
- Send any message
Expected behavior
Either the model responds normally, or OpenClaw surfaces the actual underlying error from LM Studio: "Error rendering prompt with jinja template: No user query found in messages."
Actual behavior
OpenClaw reports "Context overflow: prompt too large for the model", triggers auto-compaction, compaction immediately cancels with "no real conversation messages to summarize", and the error repeats on every subsequent message.
OpenClaw version
2026.2.26
Operating system
Ubuntu 24.04
Install method
npm global
Logs, screenshots, and evidence
Session .jsonl inspection showed only 8 lines / 3950 bytes — no real overflow.
Every assistant entry had usage: { input: 0, output: 0 } and stopReason: "error".
Key session entry:
"stopReason":"error","errorMessage":"Error rendering prompt with jinja template: \"No user query found in messages.\""
Gateway log sequence:
- context-overflow-diag: source=assistantError
- context overflow detected (attempt 1/3); attempting auto-compaction
- Compaction safeguard: cancelling compaction with no real conversation messages to summarize.
- auto-compaction failed: Compaction cancelled
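The zero-usage pattern above can be detected mechanically. The sketch below is a hypothetical triage helper (the `SessionEntry` shape simply mirrors the fields visible in the session file; `findProviderErrors` is not an OpenClaw API) that scans session .jsonl content for assistant entries that errored with zero input tokens, i.e. requests that never actually reached the model:

```typescript
// Hypothetical triage helper for session .jsonl content.
// An entry with stopReason "error" and usage.input === 0 indicates the
// provider rejected the request before any prompt tokens were consumed,
// so the raw errorMessage is the real failure, not context overflow.

interface SessionEntry {
  stopReason?: string;
  errorMessage?: string;
  usage?: { input: number; output: number };
}

function findProviderErrors(jsonl: string): string[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as SessionEntry)
    .filter((e) => e.stopReason === "error" && e.usage?.input === 0)
    .map((e) => e.errorMessage ?? "");
}
```

Running this against the 8-line session file from the report would surface the Jinja exception directly instead of the overflow diagnosis.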
Impact and severity
Affected: All users running LM Studio with Qwen3.x models, via any channel.
Severity: High — completely blocks the agent from responding.
Frequency: 100% reproducible, occurs on every new session.
Consequence: Misleading error leads users to investigate context window settings (LM Studio / OpenClaw config) instead of the real cause, wasting significant debugging time.
Additional information
Root cause: Qwen3's Jinja template contains a multi-step tool call guard that raises an exception when no "plain" user message is found. OpenClaw's session startup message triggers this guard. The fix is in OpenClaw's error classification logic: stopReason "error" with input tokens = 0 should never be classified as context overflow — the raw provider errorMessage should be surfaced directly.
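The proposed classification guard can be sketched as follows. All names here (`StopInfo`, `classifyStop`) are illustrative assumptions, not OpenClaw's actual internals; the point is only the ordering of the checks:

```typescript
// Illustrative sketch of the proposed fix, not OpenClaw's real code.
// A request that consumed zero input tokens never reached the model,
// so it cannot have overflowed the context window; surface the raw
// provider error instead of misclassifying it.

interface StopInfo {
  stopReason: string;
  inputTokens: number;
  errorMessage?: string;
}

function classifyStop(info: StopInfo): string {
  if (info.stopReason === "error" && info.inputTokens === 0) {
    // Provider-side failure: pass the original message through.
    return info.errorMessage ?? "unknown provider error";
  }
  if (info.stopReason === "error" && /context|too large/i.test(info.errorMessage ?? "")) {
    return "context-overflow";
  }
  return info.stopReason;
}
```

With this ordering, the Jinja exception (0 input tokens) is surfaced verbatim, while genuine overflow errors (nonzero usage plus an overflow-shaped message) still trigger auto-compaction.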
Workaround: In LM Studio, edit the model's Jinja prompt template and remove the following block:
{%- if ns.multi_step_tool %}
{{- raise_exception('No user query found in messages.') }}
{%- endif %}