Summary
When an upstream LLM provider (in our case LM Studio + Qwen3) returns a Jinja prompt-template rendering error such as:
Error rendering prompt with jinja template: "No user query found in messages."
OpenClaw's `context-overflow-diag` classifies it as a context overflow and tells the user to `Try /reset`. Resetting and replaying the same startup prompt triggers the exact same template error again, leaving the user in a visible infinite loop.
Repro
- Provider: LM Studio (`http://<host>:8888`) serving `qwen/qwen3.6-35b-a3b`
- Agent: any OpenClaw agent (Mattermost in our case)
- Action: send `/reset` (or have the runtime build any startup prompt whose first `role: user` message begins with `System: [...]`)
- Observed gateway log (representative):

      provider=lmstudio/qwen/qwen3.6-35b-a3b
      error_message=Error rendering prompt with jinja template: "No user query found in messages."
      context-overflow-diag: observedTokens=unknown, compactionAttempts=0
      → user message: "prompt too large… Try /reset (or /new)… use a larger-context model."
Note `observedTokens=unknown` and `compactionAttempts=0`: a clear signal that this is not an actual overflow.
Expected behavior
`context-overflow-diag` should recognize provider responses matching `Error rendering prompt with jinja template` (and similar template/serialization errors) and classify them as `provider-template-error` (or similar). The user-facing message should be something like:
The model's prompt template failed to render the messages. This is usually a model/template problem on the provider side, not a context-window issue. Fix the prompt template (e.g. in LM Studio) or switch to another model.
This avoids the misleading `Try /reset` advice that puts the user in a reset loop.
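A minimal sketch of the proposed classification in TypeScript (every name here, `classifyProviderError`, `DiagClass`, the pattern list, is illustrative and not OpenClaw's actual internals):

```typescript
// Diagnostic classes; "provider-template-error" is the proposed new class.
type DiagClass = "context-overflow" | "provider-template-error";

// Error substrings that indicate a provider-side template/serialization
// failure rather than a context-window problem.
const TEMPLATE_ERROR_PATTERNS: RegExp[] = [
  /rendering prompt with jinja template/i,
  /chat template/i,
];

function classifyProviderError(errorMessage: string): DiagClass {
  // Check the raw provider error string before falling back to overflow.
  if (TEMPLATE_ERROR_PATTERNS.some((re) => re.test(errorMessage))) {
    return "provider-template-error";
  }
  return "context-overflow";
}

// The LM Studio error from this report would no longer be treated as overflow:
console.log(
  classifyProviderError(
    'Error rendering prompt with jinja template: "No user query found in messages."'
  )
); // → provider-template-error
```

The exact pattern list would need tuning against real provider error strings, but the point is that the error text is inspected before any overflow advice is emitted.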
Why it matters
The current behavior:
- Misleads the user about the root cause.
- Causes a reset loop, because `/reset` re-sends the same prompt shape that the template can't render.
- Hides genuine provider/template bugs behind a generic overflow message.
Pointers (in installed dist)
`grep -rE "context-overflow-diag|observedTokens|compactionAttempts" /opt/homebrew/lib/node_modules/openclaw/dist`
The classifier seems to fall through to overflow whenever it can't compute a token count, regardless of the underlying error string.
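One hedged sketch of a fix for that fall-through, assuming a diagnostic input shaped like the logged fields (the interface and function names are hypothetical): require positive evidence of overflow, and short-circuit on template errors.

```typescript
// Hypothetical input shape mirroring the logged fields
// (observedTokens, compactionAttempts); the real structure may differ.
interface DiagInput {
  errorMessage: string;
  observedTokens: number | "unknown";
  compactionAttempts: number;
}

function isLikelyOverflow(d: DiagInput): boolean {
  // A template-rendering error is never an overflow, even when the
  // token count is unavailable.
  if (/rendering prompt with .* template/i.test(d.errorMessage)) {
    return false;
  }
  // Without a measured token count or at least one compaction attempt
  // there is no positive evidence of overflow, so do not fall through
  // to the overflow branch.
  return d.observedTokens !== "unknown" || d.compactionAttempts > 0;
}
```

Under this rule, the case from the log above (`observedTokens=unknown`, `compactionAttempts=0`, template error) would not be reported as an overflow.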
Workaround we used
Replacing the model's Jinja chat template with a role-trusting ChatML template (one that does not parse message content and simply emits `<|im_start|>{role}\n{content}<|im_end|>\n` per message) makes the error go away. The diagnostic misclassification on OpenClaw's side is independent of that, though, and should still be fixed.
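For reference, the replacement was essentially the stock ChatML template; this is a sketch, and the exact whitespace handling and special tokens may need adjusting for your model:

```jinja
{% for message in messages %}{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}
```

Because it concatenates role and content verbatim, it cannot raise "No user query found in messages" regardless of how the startup prompt is shaped.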