
fix(context-overflow-diag): jinja template "No user query" errors are misclassified as context overflow #68868

@allenliang2022

Description


Summary

When an upstream LLM provider (in our case LM Studio + Qwen3) returns a Jinja prompt-template rendering error such as:

Error rendering prompt with jinja template: "No user query found in messages."

OpenClaw's context-overflow-diag classifies it as a context overflow and tells the user to Try /reset. Resetting replays the same startup prompt, which triggers the exact same template error again, leaving the user in a visible infinite loop.

Repro

  • Provider: LM Studio (http://<host>:8888) serving qwen/qwen3.6-35b-a3b
  • Agent: any OpenClaw agent (Mattermost in our case)
  • Action: send /reset (or have the runtime build any startup prompt whose first role: user message begins with System: [...])
  • Observed gateway log (representative):
provider=lmstudio/qwen/qwen3.6-35b-a3b
error_message=Error rendering prompt with jinja template: "No user query found in messages."
context-overflow-diag: observedTokens=unknown, compactionAttempts=0
→ user message: "prompt too large… Try /reset (or /new)… use a larger-context model."

Note observedTokens=unknown and compactionAttempts=0: the diagnostic never measured a token count, which is a clear signal that this is not an actual overflow.

Expected behavior

context-overflow-diag should recognize provider responses matching Error rendering prompt with jinja template (and similar template/serialization errors) and classify them as provider-template-error (or similar). The user-facing message should be something like:

The model's prompt template failed to render the messages. This is usually a model/template problem on the provider side, not a context-window issue. Fix the prompt template (e.g. in LM Studio) or switch to another model.

This avoids the misleading Try /reset advice that puts the user in a reset loop.
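To make the expected classification concrete, here is a minimal sketch of the check. All names (`classifyProviderError`, `Diagnosis`, the pattern list) are hypothetical and for illustration only; they are not OpenClaw's actual internals. The key points are that template errors are matched before any token accounting, and that overflow is only reported when a token count was actually measured.

```typescript
// Hypothetical sketch, not OpenClaw's real API: illustrates matching
// template-rendering errors before falling back to overflow.
type Diagnosis = "provider-template-error" | "context-overflow" | "unknown";

// Error strings that indicate a prompt-template/serialization failure
// on the provider side rather than a too-large context.
const TEMPLATE_ERROR_PATTERNS: RegExp[] = [
  /rendering prompt with jinja template/i,
  /failed to (render|apply) (chat )?template/i,
];

function classifyProviderError(
  errorMessage: string,
  observedTokens?: number,
  contextWindow?: number,
): Diagnosis {
  // Template errors win regardless of token accounting.
  if (TEMPLATE_ERROR_PATTERNS.some((re) => re.test(errorMessage))) {
    return "provider-template-error";
  }
  // Only claim overflow when a token count was actually measured and
  // exceeds the window; never as a fallback for "unknown".
  if (
    observedTokens !== undefined &&
    contextWindow !== undefined &&
    observedTokens > contextWindow
  ) {
    return "context-overflow";
  }
  return "unknown";
}
```

With this shape, the log line from the repro (observedTokens=unknown) would classify as provider-template-error and the user would get template-fix advice instead of Try /reset.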

Why it matters

The current behavior:

  1. Misleads the user about the root cause.
  2. Causes a reset loop because /reset re-sends the same prompt shape that the template can't render.
  3. Hides genuine provider/template bugs behind a generic overflow message.

Pointers (in installed dist)

grep -rE "context-overflow-diag|observedTokens|compactionAttempts" /opt/homebrew/lib/node_modules/openclaw/dist

The classifier seems to fall through to overflow whenever it can't compute a token count, regardless of the underlying error string.

Workaround we used

Replacing the model's Jinja chat template with a role-trusting ChatML template (one that does not parse message content, but simply emits <|im_start|>{role}\n{content}<|im_end|>\n per message) makes the error go away. But the diagnostic misclassification on OpenClaw's side is independent and should still be fixed.
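For reference, the role-trusting template we used looks roughly like the following. This is a sketch of the shape described above, not the exact template we shipped; adjust for your model's system-prompt and BOS handling:

```jinja
{%- for message in messages -%}
<|im_start|>{{ message.role }}
{{ message.content }}<|im_end|>
{%- endfor -%}
{%- if add_generation_prompt -%}
<|im_start|>assistant
{%- endif -%}
```

Because it never inspects message content, it cannot fail with "No user query found in messages." regardless of prompt shape.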
