Summary
After updating to recent versions (worked on ~v2026.2.17, broken on v2026.2.19+), every message — including a single "hi?" on a fresh /reset session — triggers:
Context overflow: prompt too large for the model. Try /reset (or /new) to start a fresh session, or use a larger-context model.
The model is openrouter/moonshotai/kimi-k2.5 (262K context). The system prompt is ~26K chars (~6K tokens). This cannot be a real context overflow.
Environment
- OpenClaw version: `coollabsio/openclaw:latest` (Docker, pulled ~2026-02-21)
- Model: `openrouter/moonshotai/kimi-k2.5` (262,144 token context)
- Channel: Discord
- Hosting: Docker on Hetzner VPS via Coolify
- Previously working: Yes, ~v2026.2.17 (last confirmed working)
Evidence that this is NOT a real context overflow
From the sessions.json system prompt report on this exact session:
```json
{
  "systemPrompt": {
    "chars": 26223,
    "projectContextChars": 15040,
    "nonProjectContextChars": 11183
  },
  "inputTokens": 20286,
  "contextTokens": 262144
}
```
- 26K chars system prompt against a 262K token context window
- All 8 workspace files total ~16K chars — none truncated
- Tool schemas: ~18K chars
- Skills metadata: ~2K chars
- After `/reset`, conversation history is empty
20K input tokens vs 262K context = 7.7% utilization. This cannot overflow.
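The utilization figure is simple arithmetic on the values reported in `sessions.json` (a quick sanity check, not OpenClaw code):

```python
# Sanity check: reported input tokens vs. the model's context window,
# using the exact numbers from the sessions.json excerpt above.
input_tokens = 20_286
context_tokens = 262_144

utilization = input_tokens / context_tokens
print(f"{utilization:.1%}")  # 7.7% -- nowhere near the window
```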
Reproduction
- Set model to `openrouter/moonshotai/kimi-k2.5`
- Run `/reset` or `/new` in Discord
- Send any message (e.g., "hi?")
- Immediate "Context overflow: prompt too large for the model" error
Likely root cause
This matches two known issues:
- #10368 (FailoverError on rate limit/billing is misreported as 'Context overflow'): `isContextOverflowError()` / `isLikelyContextOverflowError()` misclassifies rate-limit, billing, or provider errors as "context overflow" due to overly broad regex matching on response text
- #5771 ([Bug]: Context overflow error): the same error detection scans response text rather than API error codes, causing false positives
The actual underlying error is likely a provider-side issue from Chutes (the only OpenRouter provider for Kimi K2.5, running int4 quantized Kimi-K2.5-TEE) being surfaced as a context overflow.
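To illustrate the failure mode described in #10368, here is a hypothetical sketch (the function name, pattern, and error text are assumptions for illustration, not OpenClaw's actual code): a classifier that regex-matches error text will flag any provider failure whose message merely mentions the context window.

```python
import re

# Hypothetical recreation of an overly broad text-based classifier.
# OpenClaw's real isContextOverflowError() may use different patterns.
OVERFLOW_PATTERN = re.compile(r"context|token|prompt too large", re.IGNORECASE)

def is_context_overflow_error(message: str) -> bool:
    # Classifies purely on response text -- the suspected bug.
    return bool(OVERFLOW_PATTERN.search(message))

# A provider-side outage whose message happens to mention "context"
# gets misreported as a context overflow:
provider_error = "upstream provider unavailable for model (262144 context)"
print(is_context_overflow_error(provider_error))  # True -- false positive
```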
Version correlation
- v2026.2.19 introduced ACP prompt payload changes ("bound ACP prompt text payloads to 2 MiB") and provider routing fixes
- v2026.2.21 modified overflow compaction retry budgeting
- The regression appeared somewhere between v2026.2.17 (working) and v2026.2.19+ (broken)
Expected behavior
- The actual error from the provider should be shown (rate limit, billing, provider unavailable, etc.)
- `isContextOverflowError()` should not match non-overflow errors
- If the error genuinely is from the provider, a fallback or clearer message would help
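A more robust classification, sketched below under the assumption that the provider returns an HTTP status and a structured error code (all names here are illustrative, not OpenClaw's or OpenRouter's actual API), would decide on structured fields first and reserve "context overflow" for an explicit overflow code:

```python
def classify_provider_error(status: int, code: "str | None", message: str) -> str:
    """Illustrative: classify by status/code, never by free-text matching."""
    if status == 429:
        return "rate_limit"
    if status == 402 or code == "insufficient_credits":
        return "billing"
    if code == "context_length_exceeded":  # explicit overflow code only
        return "context_overflow"
    if status >= 500:
        return "provider_unavailable"
    return "unknown"

# A 429 is reported as a rate limit even if its message mentions "context":
print(classify_provider_error(429, None, "context window busy"))  # rate_limit
print(classify_provider_error(400, "context_length_exceeded", ""))  # context_overflow
```

With this shape, a Chutes-side failure would surface as `provider_unavailable` instead of being masked as an overflow.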
Request
Could you check what error OpenRouter/Chutes is actually returning? The "context overflow" label appears to be masking the real issue. The fix in PR #12988 (for #10368) may not cover this specific provider error pattern.