Description
When an Ollama model (specifically ollama/glm-5:cloud in my case) hits its internal context limit, OpenClaw shows a cryptic error message in the TUI instead of handling it gracefully:
```
run error: Ollama API error 400: {"StatusCode":400,"Status":"400 Bad Request","error":"prompt too long; exceeded max context length by 4 tokens"}
connected | error
```
Expected Behavior
- Proactive prevention: auto-compact should trigger before the limit is hit. Currently it waits for idle + a 60% usage threshold, but Ollama's internal limit can be reached before OpenClaw's own tracking indicates it's time (see the sketch after this list).
- Graceful degradation: if overflow does occur, show a helpful message like "Context limit reached. Starting fresh conversation..." and auto-compact or reset.
- Recovery: don't leave the session in an "error" state; either auto-compact and retry, or guide the user to start fresh.
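
One way to make prevention proactive would be a pre-flight budget check on every send, rather than an idle-triggered pass. A minimal sketch, assuming hypothetical `Session`, `estimateTokens`, and `compactSession` helpers (none of these are OpenClaw's real internals):

```ts
// Sketch only: compact *before* sending, instead of waiting for idle.
interface Session {
  history: string[];
  contextWindow: number;                    // model's max context, in tokens
  send(prompt: string): Promise<string>;
}

const COMPACT_AT = 0.6;  // compact once ~60% of the window is used
const HARD_CAP = 0.9;    // refuse to send past 90%, leaving headroom
                         // for tokenizer mismatch with Ollama

// Crude stand-in for a real tokenizer; Ollama's count will differ,
// which is exactly why the safety margin is needed.
function estimateTokens(texts: string[]): number {
  return texts.reduce((n, t) => n + Math.ceil(t.length / 4), 0);
}

async function compactSession(session: Session): Promise<void> {
  // Stand-in for real summarizing compaction: keep only recent turns.
  session.history = session.history.slice(-4);
}

async function sendWithPreflight(session: Session, prompt: string): Promise<string> {
  const window = session.contextWindow;
  let used = estimateTokens([...session.history, prompt]);

  if (used > window * COMPACT_AT) {
    await compactSession(session);          // proactive, not idle-triggered
    used = estimateTokens([...session.history, prompt]);
  }
  if (used > window * HARD_CAP) {
    throw new Error("Context limit reached. Run /new to start fresh.");
  }
  return session.send(prompt);
}
```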
Current Behavior
- Shows a cryptic raw API error
- Session appears "broken" until the user manually starts a new conversation
- Auto-compact doesn't trigger because it only runs while the session is idle
Environment
- OpenClaw: latest
- Model: ollama/glm-5:cloud (Ollama Cloud)
- Surface: openclaw-tui (gateway-client)
Root Cause
- OpenClaw's token tracking may not match Ollama's actual context usage
- Auto-compact is reactive (it waits for an idle session) rather than proactive
- There is no graceful handling when the 400 occurs; the raw API error is surfaced as-is (see the classification sketch below)
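
Since the 400 body has a stable shape (see the error above), the TUI could classify it instead of printing it raw. A minimal sketch, assuming a hypothetical `parseOverflow` helper (not an existing OpenClaw or Ollama API):

```ts
// Sketch only: classify Ollama's 400 body instead of surfacing it raw.
// The JSON shape matches the error message shown above.
interface OllamaErrorBody {
  StatusCode: number;
  Status: string;
  error: string;
}

// Returns how many tokens the prompt overflowed by, or null if this
// is some other kind of 400.
function parseOverflow(status: number, rawBody: string): number | null {
  if (status !== 400) return null;
  try {
    const body = JSON.parse(rawBody) as OllamaErrorBody;
    // e.g. "prompt too long; exceeded max context length by 4 tokens"
    const m = /exceeded max context length by (\d+) tokens?/.exec(body.error);
    return m ? Number(m[1]) : null;
  } catch {
    return null;
  }
}

// parseOverflow(400, '{"StatusCode":400,"Status":"400 Bad Request",
//   "error":"prompt too long; exceeded max context length by 4 tokens"}')
// => 4
```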
Related Issues
Suggested Fix
- Detect the "prompt too long" 400 from Ollama, auto-run /compact, and retry the request (see the sketch below)
- If compaction isn't enough, guide the user to /new instead of surfacing the raw error
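
Putting the pieces together, the recovery path could look roughly like this, reusing the hypothetical `Session`, `compactSession`, and `parseOverflow` helpers from the sketches above (the `ApiError` class is likewise illustrative, not a real OpenClaw symbol):

```ts
// Sketch only: on a context-overflow 400, compact and retry once; if the
// retry still fails, raise a friendly message pointing the user at /new.
class ApiError extends Error {
  constructor(public status: number, public body: string) {
    super(`API error ${status}: ${body}`);
  }
}

async function sendWithRecovery(session: Session, prompt: string): Promise<string> {
  try {
    return await session.send(prompt);
  } catch (err) {
    const overflow =
      err instanceof ApiError ? parseOverflow(err.status, err.body) : null;
    if (overflow === null) throw err;       // unrelated failure: keep old path

    await compactSession(session);          // same work as a manual /compact
    try {
      return await session.send(prompt);    // one retry with compacted history
    } catch {
      throw new Error("Context limit reached. Run /new to start a fresh conversation.");
    }
  }
}
```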