Problem / Motivation
Clawdbot already shows helpful response prefixes (model, provider, thinking level), but users still have to run a separate /status command to see context usage. That breaks flow and makes it easy to accidentally hit the context limit. Other tools (Codex/Claude Code) show a live context indicator in-line, which helps users decide when to compact or start a fresh session.
Goal: expose context window usage as a template variable so it can appear in every response header with no extra commands.
Current Behavior (Before)
Response prefix config:
{
  "messages": {
    "responsePrefix": "[{model} | think:{think}]"
  }
}
Output example:
[claude-opus-4-5 | think:high] Here’s your response...
To see context usage, users must run /status:
📚 Context: 45000/200000 (23%) · 🧹 Compactions: 1
Desired Behavior (After)
Response prefix config:
{
  "messages": {
    "responsePrefix": "[{model} | think:{think} | {context}]"
  }
}
Output example:
[claude-opus-4-5 | think:high | 23%] Here’s your response...
Optional richer template:
{
  "messages": {
    "responsePrefix": "[{model} | {contextUsed}/{contextMax} | {compactions}x]"
  }
}
Proposed Solution
Add new template variables to the response prefix interpolation system:
{context} / {contextPercent} → percentage of context used (e.g., 23%)
{contextUsed} → tokens used (e.g., 45000)
{contextMax} → max tokens (e.g., 200000)
{compactions} → number of compactions (e.g., 1)
Technical Implementation Notes
Files of interest:
- Template resolver: dist/auto-reply/reply/response-prefix-template.js
- Dispatcher(s): dist/telegram/bot-message-dispatch.js (and similar in discord/slack)
1) Extend prefix context type
Add optional fields:

interface ResponsePrefixContext {
  model?: string;
  modelFull?: string;
  provider?: string;
  thinkingLevel?: string;
  identityName?: string;
  // new
  contextPercent?: number;
  contextUsed?: number;
  contextMax?: number;
  compactions?: number;
}
2) Add template variable cases
case "context":
case "contextpercent":
  return context.contextPercent != null ? `${context.contextPercent}%` : match;
case "contextused":
  return context.contextUsed != null ? String(context.contextUsed) : match;
case "contextmax":
  return context.contextMax != null ? String(context.contextMax) : match;
case "compactions":
  return context.compactions != null ? String(context.compactions) : match;
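To show how the new cases fit together, here is a minimal, self-contained sketch of the resolver. It assumes a `{var}`-style template with case-insensitive keys (consistent with the lowercase case labels above) and that unknown placeholders pass through unchanged; the actual resolver in response-prefix-template.js may be structured differently, and `resolveResponsePrefix` is a hypothetical name.

```typescript
// Hypothetical standalone sketch; the real implementation lives in
// dist/auto-reply/reply/response-prefix-template.js.
interface ResponsePrefixContext {
  model?: string;
  thinkingLevel?: string;
  contextPercent?: number;
  contextUsed?: number;
  contextMax?: number;
  compactions?: number;
}

function resolveResponsePrefix(template: string, ctx: ResponsePrefixContext): string {
  return template.replace(/\{(\w+)\}/g, (match: string, name: string) => {
    switch (name.toLowerCase()) {
      case "model":
        return ctx.model ?? match;
      case "think":
        return ctx.thinkingLevel ?? match;
      case "context":
      case "contextpercent":
        return ctx.contextPercent != null ? `${ctx.contextPercent}%` : match;
      case "contextused":
        return ctx.contextUsed != null ? String(ctx.contextUsed) : match;
      case "contextmax":
        return ctx.contextMax != null ? String(ctx.contextMax) : match;
      case "compactions":
        return ctx.compactions != null ? String(ctx.compactions) : match;
      default:
        return match; // unknown placeholders pass through unchanged
    }
  });
}

console.log(resolveResponsePrefix("[{model} | think:{think} | {context}]", {
  model: "claude-opus-4-5",
  thinkingLevel: "high",
  contextPercent: 23,
}));
// [claude-opus-4-5 | think:high | 23%]
```

Returning `match` on missing data is what keeps the edge-case behavior below ("leave placeholders untouched") for free.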
3) Populate context stats
In each dispatcher, extend prefixContext when session stats are available:
prefixContext.contextPercent = Math.round((session.totalTokens / session.maxTokens) * 100);
prefixContext.contextUsed = session.totalTokens;
prefixContext.contextMax = session.maxTokens;
prefixContext.compactions = session.compactionCount;
If context stats are computed later than model selection, consider adding a small hook (e.g., onContextUpdate) or deferring template resolution until stats are known.
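One way to sketch the deferred-resolution option: build the prefix context eagerly, but resolve the template only when the message is serialized, so late-arriving stats are still picked up. `makePrefixResolver`, `onContextUpdate`, and the `session` field names are illustrative assumptions, not existing APIs.

```typescript
// Hypothetical sketch of deferred template resolution. All names here
// are assumptions about the real dispatcher shape.
interface SessionStats {
  totalTokens: number;
  maxTokens: number;
  compactionCount: number;
}

interface PrefixContext {
  model?: string;
  contextPercent?: number;
  contextUsed?: number;
  contextMax?: number;
  compactions?: number;
}

// Returns a closure; call it as late as possible in the dispatch path.
function makePrefixResolver(template: string, ctx: PrefixContext): () => string {
  return () =>
    template.replace(/\{(\w+)\}/g, (match: string, name: string) => {
      const values: Record<string, string | undefined> = {
        model: ctx.model,
        context: ctx.contextPercent != null ? `${ctx.contextPercent}%` : undefined,
        contextused: ctx.contextUsed?.toString(),
        contextmax: ctx.contextMax?.toString(),
        compactions: ctx.compactions?.toString(),
      };
      return values[name.toLowerCase()] ?? match;
    });
}

// Mutates the shared context; the resolver closure sees the update.
function onContextUpdate(ctx: PrefixContext, stats: SessionStats): void {
  ctx.contextPercent = Math.round((stats.totalTokens / stats.maxTokens) * 100);
  ctx.contextUsed = stats.totalTokens;
  ctx.contextMax = stats.maxTokens;
  ctx.compactions = stats.compactionCount;
}

const ctx: PrefixContext = { model: "claude-opus-4-5" };
const resolve = makePrefixResolver("[{model} | {context}]", ctx);
onContextUpdate(ctx, { totalTokens: 45000, maxTokens: 200000, compactionCount: 1 });
console.log(resolve()); // [claude-opus-4-5 | 23%]
```

A shared mutable context plus a closure avoids threading a callback through every dispatcher; the trade-off is that the prefix reflects whatever stats exist at send time.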
4) Docs / schema
Document new variables in config schema and help text for messages.responsePrefix.
Edge Cases / Considerations
- If stats are unavailable, leave the {context*} placeholders untouched (current behavior for unresolved vars).
- First message or no session data: choose whether to show 0% or omit.
- Performance: compute once per response; no heavy recalculation required.
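The edge cases above can be sketched as a defensive stats builder: returning an empty object on missing data leaves the placeholders untouched instead of rendering a misleading 0%. The `Session` field names mirror the step-3 snippet and are assumptions about the real session shape.

```typescript
// Hypothetical sketch: populate context stats defensively, once per response.
interface Session {
  totalTokens?: number;
  maxTokens?: number;
  compactionCount?: number;
}

interface PrefixStats {
  contextPercent?: number;
  contextUsed?: number;
  contextMax?: number;
  compactions?: number;
}

function buildPrefixStats(session: Session | undefined): PrefixStats {
  // First message / no session data: return nothing, so the {context*}
  // placeholders stay untouched (and division by zero is avoided).
  if (!session || session.maxTokens == null || session.maxTokens <= 0) return {};
  const used = session.totalTokens ?? 0;
  return {
    contextPercent: Math.round((used / session.maxTokens) * 100),
    contextUsed: used,
    contextMax: session.maxTokens,
    compactions: session.compactionCount ?? 0,
  };
}
```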
Why This Is Valuable
- Proactive context awareness without extra commands
- Aligns Clawdbot UX with Codex/Claude Code
- Minimal config change for users, backwards compatible by default
Acceptance Criteria
- {context}, {contextPercent}, {contextUsed}, {contextMax}, and {compactions} are supported.