Preflight Checklist
Problem Statement
Currently, users have no visibility into their token consumption during a conversation unless they explicitly ask Claude. This makes it difficult to:
- Monitor budget/usage in real-time
- Know when approaching the context limit
- Make informed decisions about continuing complex tasks
Proposed Solution
Add a persistent, non-intrusive token usage indicator in the Claude Code CLI interface.
Suggested implementation options:
- Status bar indicator: Display in a corner (e.g., bottom-right)
  - Format: `[Tokens: 37K/200K | 18.5%]`
  - Updates automatically after each interaction
- Progress bar: Minimal visual bar showing percentage used
  - Could use color coding (green → yellow → red as limit approaches)
- Optional flag: Make it toggleable via config
  - `tokenUsageDisplay: "always" | "percentage" | "never"`
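The options above could be sketched as follows. This is a minimal illustration, not an existing Claude Code API; the function names and the 50%/80% color thresholds are assumptions chosen for the example:

```python
def format_token_status(used: int, limit: int) -> str:
    """Render a status-bar string like '[Tokens: 37K/200K | 18.5%]'."""
    def short(n: int) -> str:
        # Abbreviate thousands, e.g. 37064 -> "37K"
        return f"{n // 1000}K" if n >= 1000 else str(n)
    pct = used / limit * 100
    return f"[Tokens: {short(used)}/{short(limit)} | {pct:.1f}%]"

def status_color(used: int, limit: int) -> str:
    """Map usage fraction to a traffic-light color (thresholds are arbitrary)."""
    frac = used / limit
    if frac < 0.5:
        return "green"
    if frac < 0.8:
        return "yellow"
    return "red"
```

With the example values from this request, `format_token_status(37064, 200000)` yields `[Tokens: 37K/200K | 18.5%]`.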
Benefits
- Better user awareness of resource consumption
- No token cost - purely UI-based feature
- Helps with planning - users can gauge if they have enough tokens left for complex tasks
- Transparent billing - users understand what they're consuming
Similar Features
Many AI tools show token/credit usage (ChatGPT shows message limits, Cursor shows usage, etc.)
Technical Considerations
- Should not consume any tokens itself
- Update only after user messages and Claude responses (not during streaming)
- Could use the same data already available to Claude via `<budget:token_budget>` tags
Environment:
- Claude Code CLI
- Model: Claude Sonnet 4.5
- Platform: Linux (Raspberry Pi)
Alternative Solutions
Current Workaround
Users must ask Claude explicitly: "How many tokens have I used?" - which ironically consumes more tokens.
Priority
High - Significant impact on productivity
Feature Category
API and model interactions
Use Case Example
Use Cases
1. Budget-conscious developers
Developer working on a tight budget needs to implement a feature.
They can monitor token usage and decide:
- "I'm at 45%, I can continue with the refactoring"
- "I'm at 85%, let me save the code review for a new session"
2. Complex multi-step tasks
User asks Claude to:
- Explore a large codebase
- Refactor multiple files
- Write tests
- Create documentation
With visible token usage, they can see if they have enough budget
to complete all steps or need to prioritize.
3. Learning and experimentation
New users learning Claude Code can observe:
- Which operations consume more tokens (file reads, web searches, etc.)
- How different task types impact their budget
- When to break large tasks into multiple sessions
4. Team environments
Teams with shared credits can monitor individual session usage:
- "This debugging session used 120K tokens - we should optimize our approach"
- Helps establish best practices for efficient Claude usage
5. Avoiding mid-task surprises
User is generating a complex PR with multiple file changes.
Without indicator: Suddenly hits limit mid-task, loses context
With indicator: Sees 90% usage, decides to finish current file before limit
Additional Context
Current behavior:
- Token budget information is available internally (visible in system messages to Claude)
- Users can ask "how many tokens have I used?" but this:
  - Requires manual intervention
  - Consumes additional tokens for the question + answer
  - Breaks workflow focus
Implementation note:
The data is already available via internal tags like:

```
<budget:token_budget>200000</budget:token_budget>
<system-warning>Token usage: 37064/200000; 162936 remaining</system-warning>
```
The feature would simply expose this existing data to the user interface.
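As a sketch of how that exposure might work, the usage numbers can be parsed out of the warning line shown above. The regex and function name here are hypothetical, not part of any existing Claude Code interface:

```python
import re

# Matches the internal warning format, e.g.:
# <system-warning>Token usage: 37064/200000; 162936 remaining</system-warning>
WARNING_RE = re.compile(
    r"<system-warning>Token usage: (\d+)/(\d+); (\d+) remaining</system-warning>"
)

def parse_token_warning(line: str):
    """Return (used, limit, remaining) from a system-warning tag, or None."""
    m = WARNING_RE.search(line)
    if m is None:
        return None
    used, limit, remaining = map(int, m.groups())
    return used, limit, remaining
```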
User experience consideration:
- Should be non-intrusive (not distracting during deep work)
- Could follow the design language of existing CLI indicators
- Inspiration: tmux status bar, vim status line, git indicators in shells
Configurability:
Some users may not want this always visible, so a config option would be ideal:
```jsonc
// .claude/config.json
{
  "ui": {
    "showTokenUsage": true,
    "tokenDisplayFormat": "percentage"
  }
}
```
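A minimal sketch of how the CLI might honor such a flag, assuming the config shape proposed above (the `ui.showTokenUsage` key and the function are illustrative, not an existing setting):

```python
import json

def should_show_tokens(config_text: str) -> bool:
    """Read the proposed ui.showTokenUsage flag; default to hidden if absent."""
    cfg = json.loads(config_text)
    return bool(cfg.get("ui", {}).get("showTokenUsage", False))
```

Defaulting to hidden keeps the indicator opt-in, consistent with the non-intrusive goal stated earlier.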
Related workflows:
This would pair well with potential future features like:
- Token usage history/analytics
- Per-session usage reports
- Warnings at configurable thresholds (e.g., alert at 80%)
Community feedback:
This feature request comes from real-world usage on a Raspberry Pi development environment where budget awareness is important for cost management.