Feature Request: Display cost estimates in session_status
Summary
Add cost estimation to /status and session_status output based on local pricing config and LiteLLM database.
Problem
Currently, session_status shows token usage but no cost information:
🧮 Tokens: 84k in / 396 out
Users need to manually calculate costs or use external tools to track spending. This makes it difficult to:
- Monitor session costs in real-time
- Make informed decisions about model selection
- Track spending across conversations
- Set budget limits
Proposed Solution
Enhance session_status to display cost estimates when pricing data is available:
🧮 Tokens: 84k in / 396 out
💰 Cost: $0.252 in / $0.006 out ≈ $0.26 total
Implementation Approaches
Option 1: Provider-reported costs (when available)
- Use cost data from provider responses (OpenRouter, some APIs include this)
- Most accurate, but only works for supported providers
Option 2: Local config-based estimation
- Calculate from models.providers[].models[].cost in openclaw.json
- Works for all models if pricing is configured
- User-controlled, no API dependency
Option 3: LiteLLM database integration
- Use LiteLLM's pricing database (2479+ models, updated daily)
- GitHub: https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json
- Covers most major providers (OpenAI, Anthropic, Google, AWS, Azure, etc.)
- Can be cached locally, updated via cron
Recommended: Option 2 + 3 (local config as fallback, LiteLLM as primary source)
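The recommended combination could be sketched roughly as follows. This is an illustration, not OpenClaw's actual API: the Pricing shape, the map arguments, and the function names are all assumptions (the per-token fields mirror LiteLLM's input_cost_per_token / output_cost_per_token fields).

```typescript
// Hypothetical per-token pricing entry (USD), mirroring LiteLLM's
// input_cost_per_token / output_cost_per_token fields.
interface Pricing {
  inputCostPerToken: number;
  outputCostPerToken: number;
}

// Prefer the cached LiteLLM table; fall back to user-configured pricing
// (models.providers[].models[].cost in openclaw.json).
function resolvePricing(
  model: string,
  litellmPrices: Map<string, Pricing>,
  configPrices: Map<string, Pricing>
): Pricing | undefined {
  return litellmPrices.get(model) ?? configPrices.get(model);
}

function estimateCost(
  model: string,
  tokensIn: number,
  tokensOut: number,
  litellmPrices: Map<string, Pricing>,
  configPrices: Map<string, Pricing>
): { costIn: number; costOut: number; total: number } | undefined {
  const p = resolvePricing(model, litellmPrices, configPrices);
  if (p === undefined) return undefined; // render as "Cost: unavailable"
  const costIn = tokensIn * p.inputCostPerToken;
  const costOut = tokensOut * p.outputCostPerToken;
  return { costIn, costOut, total: costIn + costOut };
}
```

With $3 / $15 per million tokens (3e-6 and 1.5e-5 per token), 84,000 input tokens and 396 output tokens come to $0.252 in and roughly $0.006 out, matching the example output above.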
Use Cases
1. Real-time cost monitoring
User runs /status during a conversation to check current spend:
💰 Cost: $0.26 (84k tokens)
2. Model comparison
User compares costs before switching models:
Current: claude-sonnet-4-5 → $0.26
Alternative: gpt-4o → $0.19 (-27%)
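The percentage in the comparison above is just a relative cost delta; a helper like this (name and rounding choice are illustrative) would produce it:

```typescript
// Percent difference of an alternative model's cost vs. the current one,
// rounded to the nearest whole percent. Negative means cheaper.
function costDeltaPercent(currentUsd: number, alternativeUsd: number): number {
  return Math.round(((alternativeUsd - currentUsd) / currentUsd) * 100);
}
```

For the example above, $0.19 against $0.26 gives -27.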
3. Budget awareness
User sees cumulative costs approaching a limit:
Session cost: $2.45 / $5.00 daily budget
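Rendering the budget line could be as simple as the following sketch; OpenClaw does not currently track budgets, so the helper and its inputs are hypothetical:

```typescript
// Render "Session cost: $2.45 / $5.00 daily budget" from running totals.
// Hypothetical helper: budget tracking itself is future work.
function budgetLine(sessionCostUsd: number, dailyBudgetUsd: number): string {
  return `Session cost: $${sessionCostUsd.toFixed(2)} / $${dailyBudgetUsd.toFixed(2)} daily budget`;
}
```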
Configuration
Optional config to control display:
{
  "agents": {
    "defaults": {
      "showCostInStatus": true,    // default: true when pricing available
      "costPricingSource": "auto"  // auto|litellm|config|provider
    }
  }
}
Benefits
- Transparency: Users see costs immediately
- Cost awareness: Helps prevent surprise bills
- Model selection: Informed decisions about model tradeoffs
- Budget control: Foundation for future budget limits/warnings
- No API calls: Uses local data (config or cached LiteLLM pricing)
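The costPricingSource setting could expand into a lookup order like this; the "auto" precedence shown (provider-reported, then LiteLLM, then local config) is an assumption, chosen because provider-reported costs are the most accurate when available:

```typescript
type PricingSource = "litellm" | "config" | "provider";

// Expand the costPricingSource setting into a lookup order. The "auto"
// precedence (provider -> litellm -> config) is an assumed order, not
// settled behavior.
function resolveSources(setting: "auto" | PricingSource): PricingSource[] {
  return setting === "auto" ? ["provider", "litellm", "config"] : [setting];
}
```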
Implementation Notes
- Cost should be marked as "estimate" when calculated locally
- Show "Cost: unavailable" if no pricing data exists
- Include cache read/write costs when supported
- Format with currency symbol and appropriate precision ($0.0234)
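A formatting helper along these lines would cover the last note; the precision threshold is an illustrative choice, not a spec:

```typescript
// Format a USD amount with precision scaled to magnitude: four decimals
// for small amounts (so $0.0234 is not rounded to $0.02), two otherwise.
// The $0.10 cutoff is arbitrary and only for illustration.
function formatCost(usd: number): string {
  const decimals = usd < 0.1 ? 4 : 2;
  return `$${usd.toFixed(decimals)}`;
}
```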
Related Issues
- #9016 - Expose OpenRouter usage cost to agent runtime (provider-specific)
- This feature request is broader: applies to all providers via local estimation
Additional Context
Users have already built workarounds:
- External cost tracking tools (e.g., clawdbot-cost-monitor on GitHub)
- Custom scripts that parse session data and calculate costs
- Manual tracking in spreadsheets
Native support would improve UX and reduce friction for cost-conscious users.