Two Claude Code Max subscriptions on the same machine (WSL2 Linux), same project, same settings, same CC version. The only difference: which claude.ai account is logged in.
What happens
On every prompt (even a simple 1+1), Login #1 (the old account) consumes ~22K more tokens than Login #2 (the new account). The difference is visible in the status bar (real API billing), not just in /context.
The extra tokens do not appear in any /context category — they're invisible: the categories sum correctly on Login #2 but leave a ~22K gap on Login #1.
Concretely, the /context categories sum to ~56.2K on both logins, while Login #1's /context header reports ~78.9K. The ~22K difference shows up nowhere in the breakdown.
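A quick back-of-the-envelope check of those numbers (this sketch only restates the approximate figures reported above; nothing else is assumed):

```python
# Back-of-the-envelope check of the reported /context numbers.
# Both token figures are approximate values taken from the report above.

header_total_login1 = 78_900     # ~78.9K shown in Login #1's /context header
category_sum        = 56_200     # ~56.2K sum of /context categories (both logins)
context_window      = 1_000_000  # 1M-token context window

phantom = header_total_login1 - category_sum
print(f"Phantom tokens per prompt: ~{phantom:,}")                     # ~22,700
print(f"Share of the 1M window:    ~{phantom / context_window:.1%}")  # ~2.3%
```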
Screenshots
/context side-by-side + simple prompt test: [screenshot]
Settings comparison (identical on both) + math test: [screenshot]
What we ruled out
12 project MCP servers (identical config on both logins)
Login #1 additionally has 4 claude.ai connectors (but disabling them didn't close the gap)
Both accounts: Max subscription, claude.ai Memory OFF
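One way to sanity-check the "identical config" point is to capture the MCP setup under each login and diff the captures. The sketch below is not from the original report: it assumes you have saved the output of `claude mcp list` (or the project's `.mcp.json`) to two files, `login1.txt` and `login2.txt`; both filenames are placeholders.

```python
# Hypothetical helper: diff two saved captures of the MCP setup,
# e.g. the output of `claude mcp list` run under each login.
# The file names are placeholders; point them at wherever you saved the captures.
import difflib
from pathlib import Path

def diff_captures(path_a: str, path_b: str) -> str:
    """Return a unified diff of two text captures (empty string if identical)."""
    left = Path(path_a).read_text().splitlines(keepends=True)
    right = Path(path_b).read_text().splitlines(keepends=True)
    return "".join(difflib.unified_diff(left, right, fromfile=path_a, tofile=path_b))

if __name__ == "__main__":
    delta = diff_captures("login1.txt", "login2.txt")
    print(delta if delta else "No differences found; the MCP setup looks identical.")
```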
Environment
Impact
~22K phantom tokens per session (roughly 2% of the 1M context window) are consumed by something invisible and uncontrollable. On long sessions with many turns, the cumulative overhead grows. Users have no way to see what these tokens are or how to reduce them.
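To make the cumulative cost concrete, here is a small illustration of how a fixed ~22K per-prompt overhead adds up over a session. Only the ~22K figure comes from the report; the turn counts are made-up examples, and the calculation assumes the overhead is re-sent (and billed) on every turn, as the per-prompt observation suggests.

```python
# Illustration: a fixed ~22K phantom overhead, re-billed on every turn of a session.
# 22_000 tokens/prompt is the reported figure; the turn counts below are arbitrary examples.

PHANTOM_PER_PROMPT = 22_000

for turns in (10, 20, 40):
    extra_billed = PHANTOM_PER_PROMPT * turns  # phantom input tokens billed across the session
    print(f"{turns:>3} turns -> ~{extra_billed:,} extra tokens billed")
```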