
Conversation

@hannesrudolph hannesrudolph commented Nov 13, 2025

Summary

  • Normalize usage in Roo provider so Anthropic-protocol models report non-cached input tokens while non-Anthropic models keep total prompt_tokens.

Changes

  • Roo provider: determine protocol and normalize input tokens:
    • Anthropic models: inputTokens = prompt_tokens − cache_write − cache_read
    • Non-Anthropic models: inputTokens = prompt_tokens
    • Implementation: roo.ts, protocol via getApiProtocol()
  • Core (Task): revert the prior heuristic so Task stays provider-agnostic.
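The normalization above can be sketched as follows. This is an illustrative reconstruction, not the actual roo.ts code: the usage field names follow common OpenAI/Anthropic conventions, and the prefix-based protocol check inside `getApiProtocol` is a simplification.

```typescript
// Hypothetical sketch of the provider-level normalization described above.
// Field names and the prefix check are assumptions, not the exact implementation.
type RooUsage = {
  prompt_tokens: number
  completion_tokens: number
  cache_creation_input_tokens?: number // cache writes
  cache_read_input_tokens?: number // cache reads
}

function getApiProtocol(modelId: string): "anthropic" | "openai" {
  // Simplified: the real helper may inspect provider metadata, not just a prefix.
  return modelId.startsWith("anthropic/") ? "anthropic" : "openai"
}

function normalizeInputTokens(modelId: string, usage: RooUsage): number {
  const cacheWrites = usage.cache_creation_input_tokens ?? 0
  const cacheReads = usage.cache_read_input_tokens ?? 0
  if (getApiProtocol(modelId) === "anthropic") {
    // Anthropic protocol: report only the non-cached portion of the prompt.
    return Math.max(0, usage.prompt_tokens - cacheWrites - cacheReads)
  }
  // Non-Anthropic models keep the total prompt_tokens unchanged.
  return usage.prompt_tokens
}
```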

Why

  • Roo backend usage reports prompt_tokens in OpenAI-like shape. For Anthropic-prefixed models, treating prompt_tokens as “non-cached” would double count cached tokens when totals are derived. Normalizing at the provider ensures consistent semantics and prevents double counting.
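A small numeric illustration of the double counting (all numbers are made up for this sketch):

```typescript
// Suppose the backend reports prompt_tokens = 1000 total, of which
// 700 were cache reads and 200 were cache writes.
const promptTokens = 1000
const cacheWrites = 200
const cacheReads = 700

// If prompt_tokens were treated as already non-cached, deriving a total
// would add the cached tokens back on top and overcount:
const wrongTotal = promptTokens + cacheWrites + cacheReads // overcounts by 900

// Normalizing first yields the true non-cached input, so the derived
// total matches the actual prompt size:
const inputTokens = promptTokens - cacheWrites - cacheReads
const rightTotal = inputTokens + cacheWrites + cacheReads
```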

Verification

  • Unit tests pass (vitest).
  • Manual: run a Roo Anthropic model; confirm that api_req_started reports tokensIn as the non-cached input (excluding cacheWrites/cacheReads), and that contextTokens = tokensIn + tokensOut without double counting.

Important

Normalizes input token usage in RooHandler for Anthropic models to report non-cached tokens, maintaining provider-agnostic core logic.

  • Behavior:
    • Normalizes input token usage in RooHandler for Anthropic models to report non-cached tokens (prompt_tokens - cache_write - cache_read).
    • Non-Anthropic models continue to report total prompt_tokens.
    • Reverts Task-level heuristic to keep core provider-agnostic.
  • Functions:
    • Uses getApiProtocol() in RooHandler to determine protocol and adjust token reporting.
  • Misc:
    • All tests pass using vitest.

This description was created by Ellipsis for 0c4ca55.

@hannesrudolph hannesrudolph marked this pull request as ready for review November 13, 2025 06:26
@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. bug Something isn't working labels Nov 13, 2025
roomote bot commented Nov 13, 2025


Review complete. No issues found.

The implementation correctly normalizes input token reporting based on API protocol:

  • Anthropic models report non-cached tokens (prompt_tokens - cache_write - cache_read)
  • OpenAI models report total prompt_tokens
  • Protocol detection and token calculation are implemented correctly
  • Changes maintain backward compatibility and consistency with core logic


@hannesrudolph hannesrudolph added the Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. label Nov 13, 2025
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Nov 13, 2025
@mrubens mrubens merged commit 4e6cdad into main Nov 13, 2025
31 checks passed
@mrubens mrubens deleted the fix/roo-anthropic-input-tokens-normalization branch November 13, 2025 06:46
@github-project-automation github-project-automation bot moved this from Triage to Done in Roo Code Roadmap Nov 13, 2025
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Nov 13, 2025