feat: add cost calculation metric #65
Summary of Changes

Hello @KaustubhPatange, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a new feature to display real-time cost calculations for Claude API usage within the application. It provides users with transparent tracking of their API spending by integrating official Claude pricing, including support for tiered rates and various token types. The pricing data is automatically updated and bundled, ensuring accurate and up-to-date cost metrics across sessions.
Most of the LOC is from …
Code Review
The pull request introduces a new feature for real-time cost calculation of Claude API usage. It includes a prebuild script to fetch pricing data from LiteLLM, updates package.json to include this script and bundle the pricing data, modifies src/main/types/chunks.ts to add costUsd to SessionMetrics, and integrates cost calculation into src/main/utils/jsonl.ts. Additionally, it updates UI components (AIChatGroup.tsx, ChatHistory.tsx, SessionContextHeader.tsx, TokenUsageDisplay.tsx) to display the calculated costs and adds a new utility file src/shared/utils/costFormatting.ts for formatting USD costs. Comprehensive unit tests for cost calculation have also been added in test/main/utils/costCalculation.test.ts.
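The thread doesn't show `src/shared/utils/costFormatting.ts` itself; as a rough sketch of what a USD cost formatter might look like (the function name and precision thresholds below are assumptions, not the PR's actual code):

```typescript
// Hypothetical sketch of a USD cost formatter. In the real PR this would be
// exported from src/shared/utils/costFormatting.ts; names and thresholds here
// are illustrative only.
function formatUsdCost(costUsd: number): string {
  if (costUsd === 0) {
    return '$0.00';
  }
  // Show extra precision for sub-cent amounts so tiny per-message costs
  // aren't all rounded down to "$0.00".
  if (costUsd < 0.01) {
    return `$${costUsd.toFixed(4)}`;
  }
  return `$${costUsd.toFixed(2)}`;
}
```

A formatter like this lets every component (chat history, session header, token usage display) render costs consistently from a single shared utility.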
Overall, the changes are well-structured and address the feature request effectively. The implementation of tiered pricing per message is a good design choice for accuracy. The new test file provides good coverage for the cost calculation logic, including edge cases and model lookup. The UI updates are consistent with the existing design.
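As a hedged illustration of the per-message tiered pricing the review praises, a minimal sketch could look like this (the pricing field names follow LiteLLM's `model_prices_and_context_window.json`; the 200k threshold handling and type shapes are assumptions based on the PR description, not its exact code):

```typescript
// Hypothetical per-message tiered cost calculation. Field names mirror
// LiteLLM's pricing JSON; the aggregation shape is illustrative.
interface ModelPricing {
  input_cost_per_token: number;
  output_cost_per_token: number;
  cache_read_input_token_cost?: number;
  cache_creation_input_token_cost?: number;
  input_cost_per_token_above_200k_tokens?: number;
  output_cost_per_token_above_200k_tokens?: number;
}

interface MessageUsage {
  inputTokens: number;
  outputTokens: number;
  cacheReadTokens: number;
  cacheCreationTokens: number;
}

const TIER_THRESHOLD = 200_000;

function calculateMessageCost(usage: MessageUsage, pricing: ModelPricing): number {
  // Total input context (including cache tokens) decides which tier applies;
  // when above the threshold, the "above_200k" rates replace the base rates.
  const totalInput = usage.inputTokens + usage.cacheReadTokens + usage.cacheCreationTokens;
  const aboveTier = totalInput > TIER_THRESHOLD;

  const inputRate =
    aboveTier && pricing.input_cost_per_token_above_200k_tokens !== undefined
      ? pricing.input_cost_per_token_above_200k_tokens
      : pricing.input_cost_per_token;
  const outputRate =
    aboveTier && pricing.output_cost_per_token_above_200k_tokens !== undefined
      ? pricing.output_cost_per_token_above_200k_tokens
      : pricing.output_cost_per_token;

  return (
    usage.inputTokens * inputRate +
    usage.outputTokens * outputRate +
    usage.cacheReadTokens * (pricing.cache_read_input_token_cost ?? 0) +
    usage.cacheCreationTokens * (pricing.cache_creation_input_token_cost ?? 0)
  );
}
```

Summing `calculateMessageCost` over every message in a session would then yield the `costUsd` aggregated into `SessionMetrics`.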
📝 Walkthrough

Adds build-time fetching and packaging of model pricing data, computes per-message tiered USD costs and aggregates them into session metrics, exposes cost formatting utilities and a CostBreakdown type, updates renderer components to display cost, and adds unit tests for pricing and formatting.
🚥 Pre-merge checks: ✅ 2 passed
There was a problem hiding this comment.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/renderer/components/chat/SessionContextPanel/types.ts (1)
8-28: ⚠️ Potential issue | 🟠 Major — Renderer should not depend on `@main/types` for `SessionMetrics`.

This crosses the renderer/main boundary and currently fails the boundaries rule. Move `SessionMetrics` into `@shared/types` (or re-export from there) and import it from the shared location.

🛠️ Suggested direction

```diff
-import type { SessionMetrics } from '@main/types';
+import type { SessionMetrics } from '@shared/types/metrics';
```

As per coding guidelines "src/shared/types/**/*.ts: Store shared types between main and renderer processes in `@shared/types/` directory".
@shared/types/directory".🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/chat/SessionContextPanel/types.ts` around lines 8 - 28, The renderer currently imports SessionMetrics from `@main/types` which violates process boundaries; move the SessionMetrics type into the shared types package (e.g., src/shared/types) or add a re-export from `@shared/types`, then update the import in SessionContextPanelProps to import SessionMetrics from '@shared/types' instead of '@main/types' (look for the SessionContextPanelProps interface and the existing import line for SessionMetrics to adjust).
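One possible shape for the suggested move, sketched under the assumption of a new `src/shared/types/metrics.ts` (every field name other than `costUsd` is illustrative, not the project's actual type):

```typescript
// Hypothetical shape of SessionMetrics once moved to src/shared/types/metrics.ts.
// In the real codebase this would be exported so both processes can import it:
//   import type { SessionMetrics } from '@shared/types/metrics';
interface SessionMetrics {
  inputTokens: number;
  outputTokens: number;
  cacheReadTokens: number;
  cacheCreationTokens: number;
  costUsd: number; // aggregated USD cost added by this PR
}

// Example construction, as renderer code might receive it over IPC.
const example: SessionMetrics = {
  inputTokens: 1200,
  outputTokens: 300,
  cacheReadTokens: 0,
  cacheCreationTokens: 0,
  costUsd: 0.0081,
};
```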
🧹 Nitpick comments (3)
scripts/fetch-pricing-data.ts (1)
1-18: Consider camelCase filename for this utility script.

Rename to `fetchPricingData.ts` and update the prebuild script to align with file-naming conventions. As per coding guidelines "Use camelCase for utility file names".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/fetch-pricing-data.ts` around lines 1 - 18, The script file is using snake-case filename "fetch-pricing-data.ts"; rename it to camelCase "fetchPricingData.ts" and update any references (e.g., the executable shebang import entry, build scripts and the prebuild npm script that invokes this file) so the module path matches the new name; ensure imports/paths that compute __filename, __dirname, OUTPUT_PATH and LITELLM_PRICING_URL remain unchanged and the prebuild script in package.json points to "./scripts/fetchPricingData.ts" (or equivalent) after renaming.

src/main/utils/jsonl.ts (1)
219-220: Consider moving imports to the top of the file.

The `fs` and `path` imports are placed mid-file (after line 214). While this works in Node.js due to import hoisting, placing imports at the top of the file is the standard convention and improves readability.

♻️ Suggested change

Move these imports to the top of the file with other imports:

```diff
 import { isCommandOutputContent, sanitizeDisplayContent } from '@shared/utils/contentSanitizer';
 import { createLogger } from '@shared/utils/logger';
+import * as fs from 'fs';
+import * as path from 'path';
 import * as readline from 'readline';
```

Then remove lines 219-220.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/utils/jsonl.ts` around lines 219 - 220, The fs and path imports are located mid-file instead of with the other imports; move the two import statements for fs and path to the top of src/main/utils/jsonl.ts alongside the other imports and remove the duplicated lines at their current location (references: import identifiers "fs" and "path" in this file). Ensure any code that uses fs or path continues to work after relocation and run lint/type checks to confirm no import-order issues.

test/main/utils/costCalculation.test.ts (1)
409-449: Add a noop comment to satisfy the linter.

The ESLint rule `@typescript-eslint/no-empty-function` flags the empty arrow function. While this is intentional to suppress console output, adding a comment makes the intent explicit.

🧹 Proposed fix

```diff
-const consoleErrorSpy = vi.spyOn(console, 'error').mockImplementation(() => {});
+const consoleErrorSpy = vi.spyOn(console, 'error').mockImplementation(() => {
+  // Intentionally empty - suppress expected error output during test
+});
```
Verify each finding against the current code and only fix it if needed. In `@test/main/utils/costCalculation.test.ts` around lines 409 - 449, The empty arrow function used in the console.error spy (vi.spyOn(console, 'error').mockImplementation(() => {})) triggers `@typescript-eslint/no-empty-function`; replace the empty implementation with a no-op that contains a comment (e.g., mockImplementation(() => { /* noop: suppress expected error */ })) so the linter is satisfied while preserving behavior in the test that calls freshCalculateMetrics and asserts metrics.costUsd and console.error usage.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/fetch-pricing-data.ts`:
- Around line 105-110: The catch block in scripts/fetch-pricing-data.ts
currently logs the error and exits without ensuring resources/pricing.json
exists; update that catch (error) handler to check for the presence of the
target file (resources/pricing.json) and, if absent, create a minimal fallback
JSON file (e.g., an empty or default pricing object) before calling
process.exit(0). Reference the catch block around process.exit(0) and use
filesystem utilities used elsewhere in the file (fs.existsSync/fs.writeFileSync
or their async equivalents) to atomically create the fallback file and log that
a fallback was written.
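A minimal sketch of that suggested fallback, assuming the script's existing `OUTPUT_PATH` points at `resources/pricing.json` (the helper name `ensureFallbackPricing` is hypothetical, not the PR's actual code):

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Hypothetical fallback writer: ensure the pricing file exists even when the
// LiteLLM fetch fails, so the runtime loader always has a file to read.
function ensureFallbackPricing(outputPath: string): void {
  if (!fs.existsSync(outputPath)) {
    fs.mkdirSync(path.dirname(outputPath), { recursive: true });
    // Write an empty pricing object; runtime lookups for unknown models
    // would then resolve to a cost of 0 rather than crashing the build.
    fs.writeFileSync(outputPath, JSON.stringify({}), 'utf8');
    console.warn(`Wrote fallback pricing file to ${outputPath}`);
  }
}
```

In the script's catch block this would run just before `process.exit(0)`.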
In
`@src/renderer/components/chat/SessionContextPanel/components/SessionContextHeader.tsx`:
- Line 23: Replace the renderer-side import of SessionMetrics from the backend
module with the shared types module: update the import of SessionMetrics in the
SessionContextHeader component and the SessionContextPanel types file to import
from `@shared/types` instead of `@main/types` so the renderer only depends on shared
types (symbol to change: SessionMetrics).
Actionable comments posted: 1
🧹 Nitpick comments (2)
src/main/utils/jsonl.ts (2)
268-270: Use the existing logger for consistency.

The file already imports and uses `logger` from `@shared/utils/logger` (line 35). Using `console.error` here is inconsistent with the rest of the file.

♻️ Proposed fix

```diff
 } catch (error) {
-  console.error('Failed to load pricing data:', error);
+  logger.error('Failed to load pricing data:', error);
   // Return empty object if pricing data can't be loaded
   pricingCache = {};
```
Verify each finding against the current code and only fix it if needed. In `@src/main/utils/jsonl.ts` around lines 268 - 270, Replace the console.error call in the pricing-data load catch block with the shared logger: locate the try/catch that loads pricing data (the catch currently logs "Failed to load pricing data:" using console.error) and change it to use logger.error, passing the same message and the error object (e.g., logger.error('Failed to load pricing data:', error)); keep the existing behavior of returning an empty object on failure. Ensure the import-sourced logger (from `@shared/utils/logger`) is used consistently in that catch block.
219-220: Move imports to the top of the file.

The `fs` and `path` imports are placed mid-file, which violates the import organization convention. As per coding guidelines, imports should be organized at the top of the file in order: external packages, path aliases, then relative imports.

♻️ Proposed fix

Add these imports at the top of the file with other external packages (after line 12):

```ts
import * as fs from 'fs';
import * as path from 'path';
```

Then remove lines 219-220.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/utils/jsonl.ts` around lines 219 - 220, The mid-file imports "fs" and "path" should be moved into the top import block with the other external packages: remove the duplicate imports currently declared as import * as fs from 'fs'; import * as path from 'path'; from their current location and add them to the top of the file alongside the other external imports so that all external modules (including fs and path) are grouped before path-alias and relative imports.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/utils/jsonl.ts`:
- Line 338: Remove the dead local variable modelName and any assignments to it:
delete the declaration "let modelName: string | undefined;" and remove the later
assignment block that sets modelName (the unused assignment around where the
JSONL parsing assigns model-related values). Ensure no other code depends on
modelName; if any logic was intended to use it, instead use the existing
variables already present in the function (e.g., the parsed model or source
fields) or implement that usage explicitly.
Hey @KaustubhPatange — nice work on this! Just wanted to flag that we've already landed cost calculation in PR #60 (Session Analysis Report), which computes per-session cost using a hardcoded pricing table with fuzzy model matching, and includes richer analytical layers on top (cost per commit, cost per line changed, subagent cost share, severity-graded assessment badges, etc.). Your approach has some nice properties we didn't pursue — the LiteLLM-sourced pricing data and tiered 200k-token thresholds are more precise than our hardcoded table. If you're interested, it might be worth looking at PR #60 to see if there's a way to combine efforts: e.g., swapping our hardcoded pricing for your external data source while keeping the analytical report structure. Happy to coordinate on this!
I think your PR is more diverse and includes a lot of features under reports. The common bit from my PR worth a look after merge is the cost calculation part in jsonl.ts. I use cost calculation to show in the UI rather than in a report, so your feature can stay intact and the two can be worked on in parallel. I think once @matt1398 confirms this we should proceed to do what's best for the project. Edit: This also extends the existing `calculateMetrics` function (see the PR description).
Now that our code has been merged, I'm going to take a first pass at unifying our approaches. My goal is to use your pricing models in the new session analytics screen.
Sounds nice, let me know if I can help in any way.
Overview
Adds real-time cost calculation and display for Claude API usage throughout the application. Users can now track their API spending across sessions with accurate per-token pricing, including tiered rates and cache token pricing. Inspired by https://github.com/ryoppippi/ccusage (#54).
This also extends the existing `calculateMetrics` function, which previously set `costUsd` to `0`, presumably indicating that cost calculation was not yet implemented. This PR implements that missing functionality in `calculateMetrics`.

Pricing Data
The application automatically fetches the latest pricing from LiteLLM's GitHub (https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json) during every build via a prebuild script (scripts/fetch-pricing-data.ts), filters the results to Claude models only (206 models, approximately 146KB), and stores the data in resources/pricing.json using the electron-vite resource pattern.

The bundled pricing data supports offline usage and is loaded at runtime from the resources directory with in-memory caching for efficiency. For now I kept it simple to reduce API calls and avoid extra caching machinery; in the future we could fetch it at application start instead.
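A rough sketch of what such a prebuild script might look like (only the URL and output location come from the description above; the filtering predicate and other internals are assumptions, not the PR's actual script):

```typescript
import * as fs from 'fs';
import * as path from 'path';

const LITELLM_PRICING_URL =
  'https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json';
const OUTPUT_PATH = path.join('resources', 'pricing.json');

// Keep only Claude entries from LiteLLM's full pricing table. The substring
// match is a guess at the filtering rule; the real script may differ.
function filterClaudeModels(all: Record<string, unknown>): Record<string, unknown> {
  const claude: Record<string, unknown> = {};
  for (const [model, pricing] of Object.entries(all)) {
    if (model.toLowerCase().includes('claude')) {
      claude[model] = pricing;
    }
  }
  return claude;
}

async function main(): Promise<void> {
  // Requires Node 18+ for the global fetch API.
  const response = await fetch(LITELLM_PRICING_URL);
  if (!response.ok) throw new Error(`Fetch failed: ${response.status}`);
  const all = (await response.json()) as Record<string, unknown>;
  const claude = filterClaudeModels(all);
  fs.mkdirSync(path.dirname(OUTPUT_PATH), { recursive: true });
  fs.writeFileSync(OUTPUT_PATH, JSON.stringify(claude, null, 2), 'utf8');
  console.log(`Wrote ${Object.keys(claude).length} Claude models to ${OUTPUT_PATH}`);
}
```

Wired into package.json as a `prebuild` script, this would refresh resources/pricing.json on every build.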
Screenshots

TokenUsageDisplay.tsx · SessionContextHeader.tsx (screenshots omitted)

Closes #54