feat: add cost calculation metric #65

Merged
matt1398 merged 4 commits into matt1398:main from KaustubhPatange:main
Feb 23, 2026
Conversation


@KaustubhPatange KaustubhPatange commented Feb 22, 2026

Overview

Adds real-time cost calculation and display for Claude API usage throughout the application. Users can now track their API spending across sessions with accurate per-token pricing, including tiered rates and cache token pricing. Inspired by https://github.com/ryoppippi/ccusage (#54).

  • Automatic cost calculation from token usage using official Claude pricing
  • Tiered pricing support for models with 200k token thresholds (e.g., Claude 3.5 Sonnet)
  • Per-message calculation applies tiered rates correctly to individual messages, not aggregated totals
  • All token types supported: input, output, cache creation, cache read
  • Model-specific pricing with automatic model name lookup (case-insensitive)

This PR also extends the existing calculateMetrics function, which previously hardcoded costUsd to 0 (likely a placeholder for the then-unimplemented cost calculation). That missing functionality is now implemented in calculateMetrics.
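For reviewers skimming the diff, the per-message tiered calculation can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code: the type and function names (ModelPricing, messageCostUsd, etc.) and the sample rates are placeholders; only the 200k threshold and the per-message application come from the description above.

```typescript
// Illustrative sketch of per-message tiered cost calculation.
// All names and rates are placeholders, not the PR's actual code.

interface ModelPricing {
  inputCostPerToken: number;
  outputCostPerToken: number;
  // Rates applied to tokens beyond the 200k threshold, when the model defines them.
  inputCostPerTokenAbove200k?: number;
  outputCostPerTokenAbove200k?: number;
  cacheCreationInputTokenCost?: number;
  cacheReadInputTokenCost?: number;
}

interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
  cacheCreationTokens: number;
  cacheReadTokens: number;
}

const TIER_THRESHOLD = 200_000;

// Split a token count across the base rate and the above-threshold rate.
function tieredCost(tokens: number, baseRate: number, tierRate?: number): number {
  if (tierRate === undefined || tokens <= TIER_THRESHOLD) {
    return tokens * baseRate;
  }
  return TIER_THRESHOLD * baseRate + (tokens - TIER_THRESHOLD) * tierRate;
}

// Tiering is applied per message, matching how the API bills each request,
// rather than to session-level aggregated totals.
function messageCostUsd(usage: TokenUsage, p: ModelPricing): number {
  return (
    tieredCost(usage.inputTokens, p.inputCostPerToken, p.inputCostPerTokenAbove200k) +
    tieredCost(usage.outputTokens, p.outputCostPerToken, p.outputCostPerTokenAbove200k) +
    usage.cacheCreationTokens * (p.cacheCreationInputTokenCost ?? 0) +
    usage.cacheReadTokens * (p.cacheReadInputTokenCost ?? 0)
  );
}
```

Applying tiering per message matters: two 150k-token messages never cross the threshold individually, even though their 300k aggregate would.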


Pricing Data

The application automatically fetches the latest pricing from the LiteLLM GitHub repository (https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json) during every build via a prebuild script (scripts/fetch-pricing-data.ts), filters the results to Claude models only (206 models, approximately 146 KB), and stores the data in resources/pricing.json using the electron-vite resource pattern.
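A prebuild fetch along these lines might look like the following sketch. The URL comes from this PR; the function names, output handling, and export structure are illustrative, not the script's actual code:

```typescript
// Sketch of a build-time pricing fetch (modeled on the prebuild step described
// above; names and structure here are assumptions, not the PR's code).
import * as fs from 'fs';
import * as path from 'path';

const LITELLM_PRICING_URL =
  'https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json';

// Keep only Claude entries so the bundled file stays small.
export function filterClaudeModels(
  all: Record<string, unknown>
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(all).filter(([name]) => name.toLowerCase().includes('claude'))
  );
}

export async function fetchPricingData(
  outDir: string = path.join(process.cwd(), 'resources')
): Promise<void> {
  try {
    const res = await fetch(LITELLM_PRICING_URL); // global fetch, Node 18+
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const claudeOnly = filterClaudeModels((await res.json()) as Record<string, unknown>);
    fs.mkdirSync(outDir, { recursive: true });
    const outPath = path.join(outDir, 'pricing.json');
    fs.writeFileSync(outPath, JSON.stringify(claudeOnly, null, 2));
    console.log(`Wrote ${Object.keys(claudeOnly).length} Claude models to ${outPath}`);
  } catch (error) {
    // Fail safely so an offline build still succeeds with previously bundled data.
    console.error('Pricing fetch failed, keeping existing pricing.json:', error);
    process.exit(0);
  }
}
```

Exiting 0 on failure matches the fail-safe behavior noted in the review walkthrough: a flaky network should not break the build.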

The bundled pricing data supports offline usage and is loaded at runtime from the resources directory, with in-memory caching for efficiency. In the future we could fetch it at application start, but for now I kept it simple to avoid extra API calls and caching machinery.
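A minimal sketch of such a cached loader, assuming the electron-vite resource layout (dev reads from the repo's resources directory, production reads from Electron's process.resourcesPath; names here are illustrative):

```typescript
// Sketch of a runtime pricing loader with in-memory caching.
// Path resolution and names are assumptions, not the PR's actual code.
import * as fs from 'fs';
import * as path from 'path';

type PricingTable = Record<string, Record<string, number>>;

let pricingCache: PricingTable | null = null;

// process.resourcesPath is Electron-specific; it is undefined under plain Node,
// in which case we fall back to the dev-time repo path.
function pricingPath(): string {
  const prodDir = (process as unknown as { resourcesPath?: string }).resourcesPath;
  return prodDir
    ? path.join(prodDir, 'pricing.json')
    : path.join(process.cwd(), 'resources', 'pricing.json');
}

function loadPricingData(): PricingTable {
  if (pricingCache) return pricingCache; // in-memory cache: read the file once
  try {
    pricingCache = JSON.parse(fs.readFileSync(pricingPath(), 'utf8'));
  } catch (error) {
    console.error('Failed to load pricing data:', error);
    pricingCache = {}; // degrade gracefully: costs simply show as 0
  }
  return pricingCache!;
}
```

Caching the parsed table means calculateMetrics can call the loader per message without repeated disk reads.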

Screenshots

  • TokenUsageDisplay.tsx (screenshot taken 2026-02-22, 3:42 PM)
  • SessionContextHeader.tsx (screenshot taken 2026-02-22, 3:43 PM)

Closes #54

Summary by CodeRabbit

  • New Features

    • Real-time USD cost tracking shown with token usage and session cost breakdowns in chat and history.
    • Automatic fetching and packaging of model pricing data during build.
    • Tiered pricing support for high-volume usage and formatted cost display options.
  • Tests

    • Extensive unit tests for cost calculations and cost-formatting utilities.

@gemini-code-assist

Summary of Changes

Hello @KaustubhPatange, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new feature to display real-time cost calculations for Claude API usage within the application. It provides users with transparent tracking of their API spending by integrating official Claude pricing, including support for tiered rates and various token types. The pricing data is automatically updated and bundled, ensuring accurate and up-to-date cost metrics across sessions.

Highlights

  • Automatic Cost Calculation: Implemented real-time cost calculation for Claude API usage, providing accurate per-token pricing across sessions.
  • Tiered Pricing Support: Added support for tiered pricing models, correctly applying different rates based on token thresholds (e.g., 200k tokens).
  • Per-Message Calculation: Ensured that tiered rates are applied correctly to individual messages rather than aggregated totals, reflecting actual API billing.
  • Comprehensive Token Type Support: Cost calculations now account for all token types: input, output, cache creation, and cache read tokens.
  • Model-Specific Pricing: Integrated model-specific pricing with automatic, case-insensitive model name lookup for accurate cost attribution.
  • Automated Pricing Data Management: Introduced a prebuild script to automatically fetch and bundle the latest Claude pricing data from LiteLLM, supporting offline usage and ensuring up-to-date rates.
Changelog
  • package.json
    • Added a 'prebuild' script to fetch pricing data.
    • Configured 'electron-builder' to include 'pricing.json' as an extra resource.
  • scripts/fetch-pricing-data.ts
    • New script created to fetch Claude model pricing data from LiteLLM's GitHub.
    • Filters pricing data to Claude models only and saves it to 'resources/pricing.json'.
  • src/main/types/chunks.ts
    • Extended the 'SessionMetrics' interface to include a 'costUsd' property.
  • src/main/utils/jsonl.ts
    • Implemented functions for loading pricing data from the resources directory.
    • Added logic for calculating tiered costs based on token thresholds.
    • Introduced a function to retrieve model-specific pricing with case-insensitive lookup.
    • Updated the 'calculateMetrics' function to incorporate per-message cost calculations.
  • src/renderer/components/chat/AIChatGroup.tsx
    • Modified to pass the calculated 'costUsd' to the 'TokenUsageDisplay' component.
  • src/renderer/components/chat/ChatHistory.tsx
    • Updated to pass 'sessionMetrics' to the 'SessionContextHeader' component.
  • src/renderer/components/chat/SessionContextPanel/components/SessionContextHeader.tsx
    • Enhanced to display the total session cost using the new 'sessionMetrics' prop.
  • src/renderer/components/chat/SessionContextPanel/index.tsx
    • Updated to pass 'sessionMetrics' to its child components.
  • src/renderer/components/chat/SessionContextPanel/types.ts
    • Added 'sessionMetrics' as an optional property to the 'SessionContextPanelProps' interface.
  • src/renderer/components/common/TokenUsageDisplay.tsx
    • Modified to accept and display an optional 'costUsd' prop within the token usage details.
  • src/shared/types/api.ts
    • Defined a new 'CostBreakdown' interface for detailed cost information.
  • src/shared/utils/costFormatting.ts
    • New utility file created to provide functions for formatting USD costs with appropriate precision.
  • test/main/utils/costCalculation.test.ts
    • Added comprehensive unit tests for the new cost calculation logic.
    • Tests cover basic calculations, tiered pricing, multiple messages, edge cases, and model lookup.


KaustubhPatange commented Feb 22, 2026

Most of the LOC is from resources/pricing.json which is a one-time addition to electron resources. The actual code changes are pretty easy to verify.

@coderabbitai coderabbitai bot added the "feature request" label Feb 22, 2026
@gemini-code-assist gemini-code-assist bot left a comment

Code Review

The pull request introduces a new feature for real-time cost calculation of Claude API usage. It includes a prebuild script to fetch pricing data from LiteLLM, updates package.json to include this script and bundle the pricing data, modifies src/main/types/chunks.ts to add costUsd to SessionMetrics, and integrates cost calculation into src/main/utils/jsonl.ts. Additionally, it updates UI components (AIChatGroup.tsx, ChatHistory.tsx, SessionContextHeader.tsx, TokenUsageDisplay.tsx) to display the calculated costs and adds a new utility file src/shared/utils/costFormatting.ts for formatting USD costs. Comprehensive unit tests for cost calculation have also been added in test/main/utils/costCalculation.test.ts.

Overall, the changes are well-structured and address the feature request effectively. The implementation of tiered pricing per message is a good design choice for accuracy. The new test file provides good coverage for the cost calculation logic, including edge cases and model lookup. The UI updates are consistent with the existing design.
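The case-insensitive model lookup mentioned in the reviews could look roughly like the following sketch. This is illustrative only: the function name, table shape, and substring fallback are assumptions, and the PR's actual heuristics may differ.

```typescript
// Illustrative case-insensitive model lookup against a pricing table.
// Names and fallback heuristics are placeholders, not the PR's code.
type PricingEntry = { input_cost_per_token?: number; output_cost_per_token?: number };

function getModelPricing(
  pricing: Record<string, PricingEntry>,
  modelName: string
): PricingEntry | undefined {
  const target = modelName.toLowerCase();
  // Exact (case-insensitive) match first.
  for (const key of Object.keys(pricing)) {
    if (key.toLowerCase() === target) return pricing[key];
  }
  // Then a substring fallback in either direction, e.g. to tolerate
  // provider prefixes or version suffixes in logged model names.
  for (const key of Object.keys(pricing)) {
    const k = key.toLowerCase();
    if (target.includes(k) || k.includes(target)) return pricing[key];
  }
  return undefined;
}
```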

@KaustubhPatange KaustubhPatange marked this pull request as draft February 22, 2026 10:18
@coderabbitai
Copy link

coderabbitai bot commented Feb 22, 2026

📝 Walkthrough

Walkthrough

Adds build-time fetching and packaging of model pricing data, computes per-message tiered USD costs and aggregates them into session metrics, exposes cost formatting utilities and a CostBreakdown type, updates renderer components to display cost, and adds unit tests for pricing and formatting.

Changes

Cohort / File(s) Summary
Build config & pricing fetch
package.json, scripts/fetch-pricing-data.ts
Added prebuild script to fetch LiteLLM pricing and configured packaging of resources/pricing.json via build.extraResources. The fetch script filters Claude models, writes resources/pricing.json, logs size, and fails safely (exits 0) on errors.
Core metrics & cost calculation
src/main/types/chunks.ts, src/main/utils/jsonl.ts
Added costUsd to EMPTY_METRICS. Implemented pricing loader (dev vs prod paths), tiered pricing (200k token threshold, per-message application), model lookup heuristics, per-message CostBreakdown computation (input/output/cache), and accumulation of costUsd into session metrics.
Shared types & formatting
src/shared/types/api.ts, src/shared/utils/costFormatting.ts
Introduced exported CostBreakdown interface and new formatting utilities formatCostUsd and formatCostCompact with adaptive decimal precision rules.
Renderer: token & session UI
src/renderer/components/common/TokenUsageDisplay.tsx, src/renderer/components/chat/AIChatGroup.tsx, src/renderer/components/chat/ChatHistory.tsx, src/renderer/components/chat/SessionContextPanel/...
Added optional costUsd prop to TokenUsageDisplay and propagated session costUsd into token UI. Extended SessionContextPanel and SessionContextHeader public props to accept sessionMetrics?: SessionMetrics and render a Session Metrics Breakdown showing formatted session cost when present.
Tests
test/main/utils/costCalculation.test.ts, test/shared/utils/costFormatting.test.ts
Added comprehensive tests for cost calculation (tiering, per-message/aggregation, model lookup, error fallback) and for cost formatting (precision and rounding edge cases).
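The "adaptive decimal precision" formatting mentioned above could be implemented along these lines. This is a guess at plausible rules; the PR's actual thresholds in formatCostUsd/formatCostCompact are not shown here and may differ.

```typescript
// Plausible sketch of adaptive-precision cost formatting.
// Threshold choices are assumptions, not the PR's actual rules.
function formatCostUsd(cost: number): string {
  if (cost === 0) return '$0.00';
  if (cost < 0.01) return `$${cost.toFixed(4)}`; // sub-cent: show 4 decimals
  if (cost < 1) return `$${cost.toFixed(3)}`;    // cents range: 3 decimals
  return `$${cost.toFixed(2)}`;                  // dollars: standard 2 decimals
}

// Compact variant for tight UI spots, collapsing tiny values.
function formatCostCompact(cost: number): string {
  if (cost > 0 && cost < 0.01) return '<$0.01';
  return `$${cost.toFixed(2)}`;
}
```

The motivation for adaptive precision is that single-message costs are often fractions of a cent, which `$0.00` would hide.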

Suggested labels

dependencies

🚥 Pre-merge checks: 2 passed
Linked Issues check ✅ Passed The PR successfully implements all primary objectives from issue #54: cost calculation from token usage, session-level cost display in the Visible Context window, and per-conversation cost inspection with phase-level breakdown.
Out of Scope Changes check ✅ Passed All changes are directly aligned with the stated objectives. The PR focuses on cost calculation infrastructure, pricing data integration, and UI displays for cost metrics without introducing unrelated functionality.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/renderer/components/chat/SessionContextPanel/types.ts (1)

8-28: ⚠️ Potential issue | 🟠 Major

Renderer should not depend on @main/types for SessionMetrics.

This crosses the renderer/main boundary and currently fails the boundaries rule. Move SessionMetrics into @shared/types (or re-export from there) and import it from the shared location.

🛠️ Suggested direction
-import type { SessionMetrics } from '@main/types';
+import type { SessionMetrics } from '@shared/types/metrics';

As per coding guidelines "src/shared/types/**/*.ts: Store shared types between main and renderer processes in @shared/types/ directory".

🧹 Nitpick comments (3)
scripts/fetch-pricing-data.ts (1)

1-18: Consider camelCase filename for this utility script.

Rename to fetchPricingData.ts and update the prebuild script to align with file-naming conventions.

As per coding guidelines "Use camelCase for utility file names".

src/main/utils/jsonl.ts (1)

219-220: Consider moving imports to the top of the file.

The fs and path imports are placed mid-file (after line 214). While this works in Node.js due to hoisting, placing imports at the top of the file is the standard convention and improves readability.

♻️ Suggested change

Move these imports to the top of the file with other imports:

 import { isCommandOutputContent, sanitizeDisplayContent } from '@shared/utils/contentSanitizer';
 import { createLogger } from '@shared/utils/logger';
+import * as fs from 'fs';
+import * as path from 'path';
 import * as readline from 'readline';

Then remove lines 219-220.

test/main/utils/costCalculation.test.ts (1)

409-449: Add a noop comment to satisfy the linter.

The ESLint rule @typescript-eslint/no-empty-function flags the empty arrow function. While this is intentional to suppress console output, adding a comment makes the intent explicit.

🧹 Proposed fix
-      const consoleErrorSpy = vi.spyOn(console, 'error').mockImplementation(() => {});
+      const consoleErrorSpy = vi.spyOn(console, 'error').mockImplementation(() => {
+        // Intentionally empty - suppress expected error output during test
+      });
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/fetch-pricing-data.ts`:
- Around line 105-110: The catch block in scripts/fetch-pricing-data.ts
currently logs the error and exits without ensuring resources/pricing.json
exists; update that catch (error) handler to check for the presence of the
target file (resources/pricing.json) and, if absent, create a minimal fallback
JSON file (e.g., an empty or default pricing object) before calling
process.exit(0). Reference the catch block around process.exit(0) and use
filesystem utilities used elsewhere in the file (fs.existsSync/fs.writeFileSync
or their async equivalents) to atomically create the fallback file and log that
a fallback was written.

In
`@src/renderer/components/chat/SessionContextPanel/components/SessionContextHeader.tsx`:
- Line 23: Replace the renderer-side import of SessionMetrics from the backend
module with the shared types module: update the import of SessionMetrics in the
SessionContextHeader component and the SessionContextPanel types file to import
from `@shared/types` instead of `@main/types` so the renderer only depends on shared
types (symbol to change: SessionMetrics).


@KaustubhPatange KaustubhPatange marked this pull request as ready for review February 22, 2026 10:36
@coderabbitai coderabbitai bot added the "dependencies" label Feb 22, 2026
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
src/main/utils/jsonl.ts (2)

268-270: Use the existing logger for consistency.

The file already imports and uses logger from @shared/utils/logger (line 35). Using console.error here is inconsistent with the rest of the file.

♻️ Proposed fix
   } catch (error) {
-    console.error('Failed to load pricing data:', error);
+    logger.error('Failed to load pricing data:', error);
     // Return empty object if pricing data can't be loaded
     pricingCache = {};

219-220: Move imports to the top of the file.

The fs and path imports are placed mid-file, which violates the import organization convention. As per coding guidelines, imports should be organized at the top of the file in order: external packages, path aliases, then relative imports.

♻️ Proposed fix

Add these imports at the top of the file with other external packages (after line 12):

import * as fs from 'fs';
import * as path from 'path';

Then remove lines 219-220.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/main/utils/jsonl.ts`:
- Line 338: Remove the dead local variable modelName and any assignments to it:
delete the declaration "let modelName: string | undefined;" and remove the later
assignment block that sets modelName (the unused assignment around where the
JSONL parsing assigns model-related values). Ensure no other code depends on
modelName; if any logic was intended to use it, instead use the existing
variables already present in the function (e.g., the parsed model or source
fields) or implement that usage explicitly.


@holstein13
Contributor

Hey @KaustubhPatange — nice work on this! Just wanted to flag that we've already landed cost calculation in PR #60 (Session Analysis Report), which computes per-session cost using a hardcoded pricing table with fuzzy model matching, and includes richer analytical layers on top (cost per commit, cost per line changed, subagent cost share, severity-graded assessment badges, etc.).

Your approach has some nice properties we didn't pursue — the LiteLLM-sourced pricing data and tiered 200k-token thresholds are more precise than our hardcoded table. If you're interested, it might be worth looking at PR #60 to see if there's a way to combine efforts: e.g., swapping our hardcoded pricing for your external data source while keeping the analytical report structure.

Happy to coordinate on this!


KaustubhPatange commented Feb 22, 2026

Hey @KaustubhPatange — nice work on this! Just wanted to flag that we've already landed cost calculation in PR #60 (Session Analysis Report), which computes per-session cost using a hardcoded pricing table with fuzzy model matching, and includes richer analytical layers on top (cost per commit, cost per line changed, subagent cost share, severity-graded assessment badges, etc.).

Your approach has some nice properties we didn't pursue — the LiteLLM-sourced pricing data and tiered 200k-token thresholds are more precise than our hardcoded table. If you're interested, it might be worth looking at PR #60 to see if there's a way to combine efforts: e.g., swapping our hardcoded pricing for your external data source while keeping the analytical report structure.

Happy to coordinate on this!

I think your PR is broader and includes a lot of features under reports. The common bit from my PR, which you can look at after merge, is the cost calculation part in jsonl.ts. I use the cost calculation to show values in the UI rather than in a report, so your feature can stay intact and the two can be worked on in parallel.

I think once @matt1398 confirms this we should proceed to do what's best for the project.

Edit: This PR also extends the existing calculateMetrics function, which previously hardcoded costUsd to 0 (likely a placeholder for the then-unimplemented cost calculation). That missing functionality is now implemented in calculateMetrics.

@matt1398 matt1398 merged commit 58c979f into matt1398:main Feb 23, 2026
4 checks passed
@holstein13
Contributor

Most of the LOC is from resources/pricing.json which is a one-time addition to electron resources. The actual code changes are pretty easy to verify.

Now that our code has been merged, I'm going to take a first pass at unifying our approaches. My goal is to use your pricing models in the new sessions analytics screen.

@KaustubhPatange
Contributor Author

Most of the LOC is from resources/pricing.json which is a one-time addition to electron resources. The actual code changes are pretty easy to verify.

Now that our code has been merged, I'm going to take a first pass at unifying our approaches. My goal is to use your pricing models in the new sessions analytics screen.

Sounds nice, let me know if I can help in any way.

Development

Successfully merging this pull request may close these issues.

Feature: Add cost ($) to token usage

3 participants