
Conversation

Contributor

@minimAluminiumalism minimAluminiumalism commented Jul 31, 2025

  • I have added tests that cover my changes.
  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • (If applicable) I have updated the documentation accordingly.

Fixes #2898; related to #3142 (comment) and #3168. Ref: https://platform.openai.com/docs/api-reference/chat/object#chat/object-usage

I've removed the enrich_token_usage parameter from OpenAI, and also removed it from Groq and SageMaker, where it never had any practical effect. I've preserved it in Anthropic (the same applies to Bedrock) because there it controls the calculation logic for anthropic.Anthropic().messages.count_tokens.

Removed:

  • OpenAI
  • Groq
  • SageMaker

Retained:

  • Anthropic
  • Bedrock
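After the removal, the affected instrumentors' configuration reduces to the two remaining options the reviewers mention (exception_logger and use_legacy_attributes). A minimal sketch, assuming those parameter names and defaults:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Config:
    # enrich_token_usage is gone; only the two remaining options stay.
    # Field names follow the parameters mentioned in the review comments;
    # the default values here are assumptions for illustration.
    exception_logger: Optional[Callable[[Exception], None]] = None
    use_legacy_attributes: bool = True
```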

Important

Remove the enrich_token_usage parameter from the OpenAI, Groq, and SageMaker instrumentors, simplifying token usage calculation.

  • Behavior:
    • Removed enrich_token_usage parameter from OpenAI, Groq, and SageMaker instrumentors.
    • Retained enrich_token_usage for Anthropic and Bedrock due to specific logic requirements.
    • Simplified token usage calculation to rely solely on API response.
  • Tests:
    • Updated tests in test_chat.py and test_completions.py to reflect removal of enrich_token_usage.
    • Adjusted assertions in test_chat_streaming and test_completion_streaming to check token usage only if provided by API.
  • Instrumentation:
    • Updated init_openai_instrumentor and init_sagemaker_instrumentor in tracing.py to remove enrich_token_usage parameter.
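The "rely solely on API response" behavior can be sketched as follows. The helper name and attribute keys are hypothetical stand-ins, not the instrumentation's actual identifiers:

```python
def set_token_usage_attributes(span_attrs: dict, response: dict) -> dict:
    """Record token usage only when the API response carries a `usage`
    block; there is no tiktoken-based local counting fallback anymore."""
    usage = response.get("usage")
    if not usage:
        return span_attrs  # nothing recorded when the API omits usage
    span_attrs["gen_ai.usage.prompt_tokens"] = usage.get("prompt_tokens")
    span_attrs["gen_ai.usage.completion_tokens"] = usage.get("completion_tokens")
    span_attrs["gen_ai.usage.total_tokens"] = usage.get("total_tokens")
    return span_attrs
```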

This description was created by Ellipsis for d77d742.

Summary by CodeRabbit

  • Refactor

    • Removed support for the enrich_token_usage configuration option across OpenAI, SageMaker, and Groq instrumentations.
    • Token usage metrics are now only collected if provided directly by the API response; local token counting and related fallbacks have been eliminated.
    • Updated tests and fixtures to reflect the removal of enrich_token_usage and related logic.
  • Tests

    • Adjusted test cases and fixtures to align with the new token usage handling and updated API endpoints.
    • Improved test reliability by conditionally asserting token usage attributes only when present.


coderabbitai bot commented Jul 31, 2025

Warning

Rate limit exceeded

@minimAluminiumalism has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 8 minutes and 50 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.


📥 Commits

Reviewing files that changed from the base of the PR and between 53c3ed8 and 45462ea.

⛔ Files ignored due to path filters (1)
  • packages/opentelemetry-instrumentation-openai/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (1)
  • packages/opentelemetry-instrumentation-openai/pyproject.toml (1 hunks)

Walkthrough

This change removes the enrich_token_usage parameter and all related logic from the OpenTelemetry instrumentations for Groq, OpenAI, and SageMaker. It eliminates local token counting utilities, configuration flags, and test code referencing this functionality. Tests and fixtures are updated to reflect these removals and to use mock clients and updated response data.

Changes

Cohort / File(s) Change Summary
Groq Instrumentor: Remove enrich_token_usage
packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py, packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/config.py, packages/opentelemetry-instrumentation-groq/tests/traces/conftest.py
Removed the enrich_token_usage parameter from the GroqInstrumentor constructor and its configuration; updated all test fixtures to instantiate the instrumentor without this argument.
OpenAI Instrumentor: Remove enrich_token_usage and token counting
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/__init__.py, packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py, packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/chat_wrappers.py, packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/completion_wrappers.py, packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/config.py, packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
Removed the enrich_token_usage parameter and config from the instrumentor; eliminated local token counting logic and related imports; functions for token usage now rely solely on API response fields; removed the utility function and config flag for conditional token usage enrichment.
OpenAI Instrumentor Tests: Update for enrich_token_usage removal and mock client
packages/opentelemetry-instrumentation-openai/tests/conftest.py, packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_streaming.yaml, packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py, packages/opentelemetry-instrumentation-openai/tests/traces/test_completions.py, packages/opentelemetry-instrumentation-openai/tests/metrics/test_openai_metrics.py, packages/opentelemetry-instrumentation-openai/tests/traces/test_azure.py
Removed all references to enrich_token_usage in fixtures; introduced a mock OpenAI client and a Deepseek client; updated cassette and tests to use local endpoints and conditional assertions for token usage; removed environment variable manipulation for token usage enrichment; updated tests to handle optional token usage attributes and use alternative API endpoints.
SageMaker Instrumentor: Remove enrich_token_usage
packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/__init__.py, packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/config.py, packages/opentelemetry-instrumentation-sagemaker/tests/conftest.py
Removed the enrich_token_usage parameter and related config from the instrumentor and all test fixtures.
SDK Tracing: Remove enrich_token_usage from instrumentor init
packages/traceloop-sdk/traceloop/sdk/tracing/tracing.py
Removed the enrich_token_usage argument from OpenAI and SageMaker instrumentor initialization in SDK tracing setup.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Instrumentor
    participant API

    User->>Instrumentor: Initialize (no enrich_token_usage)
    User->>Instrumentor: Send request to API
    Instrumentor->>API: Make API call
    API-->>Instrumentor: Respond with usage data (if available)
    Instrumentor->>Instrumentor: Extract token usage from API response (no local counting)
    Instrumentor-->>User: Return response and metrics

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Assessment against linked issues

Objective Addressed Explanation
Remove reliance on enrich_token_usage flag to record token usage metrics if response has usage info (#2898)
Use token usage information directly from API response to record metrics, removing local token counting fallback (#2898)
Support recording token usage metrics for custom or incompatible models without tiktoken (#2898)

Assessment against linked issues: Out-of-scope changes

No out-of-scope changes detected.

Suggested reviewers

  • nirga

Poem

A carrot for code, a hop for a test,
The token counts now put to rest.
No more flags or counting here—
Just API truth, crisp and clear!
With mock clients and fixtures anew,
This rabbit says: "Good job, crew!" 🥕🐇


Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to d77d742 in 1 minute and 39 seconds.
  • Reviewed 1781 lines of code in 18 files
  • Skipped 1 file when reviewing.
  • Skipped posting 5 draft comments. View those below.
1. packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/__init__.py:172
  • Draft comment:
    Removal of the unused parameter is clean; the init now only accepts exception_logger and use_legacy_attributes.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it simply states what the code change does without providing any actionable feedback or suggestions. It doesn't ask for confirmation or suggest improvements, nor does it point out any potential issues. Therefore, it should be removed according to the rules.
2. packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/config.py:1
  • Draft comment:
    Config now only contains exception_logger and use_legacy_attributes. Check that any external documentation is updated accordingly.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is asking the PR author to ensure that external documentation is updated, which falls under the rule of not asking the author to update documentation or descriptions. It doesn't provide a specific code suggestion or highlight a potential issue with the code itself.
3. packages/opentelemetry-instrumentation-sagemaker/tests/conftest.py:50
  • Draft comment:
    Updated tests properly remove the deprecated parameter in instrument_legacy and instrument_with_* fixtures. This ensures consistency with the removed parameter.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, explaining what was done in the code without providing any actionable feedback or suggestions. It doesn't ask for any specific changes or confirm any intentions, which violates the rules.
4. packages/traceloop-sdk/traceloop/sdk/tracing/tracing.py:841
  • Draft comment:
    The init_sagemaker_instrumentor function now correctly instantiates SageMakerInstrumentor without the enrich_token_usage parameter. Ensure end-to-end consistency for users.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/traceloop-sdk/traceloop/sdk/tracing/tracing.py:493
  • Draft comment:
    Ensure that any internal or external documentation references to 'enrich_token_usage' are updated to reflect its removal.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_WhwBPN3iFkApPecc



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_streaming.yaml (4)

17-19: Filter volatile headers to keep the cassette stable

host and especially traceparent change between test runs. Consider adding them to vcr.use_cassette(..., filter_headers=[...]) so the values are redacted. This prevents needless cassette churn and avoids leaking trace-ids into the repo.


44-239: Trim the gigantic streaming body

The cassette stores >200 individual SSE chunks that are irrelevant to the assertions. Keeping only the first few chunks plus the final [DONE] token drastically reduces cassette size and diff noise while still exercising streaming logic.


248-248: Drop the Date header

Date varies on every recording and provides no value for assertions. Redacting it (same technique as other volatile headers) removes unnecessary diffs.


241-252: Remove internal server header

X-Powered-By: Express discloses implementation details of the local mock server and provides no testing value. Safe to strip.
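The header filtering suggested in these nitpicks can be expressed as a small redaction helper. With vcrpy you would normally pass these names via `filter_headers=` on `use_cassette` instead; the header list below is taken from the comments above:

```python
# Headers that change between recordings and should not land in cassettes.
VOLATILE_HEADERS = {"host", "traceparent", "date", "x-powered-by"}


def redact_volatile_headers(headers: dict) -> dict:
    """Drop volatile headers before a cassette is written (sketch only;
    with vcrpy the same names go to filter_headers=[...])."""
    return {k: v for k, v in headers.items() if k.lower() not in VOLATILE_HEADERS}
```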

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 168236a and d77d742.

⛔ Files ignored due to path filters (1)
  • packages/opentelemetry-instrumentation-openai/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (17)
  • packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py (0 hunks)
  • packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/config.py (0 hunks)
  • packages/opentelemetry-instrumentation-groq/tests/traces/conftest.py (2 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/__init__.py (0 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (0 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/chat_wrappers.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/completion_wrappers.py (2 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/config.py (0 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (0 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/conftest.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_streaming.yaml (2 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (8 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_completions.py (2 hunks)
  • packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/__init__.py (0 hunks)
  • packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/config.py (0 hunks)
  • packages/opentelemetry-instrumentation-sagemaker/tests/conftest.py (3 hunks)
  • packages/traceloop-sdk/traceloop/sdk/tracing/tracing.py (0 hunks)
💤 Files with no reviewable changes (9)
  • packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/config.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
  • packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/config.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/__init__.py
  • packages/traceloop-sdk/traceloop/sdk/tracing/tracing.py
  • packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/__init__.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/config.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py
  • packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py
🧰 Additional context used
🧬 Code Graph Analysis (4)
packages/opentelemetry-instrumentation-groq/tests/traces/conftest.py (1)
packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/__init__.py (1)
  • GroqInstrumentor (383-483)
packages/opentelemetry-instrumentation-sagemaker/tests/conftest.py (1)
packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/__init__.py (1)
  • SageMakerInstrumentor (169-213)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/completion_wrappers.py (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (1)
  • _set_span_stream_usage (251-272)
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (3)
packages/opentelemetry-instrumentation-openai/tests/traces/test_azure.py (1)
  • test_chat_streaming (410-463)
packages/opentelemetry-instrumentation-openai/tests/conftest.py (2)
  • instrument_legacy (128-143)
  • mock_openai_client (47-51)
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (1)
  • SpanAttributes (64-257)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Test Packages (3.12)
  • GitHub Check: Test Packages (3.9)
  • GitHub Check: Test Packages (3.11)
  • GitHub Check: Test Packages (3.10)
  • GitHub Check: Build Packages (3.11)
  • GitHub Check: Lint
🔇 Additional comments (22)
packages/opentelemetry-instrumentation-sagemaker/tests/conftest.py (3)

52-52: LGTM! Constructor call updated correctly.

The removal of the enrich_token_usage parameter aligns with the updated SageMakerInstrumentor constructor signature, which only accepts exception_logger and use_legacy_attributes parameters.


67-67: LGTM! Constructor call updated correctly.

The removal of the enrich_token_usage parameter aligns with the updated SageMakerInstrumentor constructor signature. The use_legacy_attributes=False parameter is correctly maintained for the non-legacy instrumentation test fixture.


85-85: LGTM! Constructor call updated correctly.

The removal of the enrich_token_usage parameter aligns with the updated SageMakerInstrumentor constructor signature. The use_legacy_attributes=False parameter is correctly maintained for the non-legacy instrumentation test fixture.

packages/opentelemetry-instrumentation-groq/tests/traces/conftest.py (3)

84-94: LGTM! Parameter removal aligns with updated constructor.

The removal of enrich_token_usage=True from the GroqInstrumentor() instantiation correctly reflects the updated constructor signature where this parameter no longer exists.


97-115: LGTM! Consistent parameter removal.

The instrument_with_content fixture has been consistently updated to remove the enrich_token_usage parameter while preserving the use_legacy_attributes=False configuration and event logger setup.


118-136: LGTM! Parameter cleanup completed.

The instrument_with_no_content fixture has been properly updated to remove the enrich_token_usage parameter, maintaining consistency with the other test fixtures and the updated GroqInstrumentor constructor.

packages/opentelemetry-instrumentation-openai/tests/conftest.py (2)

46-51: LGTM! Good addition for test isolation.

The new mock_openai_client fixture provides a consistent way to test with a local mock server, avoiding real API calls and improving test reliability.


132-135: Parameter removal correctly implemented.

The removal of enrich_token_usage from the OpenAI instrumentor constructor aligns with the PR objective to simplify token calculation logic.

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/chat_wrappers.py (1)

533-540: Simplified token usage logic correctly implemented.

The change to rely solely on API-provided usage data simplifies the logic and removes the complexity of local token counting. The function now has a clear single source of truth for token metrics.

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/completion_wrappers.py (2)

232-244: Consistent simplification of token usage logic.

The function now uses only API-provided usage data, maintaining consistency with the chat wrapper implementation and the overall PR objective to simplify token calculation.


255-257: Good addition to preserve streaming usage data.

The logic to capture usage information from stream chunks ensures token usage data is available for the simplified token usage calculation when streaming responses are used.
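The stream-chunk usage capture described in this comment might look roughly like the sketch below. It operates on plain dicts for illustration; the real wrapper works on OpenAI stream chunk objects:

```python
def capture_stream_usage(chunks):
    """Keep the last non-empty `usage` seen while iterating a stream.
    OpenAI emits usage on the final chunk when usage reporting is
    enabled; earlier chunks carry usage=None."""
    usage = None
    for chunk in chunks:
        if chunk.get("usage"):
            usage = chunk["usage"]
    return usage
```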

packages/opentelemetry-instrumentation-openai/tests/traces/test_completions.py (3)

348-394: Test updates appropriately reflect the instrumentation changes.

The use of mock_openai_client, updated API base URL expectations, and conditional token usage assertions correctly align with the simplified token calculation logic. The removal of environment variable manipulation simplifies the test setup.


419-430: Robust conditional token usage testing.

The conditional assertions for token usage provide good test coverage while accounting for the fact that token usage data may not always be available from the API response. This makes the tests more resilient.


481-493: Consistent test pattern maintained across streaming tests.

The conditional token usage assertions are consistently applied across all streaming tests, with clear comments explaining the dependency on API support. This ensures robust test coverage under the new token usage approach.

packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_streaming.yaml (1)

13-13: Verify the recorded content-length

The body has clearly changed since the previous recording, yet content-length is still hard-coded to 123. If VCR is configured to validate request headers this will break replay. Please re-record or delete the header.

packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (7)

584-584: LGTM - Switch to mock client for testing

The change to use mock_openai_client instead of openai_client is appropriate for this test, providing better isolation and consistency in the test environment.


609-611: LGTM - Updated API base URL for mock client

The assertion correctly reflects the mock client's localhost base URL configuration, maintaining test accuracy.


615-616: LGTM - More robust event count assertion

The updated comment and assertion appropriately account for potential differences in mock server behavior while still validating the core functionality (presence of events).


626-628: LGTM - Added clarifying comment for token usage validation

The comment helps explain the validation logic for token usage attributes, improving code readability.


630-632: LGTM - Updated response ID for mock server

The response ID correctly reflects the mock server's response data, maintaining test accuracy.


678-680: LGTM - Conditional token usage validation aligns with enrich_token_usage removal

The conditional token usage assertions are appropriate given the removal of the enrich_token_usage parameter. The instrumentation now relies solely on token usage data provided by the API response, which may not always be present.

This represents a behavioral change where token usage attributes are no longer guaranteed to be available in all scenarios, but the validation logic correctly handles both cases.

Also applies to: 741-741, 749-750, 813-814, 865-866, 934-935
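The conditional-assertion pattern described here reads roughly like this in test form (the attribute key is assumed for illustration):

```python
def check_token_usage(attrs: dict) -> bool:
    """Validate token usage when present; return False when the API
    simply did not provide usage, which is now an acceptable outcome."""
    key = "gen_ai.usage.total_tokens"
    if key not in attrs:
        return False  # usage is optional under the new behavior
    assert attrs[key] > 0
    return True
```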


741-741: LGTM - Clarifying comment about optional token usage

The comment appropriately explains that token usage attributes are optional and API-dependent, helping future maintainers understand the conditional assertion logic.

Contributor

@feng-95 feng-95 Jul 31, 2025


Have you removed the tiktoken requirement from pyproject.toml?

Contributor Author


I've only removed the tiktoken dependency from OpenAI.

Since I haven't modified the other instrumentations, I've kept their tiktoken dependencies to avoid introducing unrelated issues.

Member

@nirga nirga left a comment


@nirga nirga merged commit 8176494 into traceloop:main Jul 31, 2025
10 checks passed
nina-kollman pushed a commit that referenced this pull request Aug 11, 2025
…token calculation (#3205)

Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>

Development

Successfully merging this pull request may close these issues.

🐛 Bug Report: OPENAI Instrumentation won't record token usage metrics on auto-instrumentation or without tiktoken
