
Conversation

@nirga nirga commented Aug 15, 2025

Summary

  • Migrates Google Generative AI instrumentation from deprecated google-generativeai to new googleapis/python-genai library
  • Adds backward compatibility support for both libraries
  • Fixes several bugs in the existing instrumentation code
  • Fixes 🐛 Bug Report: Migrate to googleapis/python-genai #3281

Changes Made

  • Backward Compatibility: Added graceful import handling for both old and new libraries with fallback support
  • Library Detection: Enhanced instrumentor to automatically detect available library and use appropriate methods
  • Bug Fixes:
    • Fixed import issues with GenerateContentResponse type
    • Removed duplicate WRAPPED_METHODS definitions
    • Fixed _instrument method to use self._wrapped_methods() instead of hardcoded list
  • Test Updates: Updated test configuration to support both client libraries with preference for new google-genai
  • New Tests: Added comprehensive compatibility tests to verify both libraries work correctly

Test Results

  • ✅ All 7 tests pass
  • ✅ No linting issues
  • ✅ Both libraries detected and instrumented correctly

Migration Details

The instrumentation now:

  1. Prefers the new google-genai library when available
  2. Falls back to google-generativeai for backward compatibility
  3. Automatically detects which library is installed and instruments accordingly
  4. Maintains the same API surface for users

🤖 Generated with Claude Code


Important

Migrates Google Generative AI instrumentation to googleapis/python-genai, ensuring backward compatibility and fixing bugs, with updated and new tests for both libraries.

  • Migration:
    • Migrates from google-generativeai to googleapis/python-genai.
    • Prefers google-genai but falls back to google-generativeai if needed.
    • Updates imports in __init__.py and event_emitter.py.
  • Backward Compatibility:
    • Adds import handling for both libraries.
    • Uses _wrapped_methods() for dynamic method selection in __init__.py.
  • Bug Fixes:
    • Fixes import issues with GenerateContentResponse.
    • Removes duplicate WRAPPED_METHODS definitions.
  • Testing:
    • Updates tests in conftest.py and adds test_new_library_instrumentation.py.
    • Ensures tests support both libraries and adds compatibility tests.
  • Misc:
    • Removes google-generativeai from pyproject.toml dependencies.

This description was created by Ellipsis for 538d9f8.


Summary by CodeRabbit

  • New Features
    • Added support for the new Google GenAI client API, covering both sync and async model methods.
  • Refactor
    • Simplified instrumentation by removing legacy fallbacks and unifying wrapped method resolution.
  • Chores
    • Updated dependencies to require google-genai (>= 1.0.0) and removed the legacy package from extras and tests.
  • Tests
    • Added tests validating wrapping behavior, dependency requirements, and compatibility with the new client.
  • Documentation
    • Updated import paths and references to align with the new Google GenAI package naming.

…n-genai

- Add backward compatibility support for both google-generativeai and google-genai libraries
- Fix import issues with graceful fallback for GenerateContentResponse
- Remove duplicate WRAPPED_METHODS definitions
- Fix _instrument method to use self._wrapped_methods() instead of hardcoded list
- Update test configuration to support both client libraries with preference for new google-genai
- Add comprehensive compatibility tests to verify both libraries work correctly
- Closes #3281

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
coderabbitai bot commented Aug 15, 2025

Important

Review skipped

Review was skipped due to path filters

⛔ Files ignored due to path filters (1)
  • packages/opentelemetry-instrumentation-google-generativeai/poetry.lock is excluded by !**/*.lock


Walkthrough

Migration of the Google Generative AI instrumentation from google-generativeai to google-genai: updated imports, simplified wrapped methods (removed legacy fallbacks), fixed instrumentation dependency to google-genai >= 1.0.0, adjusted tests to the new Client API, and updated pyproject extras and test dependencies.
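The wrapping changes summarized here follow the usual instrumentor pattern: each target method is swapped for a tracing wrapper at instrument time and restored at uninstrument time. A stdlib-only sketch of that pattern is below; the real code uses `wrapt.wrap_function_wrapper`, and `Models` here is a hypothetical stand-in, not the real `google.genai.models.Models`:

```python
import functools

class Models:
    """Hypothetical stand-in for google.genai.models.Models."""
    def generate_content(self, model, contents):
        return f"response to {contents!r}"

def instrument(cls, method_name, on_call):
    """Replace cls.method_name with a wrapper that reports each call."""
    original = getattr(cls, method_name)

    @functools.wraps(original)  # also sets wrapper.__wrapped__ = original
    def wrapper(self, *args, **kwargs):
        on_call(method_name, args, kwargs)  # a real instrumentor would start a span here
        return original(self, *args, **kwargs)

    setattr(cls, method_name, wrapper)

calls = []
instrument(Models, "generate_content", lambda name, a, kw: calls.append(name))
result = Models().generate_content(model="gemini-1.5-flash", contents="hi")
```

The `__wrapped__` attribute set by `functools.wraps` is the same hook the PR's tests assert on to verify that a method is (or is no longer) instrumented.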

Changes

Cohort / File(s) Summary
Instrumentation core migration
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py, .../event_emitter.py
Switched imports to google.genai.types.GenerateContentResponse; removed legacy fallback logic and utils check; _wrapped_methods() always returns new WRAPPED_METHODS; loops now iterate over _wrapped_methods(); instrumentation_dependencies() fixed to ("google-genai >= 1.0.0",).
Tests updated to new API
packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py, packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py
Moved from google.generativeai to google.genai; replaced configuration/GenerativeModel with genai.Client and models; added tests asserting wrapping, dependencies, and wrapped targets for Models and AsyncModels.
Packaging/deps cleanup
packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml
Removed google-generativeai from tests and extras; instruments extra now only includes google-genai.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

A hop, a skip—goodbye, old lane,
I nudge the code toward “genai” train.
Wrappers tidy, paths align,
Tests now dance in modern time.
In burrows deep, I stamp with glee—
Dependency carrots: 1.0.0!
Thump-thump, shipped it cleanly. 🥕✨



@ellipsis-dev ellipsis-dev bot left a comment

Important

Looks good to me! 👍

Reviewed everything up to 0f301fb in 1 minute and 51 seconds.
  • Reviewed 145 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:66
  • Draft comment:
    ⚠️ The WRAPPED_METHODS block was removed, yet _wrapped_methods() still returns WRAPPED_METHODS when 'google.genai' is detected. This will cause a NameError. Please reintroduce the new library mapping or update _wrapped_methods accordingly.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:7
  • Draft comment:
    💡 The try/except import for GenerateContentResponse first tries the legacy library. Since the instrumentation should prefer the new googleapis/python-genai, consider swapping the order to attempt importing from 'google.genai' first.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50% The code shows a clear change adding a new import fallback mechanism. The comment makes a reasonable suggestion about preferring the newer library. However, looking at the full file context, I see that the code actually handles both libraries throughout, with LEGACY_WRAPPED_METHODS for the old library and WRAPPED_METHODS for the new one. The _wrapped_methods() function explicitly checks for google.genai first at L288, suggesting there's already a deliberate strategy for handling the two libraries. The comment may be missing important context about why the import order was chosen this way. There could be backward compatibility reasons for trying the legacy import first. While the suggestion seems reasonable on the surface, the existing code shows a deliberate pattern of handling both libraries, with the newer one actually being preferred in the _wrapped_methods() function. The comment should be deleted as it makes assumptions about the import order without understanding the full context of how the two libraries are handled throughout the code.

Workflow ID: wflow_kOSvYvMPXa4mUT9G


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (1)

306-319: Async wrapper is not selected for google.genai AsyncModels.generate_content

The condition only checks for a legacy method name (“generate_content_async”). For the new library, AsyncModels.generate_content is a coroutine and must use the async wrapper. As-is, it will wrap an async function with a sync wrapper and break instrumentation.

Select the wrapper based on actual coroutine-ness (or Async* class) by introspecting the target:

@@
-        for wrapped_method in self._wrapped_methods():
-            wrap_package = wrapped_method.get("package")
-            wrap_object = wrapped_method.get("object")
-            wrap_method = wrapped_method.get("method")
-
-            wrap_function_wrapper(
-                wrap_package,
-                f"{wrap_object}.{wrap_method}",
-                (
-                    _awrap(tracer, event_logger, wrapped_method)
-                    if wrap_method == "generate_content_async"
-                    else _wrap(tracer, event_logger, wrapped_method)
-                ),
-            )
+        for wrapped_method in self._wrapped_methods():
+            wrap_package = wrapped_method.get("package")
+            wrap_object = wrapped_method.get("object")
+            wrap_method = wrapped_method.get("method")
+
+            # Pick async wrapper if the target is a coroutine (e.g., AsyncModels.generate_content)
+            import importlib, inspect
+            mod = importlib.import_module(wrap_package)
+            obj = getattr(mod, wrap_object)
+            fn = getattr(obj, wrap_method)
+            is_async = inspect.iscoroutinefunction(fn) or wrap_object.lower().startswith("async") or wrap_method.endswith("_async")
+
+            wrap_function_wrapper(
+                wrap_package,
+                f"{wrap_object}.{wrap_method}",
+                _awrap(tracer, event_logger, wrapped_method) if is_async else _wrap(tracer, event_logger, wrapped_method),
+            )

Note: Ensure importlib and inspect are imported at module scope or inline as shown.
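The coroutine check proposed in the diff above can be verified in isolation. The sketch below uses hypothetical stand-in classes rather than the real `google.genai` ones, but the detection logic is the same:

```python
import inspect

class Models:
    def generate_content(self):
        return "sync"

class AsyncModels:
    async def generate_content(self):
        return "async"

def needs_async_wrapper(cls, method_name):
    """Pick the async wrapper when the target is a coroutine function,
    an Async* class, or a legacy *_async method name."""
    fn = getattr(cls, method_name)
    return (
        inspect.iscoroutinefunction(fn)
        or cls.__name__.lower().startswith("async")
        or method_name.endswith("_async")
    )
```

Either the `iscoroutinefunction` check or the `Async*` class-name check alone catches `AsyncModels.generate_content`; combining them with the legacy `*_async` suffix keeps both libraries covered.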

🧹 Nitpick comments (8)
packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)

64-72: Prevent flaky KeyError on GOOGLE_API_KEY and make fixture independent of autouse ordering

Relying on an autouse fixture to set the env var can still be brittle in pytest’s fixture execution order. Make the fixture resilient by reading the key via getenv once and reusing it in both branches.

Apply this diff within the fixture to avoid KeyError—no behavior change when the env var is present:

@@
-    # Prefer new client over legacy
-    if genai_new:
-        client = genai_new.Client(api_key=os.environ["GOOGLE_API_KEY"])
+    # Prefer new client over legacy
+    api_key = os.environ.get("GOOGLE_API_KEY", "test_api_key")
+    if genai_new:
+        client = genai_new.Client(api_key=api_key)
         return client.models
     elif genai_legacy:
-        genai_legacy.configure(api_key=os.environ["GOOGLE_API_KEY"])
+        genai_legacy.configure(api_key=api_key)
         return genai_legacy.GenerativeModel("gemini-1.5-flash")

Optionally, to guarantee initialization order, make the fixture depend explicitly on the autouse environment fixture:

@pytest.fixture
def genai_client(environment):
    ...
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (2)

279-285: Return a consistent type from instrumentation_dependencies

Minor consistency: return the same sequence type across branches. Using tuples everywhere avoids surprises.

@@
-        if is_package_installed("google.genai"):
-            return ("google-genai >= 0.1.0",)
-        elif is_package_installed("google.generativeai"):
-            return ["google-generativeai >= 0.5.0"]
-        else:
-            return []
+        if is_package_installed("google.genai"):
+            return ("google-genai >= 0.1.0",)
+        elif is_package_installed("google.generativeai"):
+            return ("google-generativeai >= 0.5.0",)
+        else:
+            return ()

7-13: Avoid runtime imports purely for typing to reduce fragility

If GenerateContentResponse is only used for annotations, consider deferring its import with TYPE_CHECKING or enabling postponed evaluation of annotations. This avoids setting it to None and prevents import churn at runtime.

Option A (preferred): use postponed evaluation

from __future__ import annotations

Option B: TYPE_CHECKING guard

from typing import TYPE_CHECKING
if TYPE_CHECKING:
    from google.generativeai.types.generation_types import GenerateContentResponse as LegacyGenerateContentResponse
    from google.genai.types import GenerateContentResponse as NewGenerateContentResponse
    GenerateContentResponse = LegacyGenerateContentResponse | NewGenerateContentResponse
packages/opentelemetry-instrumentation-google-generativeai/tests/test_import_compatibility.py (5)

1-7: Import pytest to enable conditional skipping

You’re asserting both libraries are present; in typical user environments, only one may be installed. Add pytest and leverage skip conditions in the tests below.

@@
-"""Test that both legacy and new Google GenAI libraries can be imported and instrumented."""
+"""Test that both legacy and new Google GenAI libraries can be imported and instrumented."""
+
+import pytest

9-13: Skip legacy detection test when the legacy package isn’t installed

Asserting presence is brittle; skip instead to keep the suite green across environments.

@@
-def test_legacy_library_detection():
+@pytest.mark.skipif(
+    not is_package_installed("google.generativeai"),
+    reason="google.generativeai not installed",
+)
+def test_legacy_library_detection():
     """Test that the legacy library can be detected."""
-    has_legacy = is_package_installed("google.generativeai")
-    assert has_legacy, "google.generativeai should be installed"
+    assert is_package_installed("google.generativeai")

15-19: Skip new library detection test when the new package isn’t installed

Mirror the legacy case to avoid false negatives when only one lib is present.

@@
-def test_new_library_detection():
+@pytest.mark.skipif(
+    not is_package_installed("google.genai"),
+    reason="google.genai not installed",
+)
+def test_new_library_detection():
     """Test that the new library can be detected."""
-    has_new = is_package_installed("google.genai")
-    assert has_new, "google.genai should be installed"
+    assert is_package_installed("google.genai")

21-34: Skip initialization test when neither library is installed

Keep the test suite environment-agnostic.

@@
-def test_instrumentor_initialization():
+@pytest.mark.skipif(
+    not (is_package_installed("google.genai") or is_package_installed("google.generativeai")),
+    reason="Neither google.genai nor google.generativeai is installed",
+)
+def test_instrumentor_initialization():

36-50: Skip wrapped-methods selection test when neither library is installed

Prevents failures on environments lacking both packages.

@@
-def test_wrapped_methods_selection():
+@pytest.mark.skipif(
+    not (is_package_installed("google.genai") or is_package_installed("google.generativeai")),
+    reason="Neither google.genai nor google.generativeai is installed",
+)
+def test_wrapped_methods_selection():
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 50be41d and 0f301fb.

📒 Files selected for processing (3)
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (2 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (2 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/tests/test_import_compatibility.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.py: Python code must conform to Flake8 linting rules
Do not hardcode API keys in source code; read them from environment variables or a secure vault

Files:

  • packages/opentelemetry-instrumentation-google-generativeai/tests/test_import_compatibility.py
  • packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
🧬 Code Graph Analysis (1)
packages/opentelemetry-instrumentation-google-generativeai/tests/test_import_compatibility.py (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (3)
  • GoogleGenerativeAiInstrumentor (271-328)
  • instrumentation_dependencies (279-285)
  • _wrapped_methods (287-293)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Lint
  • GitHub Check: Build Packages (3.11)
🔇 Additional comments (1)
packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)

5-13: Dual import detection for legacy/new clients looks good

Graceful fallbacks are correct and keep tests portable across environments.

- Remove all backward compatibility code and legacy imports
- Update imports to use only google-genai library
- Simplify instrumentor logic to use new library exclusively
- Update pyproject.toml to depend only on google-genai >= 1.0.0
- Remove legacy test compatibility checks
- Add comprehensive tests for new library instrumentation
- Clean up unused imports and fix linting issues

This is a breaking change that removes support for the deprecated google-generativeai library.
Users must upgrade to google-genai >= 1.0.0.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
@ellipsis-dev ellipsis-dev bot left a comment

Important

Looks good to me! 👍

Reviewed 538d9f8 in 1 minute and 38 seconds.
  • Reviewed 270 lines of code in 6 files
  • Skipped 1 files when reviewing.
  • Skipped posting 6 draft comments. View those below.
1. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:7
  • Draft comment:
    Legacy fallback for importing GenerateContentResponse has been removed. Ensure this change (dropping support for 'google.generativeai') is intentional given the PR description mentions backward compatibility.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the author to ensure that the removal of a legacy fallback is intentional, which is against the rules. It also references the PR description, which we should not assume needs to be filled out. Therefore, this comment should be removed.
2. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:257
  • Draft comment:
    The instrumentation_dependencies function now always returns only 'google-genai >= 1.0.0', eliminating legacy support. Update documentation if legacy support is still desired.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is suggesting an update to documentation based on a change in the function's behavior. It doesn't ask for confirmation of intention or suggest a specific code change. It also doesn't ask for tests to be written. However, it does imply that the PR author should update documentation, which is not allowed by the rules.
3. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:261
  • Draft comment:
    The _wrapped_methods function now only returns WRAPPED_METHODS for 'google.genai', removing the legacy branch. Ensure this aligns with intended support and update the PR description if needed.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to ensure that the change aligns with intended support, which is not allowed. It also suggests updating the PR description, which is against the rules. The comment does not provide a specific code suggestion or ask for a specific test to be written.
4. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py:5
  • Draft comment:
    The import has been switched to 'google.genai.types'. Confirm that removing legacy support here is intentional.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
5. packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml:47
  • Draft comment:
    The extras now only include 'google-genai', with 'google-generativeai' removed. Ensure that documentation reflects this removal of legacy dependency.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% This comment violates several rules. It's asking for documentation updates which isn't a code change. It's also asking to "ensure" something, which falls under the category of verification requests we should avoid. The actual code change (removing the dependency) is clear and intentional. The comment does point out a real change in the package's interface that users might need to know about. Documentation updates could be important for package users. While documentation is important, our rules explicitly state not to make comments that ask for verification or documentation updates. The PR author likely knows what documentation needs updating. This comment should be deleted as it violates our rules about not asking for verification/ensures and not commenting on documentation updates.
6. General:1
  • Draft comment:
    The PR description mentions backward compatibility for both libraries, yet the legacy support code paths have been removed. Update the PR description and documentation to match the current behavior.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_SbWKnB9MD0D6OY1C


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (4)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py (2)

5-5: Avoid hard dependency on google-genai in a shared module used by both clients.

Unconditional import of GenerateContentResponse from google.genai.types forces the new library to be present even if only the legacy client is used. If fallback to legacy is still supported, decouple the type import from runtime by using a TYPE_CHECKING guard. This keeps typing without breaking environments that only have the legacy lib.

Is legacy fallback still expected? If yes, consider this change:

-from google.genai.types import GenerateContentResponse
+from typing import TYPE_CHECKING, Any
+
+# Avoid a hard dependency on google-genai solely for typing.
+if TYPE_CHECKING:
+    from google.genai.types import GenerateContentResponse  # pragma: no cover
+else:
+    GenerateContentResponse = Any  # type: ignore[misc, assignment]

60-69: Guard against finish_reason being None or non-enum.

Depending on the library/version, candidate.finish_reason may be an enum, string, or None. Accessing .name can raise AttributeError. Consider a defensive conversion to a string.

Apply this tweak:

-                finish_reason=candidate.finish_reason.name,
+                finish_reason=getattr(candidate.finish_reason, "name", str(candidate.finish_reason or "UNKNOWN")),
packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)

56-57: Fixture returns Models; consider exposing async models too for broader test coverage.

Returning client.models works for current tests. If you plan to add async coverage, you may also return or add a fixture for AsyncModels (if available via the client API), to validate async wrapping end-to-end.

packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (1)

8-41: Also assert async method wrapping to strengthen coverage.

Great checks for Models.generate_content. Add AsyncModels.generate_content_async pre/post assertions to ensure the async path is wrapped and unwrapped too.

Apply this diff:

     # Import the library
     from google import genai
-    from google.genai.models import Models
+    from google.genai.models import Models, AsyncModels
@@
     # Verify methods are not wrapped initially
     assert not hasattr(Models.generate_content, '__wrapped__')
+    assert not hasattr(AsyncModels.generate_content_async, '__wrapped__')
@@
         # Verify methods are now wrapped
         assert hasattr(Models.generate_content, '__wrapped__')
+        assert hasattr(AsyncModels.generate_content_async, '__wrapped__')
@@
         # Verify methods are unwrapped
         assert not hasattr(Models.generate_content, '__wrapped__')
+        assert not hasattr(AsyncModels.generate_content_async, '__wrapped__')
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 0f301fb and 538d9f8.

⛔ Files ignored due to path filters (1)
  • packages/opentelemetry-instrumentation-google-generativeai/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (3 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py (1 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml (1 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (2 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.py: Python code must conform to Flake8 linting rules
Do not hardcode API keys in source code; read them from environment variables or a secure vault

Files:

  • packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py
  • packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py
🧬 Code Graph Analysis (1)
packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (3)
  • GoogleGenerativeAiInstrumentor (249-296)
  • instrumentation_dependencies (257-258)
  • _wrapped_methods (260-261)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Lint
  • GitHub Check: Build Packages (3.11)
🔇 Additional comments (3)
packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)

5-5: LGTM: correct migration to the new import path.

The switch to from google import genai aligns with the new client package structure.

packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (2)

44-50: LGTM: dependency assertion matches instrumentor contract.

The test correctly validates that the instrumentor advertises only google-genai >= 1.0.0 as a dependency.


52-67: LGTM: wrapped-methods introspection is precise.

Asserting both Models and AsyncModels come from google.genai keeps the test focused and resilient.
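As a rough illustration of what such an introspection test checks (stand-in classes, not the real instrumentor: `wrapt`-style wrappers expose the original callable via `__wrapped__`, which `functools.wraps` also sets):

```python
import functools


class Models:
    """Stand-in for google.genai.models.Models."""

    def generate_content(self, prompt):
        return f"echo: {prompt}"


def instrument(cls, name):
    """Wrap a method in place, exposing the original via __wrapped__."""
    original = getattr(cls, name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        # A real instrumentor would start and end a span here.
        return original(self, *args, **kwargs)

    setattr(cls, name, wrapper)


instrument(Models, "generate_content")

# The wrapped method is detectable, and behavior is preserved.
assert hasattr(Models.generate_content, "__wrapped__")
assert Models().generate_content("hi") == "echo: hi"
```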


 [tool.poetry.extras]
-instruments = ["google-generativeai", "google-genai"]
+instruments = ["google-genai"]
💡 Verification agent

🧩 Analysis chain

Confirm intent: dropping the legacy package from extras may surprise users who rely on [instruments] to install both.

If backward compatibility with google-generativeai is still intended at runtime, consider keeping it in the instruments extra (extras are optional; this won't force installations but helps users who expect it). Otherwise, update docs to clarify only google-genai is installed via extras.

You can verify how often users rely on the extras and whether legacy support is still present in the code by running:

If you decide to keep legacy in extras, here’s the minimal change:

 [tool.poetry.extras]
-instruments = ["google-genai"]
+instruments = ["google-genai", "google-generativeai"]

🏁 Script executed:

#!/bin/bash
# Inspect instrumentation dependencies and wrappers to confirm legacy support
rg -nP -C2 'def instrumentation_dependencies\('
rg -nP -C3 'WRAPPED_METHODS\s*='
rg -nP -C3 'google\.genai|google\.generativeai' opentelemetry/instrumentation/google_generativeai

Length of output: 39844


Confirm extras match implementation and update docs
The instrumentation_dependencies method in google_generativeai/__init__.py only returns "google-genai >= 1.0.0", and all wrapped methods reference google.genai.*—not the legacy google.generativeai package. Dropping the old package from the instruments extra is therefore correct.

Please update documentation (e.g. README, docs site) to state that the instruments extra now only installs google-genai and that legacy google-generativeai support has been removed.

🤖 Prompt for AI Agents
In packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml
around line 47, the instruments extra currently lists the legacy package but the
implementation only depends on "google-genai >= 1.0.0" and references
google.genai.*; update project documentation (README.md, docs site pages, and
any packaging or changelog entries) to state that the instruments extra now
installs only google-genai (not google-generativeai), that legacy
google-generativeai support has been removed, and confirm the extras/spec in
pyproject.toml matches the implementation (ensure instruments = ["google-genai"]
and update any usage examples, install commands, and migration notes
accordingly).

- Run poetry lock to sync with pyproject.toml changes
- Update to latest compatible versions of dependencies
- Fixes CI build failures due to lock file mismatch

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Skipped PR review on 97fd2ee because no changed files had a supported extension. If you think this was in error, please contact us and we'll fix it right away.

@nirga nirga changed the title fix: migrate Google Generative AI instrumentation to googleapis/python-genai fix(google-genai): migrate Google Generative AI instrumentation to googleapis/python-genai Aug 15, 2025


Development

Successfully merging this pull request may close these issues.

🐛 Bug Report: Migrate to googleapis/python-genai
