fix(google-genai): migrate Google Generative AI instrumentation to googleapis/python-genai #3282
Conversation
…n-genai

- Add backward compatibility support for both google-generativeai and google-genai libraries
- Fix import issues with graceful fallback for GenerateContentResponse
- Remove duplicate WRAPPED_METHODS definitions
- Fix _instrument method to use self._wrapped_methods() instead of a hardcoded list
- Update test configuration to support both client libraries, with preference for the new google-genai
- Add comprehensive compatibility tests to verify both libraries work correctly
- Closes #3281

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Important: Review skipped. Review was skipped due to path filters. ⛔ Files ignored due to path filters (1). CodeRabbit blocks several paths by default; you can override this behavior by explicitly including those paths in the path filters.

Walkthrough: Migration of the Google Generative AI instrumentation from google-generativeai to google-genai: updated imports, simplified wrapped methods (removed legacy fallbacks), fixed the instrumentation dependency to google-genai >= 1.0.0, adjusted tests to the new Client API, and updated pyproject extras and test dependencies.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Important
Looks good to me! 👍
Reviewed everything up to 0f301fb in 1 minute and 51 seconds.

- Reviewed 145 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:66
- Draft comment: ⚠️ The WRAPPED_METHODS block was removed, yet _wrapped_methods() still returns WRAPPED_METHODS when 'google.genai' is detected. This will cause a NameError. Please reintroduce the new library mapping or update _wrapped_methods accordingly.
- Reason this comment was not posted: the comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:7
- Draft comment: 💡 The try/except import for GenerateContentResponse first tries the legacy library. Since the instrumentation should prefer the new googleapis/python-genai, consider swapping the order to attempt importing from 'google.genai' first.
- Reason this comment was not posted: decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 20% vs. threshold = 50%). The code shows a clear change adding a new import fallback mechanism, and the comment makes a reasonable suggestion about preferring the newer library. However, the full file context shows the code handles both libraries throughout, with LEGACY_WRAPPED_METHODS for the old library and WRAPPED_METHODS for the new one. The _wrapped_methods() function explicitly checks for google.genai first, suggesting a deliberate strategy for handling the two libraries; there could also be backward-compatibility reasons for trying the legacy import first. The comment makes assumptions about the import order without understanding how the two libraries are handled throughout the code, so it should be deleted.
Actionable comments posted: 0
🔭 Outside diff range comments (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (1)
306-319: Async wrapper is not selected for google.genai AsyncModels.generate_content

The condition only checks for a legacy method name ("generate_content_async"). For the new library, AsyncModels.generate_content is a coroutine and must use the async wrapper. As-is, it will wrap an async function with a sync wrapper and break instrumentation.
Select the wrapper based on actual coroutine-ness (or Async* class) by introspecting the target:
```diff
@@
-        for wrapped_method in self._wrapped_methods():
-            wrap_package = wrapped_method.get("package")
-            wrap_object = wrapped_method.get("object")
-            wrap_method = wrapped_method.get("method")
-
-            wrap_function_wrapper(
-                wrap_package,
-                f"{wrap_object}.{wrap_method}",
-                (
-                    _awrap(tracer, event_logger, wrapped_method)
-                    if wrap_method == "generate_content_async"
-                    else _wrap(tracer, event_logger, wrapped_method)
-                ),
-            )
+        for wrapped_method in self._wrapped_methods():
+            wrap_package = wrapped_method.get("package")
+            wrap_object = wrapped_method.get("object")
+            wrap_method = wrapped_method.get("method")
+
+            # Pick async wrapper if the target is a coroutine (e.g., AsyncModels.generate_content)
+            import importlib, inspect
+            mod = importlib.import_module(wrap_package)
+            obj = getattr(mod, wrap_object)
+            fn = getattr(obj, wrap_method)
+            is_async = (
+                inspect.iscoroutinefunction(fn)
+                or wrap_object.lower().startswith("async")
+                or wrap_method.endswith("_async")
+            )
+
+            wrap_function_wrapper(
+                wrap_package,
+                f"{wrap_object}.{wrap_method}",
+                _awrap(tracer, event_logger, wrapped_method)
+                if is_async
+                else _wrap(tracer, event_logger, wrapped_method),
+            )
```

Note: Ensure importlib and inspect are imported at module scope or inline as shown.
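The core of the fix above is choosing a wrapper by introspecting whether the target is a coroutine function. A minimal, self-contained sketch of that selection logic (all names here are hypothetical illustrations, not the instrumentation's actual API):

```python
import asyncio
import inspect


def make_sync_wrapper(fn):
    def wrapper(*args, **kwargs):
        # A real instrumentation would start a span here.
        return fn(*args, **kwargs)
    return wrapper


def make_async_wrapper(fn):
    async def wrapper(*args, **kwargs):
        # Awaiting here is what makes the async path work correctly.
        return await fn(*args, **kwargs)
    return wrapper


def choose_wrapper(fn):
    # A coroutine function must get an async wrapper; otherwise the caller
    # would receive an un-awaited coroutine object from the sync wrapper.
    if inspect.iscoroutinefunction(fn):
        return make_async_wrapper(fn)
    return make_sync_wrapper(fn)


def sync_call(x):
    return x + 1


async def async_call(x):
    return x + 1
```

With this selection, `choose_wrapper(async_call)` is itself a coroutine function, so callers can still `await` it transparently.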
🧹 Nitpick comments (8)
packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)
64-72: Prevent flaky KeyError on GOOGLE_API_KEY and make fixture independent of autouse ordering

Relying on an autouse fixture to set the env var can still be brittle in pytest's fixture execution order. Make the fixture resilient by reading the key via getenv once and reusing it in both branches.

Apply this diff within the fixture to avoid KeyError; no behavior change when the env var is present:

```diff
-    # Prefer new client over legacy
-    if genai_new:
-        client = genai_new.Client(api_key=os.environ["GOOGLE_API_KEY"])
+    # Prefer new client over legacy
+    api_key = os.environ.get("GOOGLE_API_KEY", "test_api_key")
+    if genai_new:
+        client = genai_new.Client(api_key=api_key)
         return client.models
     elif genai_legacy:
-        genai_legacy.configure(api_key=os.environ["GOOGLE_API_KEY"])
+        genai_legacy.configure(api_key=api_key)
         return genai_legacy.GenerativeModel("gemini-1.5-flash")
```

Optionally, to guarantee initialization order, make the fixture depend explicitly on the autouse environment fixture:
```python
@pytest.fixture
def genai_client(environment):
    ...
```

packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (2)
279-285: Return a consistent type from instrumentation_dependencies

Minor consistency: return the same sequence type across branches. Using tuples everywhere avoids surprises.

```diff
@@
-        if is_package_installed("google.genai"):
-            return ("google-genai >= 0.1.0",)
-        elif is_package_installed("google.generativeai"):
-            return ["google-generativeai >= 0.5.0"]
-        else:
-            return []
+        if is_package_installed("google.genai"):
+            return ("google-genai >= 0.1.0",)
+        elif is_package_installed("google.generativeai"):
+            return ("google-generativeai >= 0.5.0",)
+        else:
+            return ()
```
7-13: Avoid runtime imports purely for typing to reduce fragility

If GenerateContentResponse is only used for annotations, consider deferring its import with TYPE_CHECKING or enabling postponed evaluation of annotations. This avoids setting it to None and prevents import churn at runtime.

Option A (preferred): use postponed evaluation

```python
from __future__ import annotations
```

Option B: TYPE_CHECKING guard

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from google.generativeai.types.generation_types import (
        GenerateContentResponse as LegacyGenerateContentResponse,
    )
    from google.genai.types import (
        GenerateContentResponse as NewGenerateContentResponse,
    )

    GenerateContentResponse = LegacyGenerateContentResponse | NewGenerateContentResponse
```

packages/opentelemetry-instrumentation-google-generativeai/tests/test_import_compatibility.py (5)
1-7: Import pytest to enable conditional skipping

You're asserting both libraries are present; in typical user environments, only one may be installed. Add pytest and leverage skip conditions in the tests below.

```diff
 """Test that both legacy and new Google GenAI libraries can be imported and instrumented."""
+
+import pytest
```
9-13: Skip legacy detection test when the legacy package isn't installed

Asserting presence is brittle; skip instead to keep the suite green across environments.

```diff
+@pytest.mark.skipif(
+    not is_package_installed("google.generativeai"),
+    reason="google.generativeai not installed",
+)
 def test_legacy_library_detection():
     """Test that the legacy library can be detected."""
-    has_legacy = is_package_installed("google.generativeai")
-    assert has_legacy, "google.generativeai should be installed"
+    assert is_package_installed("google.generativeai")
```
15-19: Skip new library detection test when the new package isn't installed

Mirror the legacy case to avoid false negatives when only one lib is present.

```diff
+@pytest.mark.skipif(
+    not is_package_installed("google.genai"),
+    reason="google.genai not installed",
+)
 def test_new_library_detection():
     """Test that the new library can be detected."""
-    has_new = is_package_installed("google.genai")
-    assert has_new, "google.genai should be installed"
+    assert is_package_installed("google.genai")
```
21-34: Skip initialization test when neither library is installed

Keep the test suite environment-agnostic.

```diff
+@pytest.mark.skipif(
+    not (is_package_installed("google.genai") or is_package_installed("google.generativeai")),
+    reason="Neither google.genai nor google.generativeai is installed",
+)
 def test_instrumentor_initialization():
```
36-50: Skip wrapped-methods selection test when neither library is installed

Prevents failures on environments lacking both packages.

```diff
+@pytest.mark.skipif(
+    not (is_package_installed("google.genai") or is_package_installed("google.generativeai")),
+    reason="Neither google.genai nor google.generativeai is installed",
+)
 def test_wrapped_methods_selection():
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (2 hunks)
- packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (2 hunks)
- packages/opentelemetry-instrumentation-google-generativeai/tests/test_import_compatibility.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.py: Python code must conform to Flake8 linting rules
Do not hardcode API keys in source code; read them from environment variables or a secure vault
Files:
- packages/opentelemetry-instrumentation-google-generativeai/tests/test_import_compatibility.py
- packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py
- packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
🧬 Code Graph Analysis (1)
packages/opentelemetry-instrumentation-google-generativeai/tests/test_import_compatibility.py (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (3)
- GoogleGenerativeAiInstrumentor (271-328)
- instrumentation_dependencies (279-285)
- _wrapped_methods (287-293)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Lint
- GitHub Check: Build Packages (3.11)
🔇 Additional comments (1)
packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)
5-13: Dual import detection for legacy/new clients looks good

Graceful fallbacks are correct and keep tests portable across environments.
- Remove all backward compatibility code and legacy imports
- Update imports to use only the google-genai library
- Simplify instrumentor logic to use the new library exclusively
- Update pyproject.toml to depend only on google-genai >= 1.0.0
- Remove legacy test compatibility checks
- Add comprehensive tests for new library instrumentation
- Clean up unused imports and fix linting issues

This is a breaking change that removes support for the deprecated google-generativeai library. Users must upgrade to google-genai >= 1.0.0.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Important
Looks good to me! 👍
Reviewed 538d9f8 in 1 minute and 38 seconds.

- Reviewed 270 lines of code in 6 files
- Skipped 1 file when reviewing
- Skipped posting 6 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:7
- Draft comment: Legacy fallback for importing GenerateContentResponse has been removed. Ensure this change (dropping support for 'google.generativeai') is intentional given the PR description mentions backward compatibility.
- Reason this comment was not posted: the comment did not seem useful (usefulness confidence = 0% <= threshold 50%). It asks the author to confirm that the removal of a legacy fallback is intentional, which is against the rules, and it references the PR description, which we should not assume needs to be filled out.
2. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:257
- Draft comment: The instrumentation_dependencies function now always returns only 'google-genai >= 1.0.0', eliminating legacy support. Update documentation if legacy support is still desired.
- Reason this comment was not posted: the comment did not seem useful (usefulness confidence = 0% <= threshold 50%). It suggests a documentation update based on a change in the function's behavior, which implies the PR author should update documentation; that is not allowed by the rules.
3. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:261
- Draft comment: The _wrapped_methods function now only returns WRAPPED_METHODS for 'google.genai', removing the legacy branch. Ensure this aligns with intended support and update the PR description if needed.
- Reason this comment was not posted: the comment did not seem useful (usefulness confidence = 0% <= threshold 50%). It asks the author to confirm that the change aligns with intended support and suggests updating the PR description, both of which are against the rules, and it offers no specific code suggestion or test.
4. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py:5
- Draft comment:
The import has been switched to 'google.genai.types'. Confirm that removing legacy support here is intentional. - Reason this comment was not posted:
Comment looked like it was already resolved.
5. packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml:47
- Draft comment: The extras now only include 'google-genai', with 'google-generativeai' removed. Ensure that documentation reflects this removal of legacy dependency.
- Reason this comment was not posted: decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 10% vs. threshold = 50%). The comment asks for documentation updates rather than a code change, and asks to "ensure" something, which is a verification request we should avoid. The actual code change (removing the dependency) is clear and intentional. While documentation updates could matter to package users, the rules explicitly say not to request verification or documentation updates, so the comment should be deleted.
6. General:1
- Draft comment: The PR description mentions backward compatibility for both libraries, yet the legacy support code paths have been removed. Update the PR description and documentation to match the current behavior.
- Reason this comment was not posted: the comment was not on a location in the diff, so it can't be submitted as a review comment.
Actionable comments posted: 1
🧹 Nitpick comments (4)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py (2)
5-5: Avoid hard dependency on google-genai in a shared module used by both clients

Unconditionally importing GenerateContentResponse from google.genai.types forces the new library to be present even if only the legacy client is used. If fallback to legacy is still supported, decouple the type import from runtime by using a TYPE_CHECKING guard. This keeps typing without breaking environments that only have the legacy lib.

Is legacy fallback still expected? If yes, consider this change:

```diff
-from google.genai.types import GenerateContentResponse
+from typing import TYPE_CHECKING, Any
+
+# Avoid a hard dependency on google-genai solely for typing.
+if TYPE_CHECKING:
+    from google.genai.types import GenerateContentResponse  # pragma: no cover
+else:
+    GenerateContentResponse = Any  # type: ignore[misc, assignment]
```
60-69: Guard against finish_reason being None or non-enum

Depending on the library/version, candidate.finish_reason may be an enum, string, or None. Accessing .name can raise AttributeError. Consider a defensive conversion to a string.

Apply this tweak:

```diff
-            finish_reason=candidate.finish_reason.name,
+            finish_reason=getattr(candidate.finish_reason, "name", str(candidate.finish_reason or "UNKNOWN")),
```

packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)
56-57: Fixture returns Models; consider exposing async models too for broader test coverage

Returning client.models works for current tests. If you plan to add async coverage, you may also return or add a fixture for AsyncModels (if available via the client API), to validate async wrapping end-to-end.

packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (1)
8-41: Also assert async method wrapping to strengthen coverage

Great checks for Models.generate_content. Add AsyncModels.generate_content_async pre/post assertions to ensure the async path is wrapped and unwrapped too.

Apply this diff:

```diff
     # Import the library
     from google import genai
-    from google.genai.models import Models
+    from google.genai.models import Models, AsyncModels
@@
     # Verify methods are not wrapped initially
     assert not hasattr(Models.generate_content, '__wrapped__')
+    assert not hasattr(AsyncModels.generate_content_async, '__wrapped__')
@@
     # Verify methods are now wrapped
     assert hasattr(Models.generate_content, '__wrapped__')
+    assert hasattr(AsyncModels.generate_content_async, '__wrapped__')
@@
     # Verify methods are unwrapped
     assert not hasattr(Models.generate_content, '__wrapped__')
+    assert not hasattr(AsyncModels.generate_content_async, '__wrapped__')
```
📜 Review details
Configuration used: CodeRabbit UI
⛔ Files ignored due to path filters (1)
packages/opentelemetry-instrumentation-google-generativeai/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
- packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (3 hunks)
- packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py (1 hunks)
- packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (2 hunks)
- packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.py: Python code must conform to Flake8 linting rules
Do not hardcode API keys in source code; read them from environment variables or a secure vault
Files:
- packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py
- packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py
- packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py
🧬 Code Graph Analysis (1)
packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (3)
- GoogleGenerativeAiInstrumentor (249-296)
- instrumentation_dependencies (257-258)
- _wrapped_methods (260-261)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Lint
- GitHub Check: Build Packages (3.11)
🔇 Additional comments (3)
packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)
5-5: LGTM: correct migration to the new import path

The switch to `from google import genai` aligns with the new client package structure.

packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (2)
44-50: LGTM: dependency assertion matches instrumentor contract

The test correctly validates that the instrumentor advertises only `google-genai >= 1.0.0` as a dependency.
52-67: LGTM: wrapped-methods introspection is precise

Asserting that both `Models` and `AsyncModels` come from `google.genai` keeps the test focused and resilient.
```diff
 [tool.poetry.extras]
-instruments = ["google-generativeai", "google-genai"]
+instruments = ["google-genai"]
```
💡 Verification agent
🧩 Analysis chain
Confirm intent: dropping legacy package from extras may surprise users relying on [instruments] to install both.
If backward compatibility with google-generativeai is still intended at runtime, consider keeping it in the instruments extra (extras are optional; this won't force installations but helps users who expect it). Otherwise, update docs to clarify only google-genai is installed via extras.
You can verify how often users rely on the extras and whether legacy support is still present in the code by running:
If you decide to keep legacy in extras, here's the minimal change:

```diff
 [tool.poetry.extras]
-instruments = ["google-genai"]
+instruments = ["google-genai", "google-generativeai"]
```

🏁 Script executed:
```bash
#!/bin/bash
# Inspect instrumentation dependencies and wrappers to confirm legacy support
rg -nP -C2 'def instrumentation_dependencies\('
rg -nP -C3 'WRAPPED_METHODS\s*='
rg -nP -C3 'google\.genai|google\.generativeai' opentelemetry/instrumentation/google_generativeai
```

Length of output: 39844
Confirm extras match implementation and update docs
The instrumentation_dependencies method in google_generativeai/__init__.py only returns "google-genai >= 1.0.0", and all wrapped methods reference google.genai.*—not the legacy google.generativeai package. Dropping the old package from the instruments extra is therefore correct.
Please update documentation (e.g. README, docs site) to state that the instruments extra now only installs google-genai and that legacy google-generativeai support has been removed.
🤖 Prompt for AI Agents
In packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml
around line 47, the instruments extra currently lists the legacy package but the
implementation only depends on "google-genai >= 1.0.0" and references
google.genai.*; update project documentation (README.md, docs site pages, and
any packaging or changelog entries) to state that the instruments extra now
installs only google-genai (not google-generativeai), that legacy
google-generativeai support has been removed, and confirm the extras/spec in
pyproject.toml matches the implementation (ensure instruments = ["google-genai"]
and update any usage examples, install commands, and migration notes
accordingly).
- Run poetry lock to sync with pyproject.toml changes
- Update to latest compatible versions of dependencies
- Fixes CI build failures due to lock file mismatch

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Skipped PR review on 97fd2ee because no changed files had a supported extension. If you think this was in error, please contact us and we'll fix it right away.
Summary
- Migrates instrumentation from `google-generativeai` to the new `googleapis/python-genai` library

Changes Made

- Graceful fallback for the `GenerateContentResponse` type
- Removed duplicate `WRAPPED_METHODS` definitions
- `_instrument` method now uses `self._wrapped_methods()` instead of a hardcoded list
- Test configuration prefers the new `google-genai`

Test Results

Migration Details

The instrumentation now:

- Uses the new `google-genai` library when available
- Falls back to `google-generativeai` for backward compatibility

🤖 Generated with Claude Code
Important
Migrates Google Generative AI instrumentation to `googleapis/python-genai`, ensuring backward compatibility and fixing bugs, with updated and new tests for both libraries.

- Migrates from `google-generativeai` to `googleapis/python-genai`
- Prefers `google-genai` but falls back to `google-generativeai` if needed
- Updates imports in `__init__.py` and `event_emitter.py`
- Uses `_wrapped_methods()` for dynamic method selection in `__init__.py`
- Graceful fallback for `GenerateContentResponse`
- Removes duplicate `WRAPPED_METHODS` definitions
- Updates `conftest.py` and adds `test_new_library_instrumentation.py`
- Removes `google-generativeai` from `pyproject.toml` dependencies

This description was created by … for 538d9f8. You can customize this summary. It will automatically update as commits are pushed.