fix(ollama): support Older Version Ollama #3501
Conversation
Walkthrough: The changes expand Ollama instrumentation compatibility to older versions by relaxing the version constraint from `ollama >= 0.4.0, < 1` to `ollama >= 0.1.0, < 1`.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
resolved #3325
Important
Looks good to me! 👍
Reviewed everything up to 46620ee in 1 minute and 56 seconds. Click for details.
- Reviewed 71 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py:48
- Draft comment:
Updated `_instruments` version range from 'ollama >= 0.4.0, < 1' to 'ollama >= 0.1.0, < 1' to support older versions. Ensure that this broader range is intentional and that older versions work correctly with the instrumentation.
- Reason this comment was not posted: Comment did not seem useful. Confidence: useful = 0% <= threshold 50%. The comment is asking the PR author to ensure that the broader version range is intentional and that older versions work correctly. This falls under asking the author to ensure behavior is intended and tested, which violates the rules.
2. packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py:500
- Draft comment:
The try/except block around wrapping _copy_messages guards against missing attributes in older Ollama versions. Consider logging a debug message on exceptions to aid troubleshooting, rather than silently passing. - Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 25% vs. threshold = 50%.
This comment is suggesting a code quality improvement (adding debug logging) rather than pointing out a bug or required change. The rules state that comments suggesting code quality refactors are good "only if they are actionable and clear." However, the rules also emphasize "Do NOT comment unless there is clearly a code change required." The current code works correctly: it silently handles missing attributes, which is a valid design choice. Adding logging is a "nice to have" rather than a "must have." The comment uses "Consider", which is suggestive rather than directive. This seems like an optional enhancement rather than a required change.
The comment is actionable and clear; it specifically suggests adding debug logging. This could be considered a legitimate code quality improvement that aids troubleshooting. Silent failures can make debugging difficult, and logging at debug level wouldn't be intrusive. This might fall under the category of "actionable and clear" refactoring suggestions that are allowed.
While the suggestion is clear, it's still optional rather than required. The code is handling expected conditions (older versions not having this function), not actual errors. The comment says "Consider", which indicates it's a suggestion, not a requirement. The rules emphasize removing comments unless there's "clearly a code change required" and to "assume the comment is incorrect or not useful" by default. Silent handling of version differences is a common and acceptable pattern.
This comment suggests an optional code quality improvement (adding debug logging) rather than identifying a required change. While it's actionable, it's not necessary: the code correctly handles version differences. Following the principle of removing comments unless there's strong evidence they're required, this should be deleted.
3. packages/opentelemetry-instrumentation-ollama/tests/test_chat.py:159
- Draft comment:
Test cases now supply tool_calls 'arguments' as a dictionary instead of a JSON string. Confirm that the instrumentation handles both inputs correctly and that downstream expectations (e.g. serialized output) remain consistent.
- Reason this comment was not posted: Comment did not seem useful. Confidence: useful = 0% <= threshold 50%. The comment is asking the PR author to confirm that the instrumentation handles both inputs correctly and that downstream expectations remain consistent. This falls under the category of asking the author to ensure behavior is intended and tested, which violates the rules.
Actionable comments posted: 2
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py (1)
500-514: Simplify redundant nested try-except blocks. The nested try-except structure catches the same exceptions (`ImportError`, `AttributeError`) at both levels, and the `hasattr` check already handles the case where `_copy_messages` doesn't exist. This can be simplified. Apply this diff to simplify the logic:

```diff
-        # Patch _copy_messages to sanitize tool_calls arguments before Pydantic validation
-        # Only wrap if it exists (not available in older versions of ollama)
-        try:
-            import ollama._client as ollama_client
-            if hasattr(ollama_client, "_copy_messages"):
-                try:
-                    wrap_function_wrapper(
-                        "ollama._client",
-                        "_copy_messages",
-                        _sanitize_copy_messages,
-                    )
-                except (AttributeError, ImportError):
-                    pass
-        except (ImportError, AttributeError):
-            # _copy_messages not available in older versions, skip it
-            pass
+        # Patch _copy_messages to sanitize tool_calls arguments before Pydantic validation
+        # Only wrap if it exists (not available in older versions of ollama)
+        try:
+            import ollama._client as ollama_client
+            if hasattr(ollama_client, "_copy_messages"):
+                wrap_function_wrapper(
+                    "ollama._client",
+                    "_copy_messages",
+                    _sanitize_copy_messages,
+                )
+        except (ImportError, AttributeError):
+            # _copy_messages not available in older versions, skip it
+            pass
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py (2 hunks)
- packages/opentelemetry-instrumentation-ollama/tests/test_chat.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules
Files:
- packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py
- packages/opentelemetry-instrumentation-ollama/tests/test_chat.py
🧬 Code graph analysis (1)
packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py (1)
packages/opentelemetry-instrumentation-ollama/tests/conftest.py (1)
ollama_client (70-71)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Test Packages (3.10)
- GitHub Check: Test Packages (3.12)
- GitHub Check: Test Packages (3.11)
- GitHub Check: Build Packages (3.11)
- GitHub Check: Lint
```diff
 logger = logging.getLogger(__name__)

-_instruments = ("ollama >= 0.4.0, < 1",)
+_instruments = ("ollama >= 0.1.0, < 1",)
```
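To make the relaxed specifier concrete, here is a rough sketch of what `"ollama >= 0.1.0, < 1"` admits. `in_range` is a hypothetical helper for illustration only; real dependency resolution follows PEP 440 semantics (e.g. via the `packaging` library), which this simplified tuple comparison only approximates for plain X.Y.Z versions:

```python
def in_range(version, lower=(0, 1, 0), upper=(1,)):
    """Approximate the 'ollama >= 0.1.0, < 1' specifier with tuple comparison.

    Simplified illustration only; PEP 440 handles pre-releases, epochs, etc.
    """
    parts = tuple(int(p) for p in version.split("."))
    return lower <= parts < upper

# Versions discussed in this review:
assert in_range("0.1.0")      # now allowed by the relaxed constraint
assert in_range("0.4.0")      # the previous minimum
assert in_range("0.6.1")      # latest stable at review time
assert not in_range("1.0.0")  # excluded by '< 1'
```

The one-line diff therefore widens the supported window from three minor series (0.4-0.6) to six (0.1-0.6), which is what the review comments below push back on.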
🧩 Analysis chain
🌐 Web query:
Ollama Python client breaking changes version history 0.1.0 0.4.0
💡 Result:
Summary of notable breaking / incompatible changes in the Ollama Python client from v0.1.0 → v0.4.0 (high-level):
- v0.4.0 (Nov 2024): Introduced function-calling / "functions as tools" API, full typing support, and structured outputs — this changed how you pass callable tools and how outputs can be represented/consumed by client code (migration required for code that used older tool/call patterns). [1][2]
- v0.2.x → v0.3.x (mid-2024): Added general tool/MCP support and improved chat/history behavior (these enhancements changed request/response shapes and chat context handling for clients). [3]
- Compatibility note / found issues: After 0.4.x there were compatibility problems reported with some integrations (example: langchain-ollama using ollama==0.4.3 caused validation errors), showing downstream libraries may need updates when upgrading the client. Test integrations after upgrading. [4]
Sources
- Ollama blog: "Ollama Python library 0.4 with function calling improvements" (Nov 25, 2024). [1]
- Ollama blog (site index showing 0.4 announcement). [2]
- Release history listing for the ollama Python package (shows v0.2.0, v0.3.0, v0.4.0 timeline). [3]
- GitHub issue reporting integration break with langchain on ollama 0.4.3. [4]
If you want, I can:
- extract exact API/method changes and example before→after code snippets (so you can migrate), or
- gather the full changelog entries for each patch/minor release between 0.1.0 and 0.4.0.
🌐 Web query:
latest stable Ollama Python package version 2024 2025
💡 Result:
The latest stable Ollama Python package version is 0.6.1 (released Nov 13, 2025). [1][2]
Sources:
- Ollama GitHub / PyPI listings. [1] GitHub: ollama/ollama-python. [2] PyPI release history / release tracker.
🏁 Script executed:
```shell
# Inspect the instrumentation implementation to understand what APIs are being used
find . -type f -name "*.py" -path "*ollama*" | head -20
```

Repository: traceloop/openllmetry
Length of output: 186
🏁 Script executed:
```shell
# Check the instrumentation code and test files
ls -la packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/
```

Repository: traceloop/openllmetry
Length of output: 186
Version constraint relaxation introduces compatibility risk due to breaking API changes.
Relaxing the constraint from >= 0.4.0 to >= 0.1.0 is problematic: Ollama's Python client underwent significant breaking changes across these versions:
- v0.4.0 (Nov 2024): Introduced function-calling and structured output APIs with incompatible tool/call patterns requiring code migration
- v0.2.x–v0.3.x (mid-2024): Added tool/MCP support and changed request/response shapes, including chat context handling
Supporting versions 0.1.0–0.3.x requires the instrumentation to handle multiple incompatible API variants. Additionally, the latest stable Ollama version is 0.6.1 (Nov 2025), significantly newer than the prior constraint of 0.4.0. If this relaxation is intentional, verify that:
- The instrumentation implements version-aware handling for API differences
- Tests cover at least the boundary versions (0.1.0, 0.3.x, 0.4.0, and current 0.6.1)
- Downstream compatibility is confirmed (e.g., langchain-ollama integration issues were reported with 0.4.3)
🤖 Prompt for AI Agents
In
packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py
around line 48, the instrumentation broadens the supported Ollama versions to
"ollama >= 0.1.0, < 1", which introduces compatibility risk due to breaking API
changes across 0.1.x–0.6.x; either tighten the constraint back to a safe minimum
(e.g., "ollama >= 0.4.0, < 1") or implement version-aware handling and tests:
detect ollama.__version__ at runtime, branch logic to support distinct
request/response shapes for the old (0.1–0.3), mid (0.4.x) and current (0.6.x)
APIs, add unit/integration tests covering boundary versions (0.1.0, 0.3.x,
0.4.0, 0.6.1) and update the package metadata and changelog to document
supported versions and any required migrations.
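The "version-aware handling" the prompt asks for could be sketched as a runtime branch on the installed client version. This is a hypothetical outline, not the PR's implementation: `select_api_variant` and the variant names are invented for illustration, and the boundaries come from the breaking-change history quoted above (in real code the version would come from something like `importlib.metadata.version("ollama")`):

```python
def parse_version(text):
    """Parse an 'X.Y.Z' version string into a comparable tuple."""
    return tuple(int(part) for part in text.split(".")[:3])

def select_api_variant(ollama_version):
    """Pick a handling strategy based on the version boundaries named in this review."""
    v = parse_version(ollama_version)
    if v < (0, 3, 0):
        return "legacy"   # 0.1.x-0.2.x: no tool_calls support at all
    if v < (0, 4, 0):
        return "tools"    # 0.3.x: tool calling added
    return "typed"        # 0.4.x and later: typed responses / structured outputs

assert select_api_variant("0.1.0") == "legacy"
assert select_api_variant("0.3.1") == "tools"
assert select_api_variant("0.6.1") == "typed"
```

Without some branch of this shape, a single code path has to tolerate all three request/response shapes at once, which is the compatibility risk the comment flags.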
```diff
 "function": {
     "name": "get_current_weather",
-    "arguments": '{"location": "San Francisco"}',
+    "arguments": {"location": "San Francisco"},
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's look at the test file and understand the context around the specified lines
cd packages/opentelemetry-instrumentation-ollama/tests && wc -l test_chat.py
```

Repository: traceloop/openllmetry
Length of output: 186

🏁 Script executed:

```shell
# Read the relevant sections of the test file
sed -n '155,175p' packages/opentelemetry-instrumentation-ollama/tests/test_chat.py
```

Repository: traceloop/openllmetry
Length of output: 186

🏁 Script executed:

```shell
# Also check lines 221 and 296
sed -n '215,230p' packages/opentelemetry-instrumentation-ollama/tests/test_chat.py
```

Repository: traceloop/openllmetry
Length of output: 186

🏁 Script executed:

```shell
# Check line 296
sed -n '290,305p' packages/opentelemetry-instrumentation-ollama/tests/test_chat.py
```

Repository: traceloop/openllmetry
Length of output: 186

🏁 Script executed:

```shell
# Search for _sanitize_copy_messages function to understand what preprocessing is done
rg -n "_sanitize_copy_messages" --type=py -A 10
```

Repository: traceloop/openllmetry
Length of output: 186

🏁 Script executed:

```shell
# Search for tool_calls arguments handling in the codebase
rg -n "tool_calls.*arguments" --type=py -C 3
```

Repository: traceloop/openllmetry
Length of output: 186

🏁 Script executed:

```shell
# Look for version constraints and compatibility handling
rg -n "ollama.*version\|>= 0\." --type=py -C 2 | head -50
```

Repository: traceloop/openllmetry
Length of output: 186
🌐 Web query:
Ollama version history tool_calls arguments format compatibility 0.1.0 0.3.x
💡 Result:
Short answer: Not compatible — Ollama added first-class tool calling in the 0.3.x series (July 2024) and its responses use a new tool_calls structure (assistant.message.tool_calls → items like { function: { name, arguments } }). Versions around 0.1.0 don’t include that tool_calls API, so code expecting the 0.3.x format will not work on 0.1.x. See Ollama’s tool-support blog and docs and the GitHub discussion showing the {function:{name,arguments}} shape and related compatibility issues. [1][2][3]
Sources
- Ollama blog: Tool support / tool calling (announcement, example showing message.tool_calls). [1]
- Ollama docs: Tool calling examples using tc.function.arguments and message.tool_calls. [2]
- GitHub issue demonstrating tool_calls response shape and compatibility concerns. [3]
🌐 Web query:
Ollama API tool_calls arguments dict vs JSON string
💡 Result:
Short answer: Ollama's server often returns tool call "arguments" as a JSON string (e.g. '{"city":"NYC"}'), while client libraries (ollama-python, Pydantic models) expect a dict. That mismatch causes validation errors; the fix is to parse the string to a dict before using it (or patch the client to accept/deserialize strings). [1][2][3]
What to do (Python): if the arguments arrive as a JSON string, deserialize them to a dict before use:

```python
import json

args = tc.function.arguments
if isinstance(args, str):
    args = json.loads(args)
result = my_tool(**args)
```
Notes:
- This behavior is documented/observed in the Ollama docs and reported in the ollama-python GitHub issue; the OpenAI-compat layer also converts internal maps to JSON strings for the OpenAI-format output. [1][2][3]
🌐 Web query:
Ollama instrumentation opentelemetry version constraint breaking changes
💡 Result:
Short answer: Ollama-specific OpenTelemetry instrumentation packages have strict upstream version constraints (they track OpenTelemetry contrib releases), so when OpenTelemetry SDK/semconv/etc. advance you can hit compatibility/breaking issues — update either the opentelemetry packages to the exact versions the ollama instrumentation expects or install a matching opentelemetry-instrumentation-ollama release. See the instrumentation package releases and OpenTelemetry versioning guidance. [1][2][3]
What to do (brief)
- Check the opentelemetry-instrumentation-ollama release you’re using and its required opentelemetry package versions (install the matching OTEL API/SDK/semconv). [1]
- If you need a newer OpenTelemetry, upgrade to a newer opentelemetry-instrumentation-ollama release that targets that OTEL version (or pin OTEL to the older compatible version). [1][2]
- If you hit missing/changed API or semantic-convention fields, follow OpenTelemetry’s versioning/compatibility notes — contrib/instrumentation may need updates when OTEL has breaking changes. [2]
- Track the Ollama/OpenTelemetry integration issue and related PRs for upstream support and fixes. [3]
Sources
- opentelemetry-instrumentation-ollama releases listing. [1]
- OpenTelemetry versioning and stability guidance (breaking-change policy for API/SDK/contrib). [2]
- Ollama GitHub issue requesting OpenTelemetry tracing support (discussion/PRs tracking integration). [3]
The version constraint >= 0.1.0 is incompatible with tool_calls testing; Ollama's tool calling API was added in version 0.3.x (July 2024).
Versions 0.1.0–0.2.x do not include the tool_calls structure at all, so this test will fail against older Ollama releases. Either constrain the version to >= 0.3.0 or add version guards in tests to skip tool_calls validation for unsupported versions.
🤖 Prompt for AI Agents
In packages/opentelemetry-instrumentation-ollama/tests/test_chat.py around line
162, the test assumes Ollama's tool_calls API exists but the dependency
constraint >= 0.1.0 allows older Ollama releases (0.1.0–0.2.x) that lack
tool_calls; update the test to either tighten the version to >= 0.3.0 or add a
runtime version check that skips tool_calls assertions when Ollama < 0.3.0
(e.g., detect ollama version and use pytest.skip or branch assertions
accordingly).
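The version-gated skip the prompt suggests could look roughly like the following. This is a sketch under stated assumptions, not the PR's test code: `INSTALLED_OLLAMA` is a hypothetical stand-in (real tests would derive it from the installed client, e.g. `importlib.metadata.version("ollama")`), and the 0.3.0 boundary comes from the tool-calling history quoted above. It uses the stdlib `unittest.skipIf` rather than `pytest.skip`, but the shape is the same:

```python
import json
import unittest

# Hypothetical stand-in: real tests would read the installed client version,
# e.g. via importlib.metadata.version("ollama").
INSTALLED_OLLAMA = (0, 4, 0)

class ToolCallsTest(unittest.TestCase):
    @unittest.skipIf(INSTALLED_OLLAMA < (0, 3, 0),
                     "tool_calls API was added in the ollama 0.3.x series")
    def test_arguments_accept_dict_or_json_string(self):
        # The PR switches fixtures to dicts; the server may still emit strings,
        # so the assertion tolerates both forms.
        for raw in ({"location": "San Francisco"},
                    '{"location": "San Francisco"}'):
            args = json.loads(raw) if isinstance(raw, str) else raw
            self.assertEqual(args["location"], "San Francisco")
```

With a guard like this, the tool_calls tests simply report as skipped on 0.1.x-0.2.x clients instead of failing, which keeps the broadened `>= 0.1.0` constraint honest.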
Suggested title formats: `feat(instrumentation): ...` or `fix(instrumentation): ...`.
Important
Extend Ollama instrumentation support to older versions by adjusting version constraints and handling missing methods.
- Updated `_instruments` in `__init__.py` to support Ollama versions `>= 0.1.0, < 1`.
- Updated `_instrument()` in `__init__.py` to conditionally wrap `_copy_messages` only if it exists, handling older versions where it might be absent.
- Updated `test_chat.py` to change `arguments` from string to dictionary in `test_ollama_chat_tool_calls_legacy`, `test_ollama_chat_tool_calls_with_events_with_content`, and `test_ollama_chat_tool_calls_with_events_with_no_content`.
This description was created by
for 46620ee. You can customize this summary. It will automatically update as commits are pushed.