fix(ollama): pass tools to client.chat and parse tool_calls from response #4176
Merged
kartik-mem0 merged 2 commits into mem0ai:main on Mar 18, 2026
Conversation
…onse

OllamaLLM.generate_response() never forwarded the tools parameter to ollama.Client.chat(), and _parse_response() hard-coded tool_calls to an empty list. This caused graph memory entity extraction to silently return zero results when using Ollama as the LLM provider.

- Forward tools to client.chat() when provided
- Parse tool_calls from response (supports both dict and object formats)
- Handle string arguments via json.loads

Fixes mem0ai#4175

Signed-off-by: sxu75374 <imshuaixu@gmail.com>
Signed-off-by: Small <imshuaixu@gmail.com>
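For reference, a dict-format tool-call response from ollama.Client.chat() has roughly the shape below (values are illustrative, not taken from the PR); newer client versions return an object exposing the same fields as attributes, which is why the parser accepts both formats:

# Illustrative dict-format response from ollama.Client.chat() when tools are supplied.
# Newer ollama client versions return an object with message.tool_calls attributes instead.
response = {
    "message": {
        "content": "",
        "tool_calls": [
            {
                "function": {
                    "name": "extract_entities",  # hypothetical tool name
                    # "arguments" may also arrive as a JSON string, hence the json.loads handling.
                    "arguments": {"entities": ["Alice", "tennis"]},
                }
            }
        ],
    },
}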
sxu75374 (Contributor, Author)
Just signed the CLA — should be good to go now! 👍
kartik-mem0 requested changes on Mar 18, 2026
Contributor
@sxu75374, Thanks for the PR! The fix correctly addresses the issue — just one small change needed.
Please use extract_json() before json.loads() to match the pattern in all other LLM providers (openai.py, groq.py, together.py, lmstudio.py):
from mem0.memory.utils import extract_json
Change line 82:
arguments = json.loads(extract_json(arguments))
This strips markdown code fences that smaller models sometimes wrap around JSON output. Without it, parsing will crash in those cases.
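As a minimal sketch of the requested pattern (the _normalize_arguments helper is hypothetical, introduced only to show where extract_json fits; the actual change lands inside _parse_response):

import json

from mem0.memory.utils import extract_json


def _normalize_arguments(arguments):
    # Tool-call arguments can arrive as a dict or as a JSON string. Smaller models
    # sometimes wrap the string in markdown code fences, so strip them with
    # extract_json() before json.loads(), matching the other LLM providers.
    if isinstance(arguments, str):
        return json.loads(extract_json(arguments))
    return arguments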
kartik-mem0 approved these changes on Mar 18, 2026
kartik-mem0 (Contributor)
Thank you for your contribution @sxu75374! There was a slight comment which I resolved; rest of the PR LGTM!
jamebobob pushed a commit to jamebobob/mem0-vigil-recall that referenced this pull request on Mar 29, 2026:
…onse (mem0ai#4176)
Signed-off-by: sxu75374 <imshuaixu@gmail.com>
Signed-off-by: Small <imshuaixu@gmail.com>
Co-authored-by: kartik-mem0 <kartik.labhshetwar@mem0.ai>
Summary
OllamaLLM.generate_response() never forwarded the tools parameter to ollama.Client.chat(), and _parse_response() hard-coded tool_calls to an empty list. This caused graph memory entity extraction to silently return zero results when using Ollama as the LLM provider. Ollama has supported tool calling since mid-2024.
Changes
- mem0/llms/ollama.py: Forward tools to client.chat() when provided; parse tool_calls from response (supports both dict and object formats); handle string arguments via json.loads
- tests/llms/test_ollama.py: Add 5 tests covering tool passing, empty tool_calls, string arguments, and object-style responses
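A rough sketch of the two OllamaLLM methods after the fix. The method signatures and the {"content": ..., "tool_calls": [...]} return schema are assumptions based on the other mem0 providers, not copied from the diff:

import json

from mem0.memory.utils import extract_json


def generate_response(self, messages, response_format=None, tools=None, tool_choice="auto"):
    params = {"model": self.config.model, "messages": messages}
    if response_format:
        params["format"] = response_format
    if tools:
        # Previously never forwarded; now passed through to the Ollama API.
        params["tools"] = tools
    response = self.client.chat(**params)
    return self._parse_response(response, tools)


def _parse_response(self, response, tools):
    # Support dict responses (older clients) and object responses (newer clients).
    message = response["message"] if isinstance(response, dict) else response.message
    content = message["content"] if isinstance(message, dict) else message.content
    if not tools:
        return content

    raw_calls = (message.get("tool_calls") if isinstance(message, dict) else message.tool_calls) or []
    tool_calls = []
    for call in raw_calls:
        fn = call["function"] if isinstance(call, dict) else call.function
        name = fn["name"] if isinstance(fn, dict) else fn.name
        arguments = fn["arguments"] if isinstance(fn, dict) else fn.arguments
        if isinstance(arguments, str):
            # String arguments may be markdown-fenced JSON from smaller models.
            arguments = json.loads(extract_json(arguments))
        tool_calls.append({"name": name, "arguments": arguments})
    return {"content": content, "tool_calls": tool_calls}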
Testing
All 5 new tests pass. The existing test is unchanged and still passes.
Fixes #4175
Signed-off-by: sxu75374 <imshuaixu@gmail.com>