
fix(ollama): pass tools to client.chat and parse tool_calls from response#4176

Merged
kartik-mem0 merged 2 commits into mem0ai:main from sxu75374:fix/ollama-tool-calling
Mar 18, 2026

Conversation

Contributor

@sxu75374 sxu75374 commented Mar 1, 2026

Summary

OllamaLLM.generate_response() never forwarded the tools parameter to ollama.Client.chat(), and _parse_response() hard-coded tool_calls to an empty list. This caused graph memory entity extraction to silently return zero results when using Ollama as the LLM provider.

Ollama has supported tool calling since mid-2024.

Changes

  • mem0/llms/ollama.py: Forward tools to client.chat() when provided; parse tool_calls from response (supports both dict and object formats); handle string arguments via json.loads
  • tests/llms/test_ollama.py: Add 5 tests covering tool passing, empty tool_calls, string arguments, and object-style responses
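The parsing change described above can be sketched roughly as follows. This is a simplified, standalone illustration based on the PR summary, not the actual diff: the helper name `parse_tool_calls` is hypothetical, and the exact dict/object field access in `mem0/llms/ollama.py` may differ.

```python
import json
from types import SimpleNamespace


def parse_tool_calls(response):
    """Extract tool calls from an Ollama chat response.

    Handles both dict-style responses ({"message": {"tool_calls": [...]}})
    and object-style responses (response.message.tool_calls), the two
    formats the PR says are supported.
    """
    if isinstance(response, dict):
        message = response.get("message", {})
        raw_calls = message.get("tool_calls") or []
    else:
        message = getattr(response, "message", None)
        raw_calls = getattr(message, "tool_calls", None) or []

    tool_calls = []
    for call in raw_calls:
        fn = call["function"] if isinstance(call, dict) else call.function
        name = fn["name"] if isinstance(fn, dict) else fn.name
        args = fn["arguments"] if isinstance(fn, dict) else fn.arguments
        # Ollama may return arguments as a JSON string; decode it.
        if isinstance(args, str):
            args = json.loads(args)
        tool_calls.append({"name": name, "arguments": args})
    return tool_calls


# Dict-style response with string arguments
dict_resp = {
    "message": {
        "tool_calls": [
            {"function": {"name": "add_entity", "arguments": '{"entity": "Alice"}'}}
        ]
    }
}
print(parse_tool_calls(dict_resp))
# [{'name': 'add_entity', 'arguments': {'entity': 'Alice'}}]

# Object-style response with dict arguments
obj_resp = SimpleNamespace(
    message=SimpleNamespace(
        tool_calls=[
            SimpleNamespace(
                function=SimpleNamespace(name="add_entity", arguments={"entity": "Bob"})
            )
        ]
    )
)
print(parse_tool_calls(obj_resp))
# [{'name': 'add_entity', 'arguments': {'entity': 'Bob'}}]
```

A response with no `tool_calls` key simply yields an empty list, which matches the empty-tool_calls test case above.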

Testing

All 5 new tests pass; the existing test is unchanged and still passes.

tests/llms/test_ollama.py::test_generate_response_without_tools PASSED
tests/llms/test_ollama.py::test_generate_response_with_tools_passes_tools_to_client PASSED
tests/llms/test_ollama.py::test_generate_response_with_tools_no_tool_calls_in_response PASSED
tests/llms/test_ollama.py::test_generate_response_with_tools_string_arguments PASSED
tests/llms/test_ollama.py::test_parse_response_with_tools_object_style PASSED

Fixes #4175

Signed-off-by: sxu75374 imshuaixu@gmail.com

fix(ollama): pass tools to client.chat and parse tool_calls from response

OllamaLLM.generate_response() never forwarded the tools parameter to
ollama.Client.chat(), and _parse_response() hard-coded tool_calls to an
empty list. This caused graph memory entity extraction to silently return
zero results when using Ollama as the LLM provider.

- Forward tools to client.chat() when provided
- Parse tool_calls from response (supports both dict and object formats)
- Handle string arguments via json.loads

Fixes mem0ai#4175

Signed-off-by: sxu75374 <imshuaixu@gmail.com>
Signed-off-by: Small <imshuaixu@gmail.com>
@CLAassistant

CLAassistant commented Mar 1, 2026

CLA assistant check
All committers have signed the CLA.

@sxu75374
Contributor Author

sxu75374 commented Mar 1, 2026

Just signed the CLA — should be good to go now! 👍

Contributor

@kartik-mem0 kartik-mem0 left a comment


@sxu75374, Thanks for the PR! The fix correctly addresses the issue — just one small change needed.
Please use extract_json() before json.loads() to match the pattern in all other LLM providers (openai.py, groq.py, together.py, lmstudio.py):

from mem0.memory.utils import extract_json

Change line 82:
arguments = json.loads(extract_json(arguments))

This strips markdown code fences that smaller models sometimes wrap around JSON output. Without it, parsing will crash in those cases.
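To illustrate why the fence-stripping step matters: calling `json.loads` directly on fenced output raises `JSONDecodeError`. The sketch below is a hypothetical minimal stand-in for `mem0.memory.utils.extract_json` (the real helper's implementation may differ); it only demonstrates the failure mode and the fix.

```python
import json
import re


def extract_json_sketch(text: str) -> str:
    """Strip markdown code fences that smaller models sometimes wrap
    around JSON output (a simplified stand-in for mem0's extract_json)."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text.strip(), re.DOTALL)
    return match.group(1) if match else text.strip()


fenced = '```json\n{"name": "search", "query": "Alice"}\n```'

# json.loads(fenced) would raise json.JSONDecodeError;
# stripping the fences first makes parsing succeed:
args = json.loads(extract_json_sketch(fenced))
print(args)  # {'name': 'search', 'query': 'Alice'}
```

Plain, unfenced JSON passes through unchanged, so applying the helper unconditionally is safe.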

@kartik-mem0 kartik-mem0 self-requested a review March 18, 2026 13:35
@kartik-mem0
Contributor

Thank you for your contribution @sxu75374!

There was a slight comment, which I resolved; the rest of the PR LGTM!

@kartik-mem0 kartik-mem0 merged commit d7a34c2 into mem0ai:main Mar 18, 2026
8 checks passed
jamebobob pushed a commit to jamebobob/mem0-vigil-recall that referenced this pull request Mar 29, 2026
fix(ollama): pass tools to client.chat and parse tool_calls from response (mem0ai#4176)

Signed-off-by: sxu75374 <imshuaixu@gmail.com>
Signed-off-by: Small <imshuaixu@gmail.com>
Co-authored-by: kartik-mem0 <kartik.labhshetwar@mem0.ai>


Development

Successfully merging this pull request may close these issues.

OllamaLLM never passes tools to Ollama — graph entity extraction silently fails

3 participants