Bug
mem0/llms/ollama.py never passes the tools parameter to self.client.chat(), and _parse_response() hard-codes tool_calls to an empty list with the comment:
# Ollama doesn't support tool calls in the same way, so we return the content
This means graph memory entity extraction silently produces zero results when using Ollama as the LLM provider. The _retrieve_nodes_from_data() method in graph_memory.py sends tools to the LLM, but the Ollama wrapper drops them.
Ollama has supported tool calling since mid-2024.
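For context, a tool-calling response from Ollama's /api/chat carries the calls under message.tool_calls, with arguments already decoded into a dict (unlike the OpenAI API, where arguments is a JSON string). A sketch of that shape with made-up sample values (the tool name extract_entities is illustrative, not mem0's actual tool):

```python
# Illustrative shape of an Ollama /api/chat response when the model calls a
# tool. Field names follow Ollama's API; sample values are made up.
sample_response = {
    "model": "ministral-3:3b",
    "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "function": {
                    "name": "extract_entities",  # hypothetical tool name
                    # `arguments` is a dict, not a JSON string
                    "arguments": {"entities": [{"source": "alice", "relationship": "teaches", "target": "cse_151a"}]},
                }
            }
        ],
    },
    "done": True,
}

# This is exactly the data the current wrapper discards:
calls = sample_response["message"].get("tool_calls", [])
print(len(calls), calls[0]["function"]["name"])  # 1 extract_entities
```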
Steps to Reproduce
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {"url": "bolt://localhost:7687", "username": "neo4j", "password": "test"},
    },
    "llm": {
        "provider": "ollama",
        "config": {"model": "ministral-3:3b", "temperature": 0.1},
    },
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {"collection_name": "test", "path": "/tmp/test_qdrant", "embedding_model_dims": 768},
    },
}

m = Memory.from_config(config_dict=config)
result = m.add("Alice teaches CSE 151A at UCSD. DRSN sells on Amazon.", user_id="test", infer=True)
print(result["relations"])
# {'deleted_entities': [], 'added_entities': []} <-- always empty
Root Cause
In mem0/llms/ollama.py:
1. generate_response() — tools never passed to Ollama:
# Current code (line ~95):
response = self.client.chat(**params)
# `tools` is a parameter but never added to `params`
2. _parse_response() — tool_calls hard-coded to empty:
# Current code (line ~50):
if tools:
    processed_response = {
        "content": content,
        "tool_calls": [],  # <-- always empty
    }
    # Ollama doesn't support tool calls in the same way, so we return the content
    return processed_response
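The effect of that branch can be shown in isolation. Below is a condensed stand-in for the broken branch (not the file verbatim): even when the response carries a real tool call, the caller gets an empty list, which is why graph memory sees zero extracted entities:

```python
# Condensed stand-in for the current (broken) _parse_response() branch:
# whatever tool calls the model returned, the caller sees an empty list.
def parse_response_broken(response, tools):
    content = response["message"]["content"]
    if tools:
        return {"content": content, "tool_calls": []}  # model's tool calls dropped
    return content

# A response containing an actual tool call (shape per Ollama's /api/chat):
response = {
    "message": {
        "content": "",
        "tool_calls": [{"function": {"name": "extract_entities", "arguments": {"entities": []}}}],
    }
}
print(parse_response_broken(response, tools=[{"type": "function"}]))
# {'content': '', 'tool_calls': []}
```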
Fix
# 1. In generate_response(), before client.chat():
if tools:
    params["tools"] = tools
response = self.client.chat(**params)
# 2. In _parse_response(), parse actual tool_calls:
if tools:
    processed_response = {"content": content, "tool_calls": []}
    if isinstance(response, dict):
        raw_calls = response.get("message", {}).get("tool_calls", [])
    else:
        raw_calls = getattr(response.message, "tool_calls", None) or []
    for tc in raw_calls:
        if isinstance(tc, dict):
            fn = tc.get("function", {})
            name = fn.get("name", "")
            args = fn.get("arguments", {})
        else:
            fn = getattr(tc, "function", None)
            name = getattr(fn, "name", "") if fn else ""
            args = getattr(fn, "arguments", {}) if fn else {}
        processed_response["tool_calls"].append({"name": name, "arguments": args})
    return processed_response
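The parsing loop above can be exercised without a running Ollama server by lifting it into a standalone function and feeding it both response shapes it handles (a raw dict, and an object-style response as returned by newer ollama clients, simulated here with SimpleNamespace):

```python
from types import SimpleNamespace

# Standalone version of the patched parsing loop, for testing without an
# Ollama server. Handles both a raw dict response and an object-style one.
def parse_tool_calls(response):
    if isinstance(response, dict):
        raw_calls = response.get("message", {}).get("tool_calls", [])
    else:
        raw_calls = getattr(response.message, "tool_calls", None) or []
    out = []
    for tc in raw_calls:
        if isinstance(tc, dict):
            fn = tc.get("function", {})
            name = fn.get("name", "")
            args = fn.get("arguments", {})
        else:
            fn = getattr(tc, "function", None)
            name = getattr(fn, "name", "") if fn else ""
            args = getattr(fn, "arguments", {}) if fn else {}
        out.append({"name": name, "arguments": args})
    return out

# Dict-shaped response (raw /api/chat JSON):
dict_resp = {"message": {"tool_calls": [{"function": {"name": "f", "arguments": {"x": 1}}}]}}
# Object-shaped response (as from newer ollama Python clients), simulated:
obj_resp = SimpleNamespace(
    message=SimpleNamespace(
        tool_calls=[SimpleNamespace(function=SimpleNamespace(name="g", arguments={"y": 2}))]
    )
)

print(parse_tool_calls(dict_resp))  # [{'name': 'f', 'arguments': {'x': 1}}]
print(parse_tool_calls(obj_resp))   # [{'name': 'g', 'arguments': {'y': 2}}]
```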
After Fix
With the patch applied, ministral-3:3b correctly extracts 6 entities:
added_entities: [
    {source: 'cse_151a', relationship: 'is_course', target: 'ucsd'},
    {source: 'ucsd', relationship: 'is_affiliated_with', target: 'alice'},
    {source: 'drsn', relationship: 'sells', target: 'kitchen_products'},
    {source: 'amazon_us_marketplace', relationship: 'hosts', target: 'drsn'},
    ...
]
Environment
- mem0ai version: latest pip (installed 2026-03-01)
- Ollama: v0.9.x
- Models tested: ministral-3:3b, ministral-3:8b, qwen3:4b (all support tool calling per https://ollama.com/search?c=tools)
- OS: macOS (Apple Silicon)