Bug
addToVectorStore iterates over facts extracted by the LLM and passes each one directly to this.embedder.embed(fact) without validating that the value is a string. When the LLM returns a fact as an object (e.g. {"fact": "some text"} instead of "some text"), Ollama's Go server rejects it:
ResponseError: json: cannot unmarshal object into Go struct field EmbeddingRequest.prompt of type string
Root cause
In addToVectorStore (roughly):
for (const fact of facts) {
  const embedding = await this.embedder.embed(fact); // fact may be an object!
  newMessageEmbeddings[fact] = embedding;
}
facts is parsed from LLM JSON output. The LLM is an untrusted source — it can return any JSON shape. With models that follow instructions less reliably (local models like llama3.1:8b, mistral, etc.), the response may be:
[{"fact": "User prefers dark mode"}, {"fact": "User is named Alice"}]
instead of the expected:
["User prefers dark mode", "User is named Alice"]
This non-string object is then passed directly to OllamaEmbedder.embed(), which does:
const response = await this.ollama.embeddings({ model: this.model, prompt: text });
Ollama's Go endpoint type-checks prompt as string, so passing an object causes an unmarshal error.
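The failure can be reproduced against ollama-js directly, independent of mem0 (a minimal sketch assuming the ollama npm package and a local server with nomic-embed-text pulled):

```ts
import { Ollama } from "ollama";

const ollama = new Ollama(); // defaults to http://127.0.0.1:11434

// Fine: prompt is a string.
await ollama.embeddings({ model: "nomic-embed-text", prompt: "User prefers dark mode" });

// Throws ResponseError: json: cannot unmarshal object into Go struct field
// EmbeddingRequest.prompt of type string
await ollama.embeddings({
  model: "nomic-embed-text",
  // @ts-expect-error -- simulating the untyped fact object that reaches embed() at runtime
  prompt: { fact: "User prefers dark mode" },
});
```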
Impact
- memory.add() throws on every call when using local/small LLMs
- Memory capture (autoCapture in any integration) silently fails or crashes
- No useful error message — the root cause (LLM returned wrong type) is buried
- Only models that reliably follow strict JSON schemas (GPT-4 class) avoid this in practice
Steps to reproduce
- Use llama3.1:8b (or any local model) as the LLM provider with Ollama
- Call memory.add([...], { userId: "test" }) with a complex or multi-turn conversation
- Observe ResponseError: json: cannot unmarshal object into Go struct field EmbeddingRequest.prompt of type string (repro sketch below)
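A rough end-to-end repro follows. The Memory import path is inferred from dist/oss/index.mjs, and the config keys are a sketch that may need adjusting for your mem0ai version:

```ts
import { Memory } from "mem0ai/oss";

// Config shape is approximate; adjust providers/keys to your setup.
const memory = new Memory({
  llm: { provider: "ollama", config: { model: "llama3.1:8b" } },
  embedder: { provider: "ollama", config: { model: "nomic-embed-text" } },
});

// With a small local model, fact extraction often returns objects instead of
// strings, and add() throws the unmarshal error quoted above.
await memory.add(
  [
    { role: "user", content: "Hi, I'm Alice. Please remember I prefer dark mode." },
    { role: "assistant", content: "Noted, Alice. Dark mode it is." },
  ],
  { userId: "test" },
);
```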
Expected behavior
addToVectorStore should validate that each fact is a string before passing it to the embedder. If a fact is an object, extract its text value or convert to string rather than crashing.
Suggested fix
for (const rawFact of facts) {
  const fact = typeof rawFact === "string" ? rawFact
    : (rawFact && typeof rawFact === "object" && "fact" in rawFact) ? String(rawFact.fact)
    : JSON.stringify(rawFact ?? "");
  const embedding = await this.embedder.embed(fact);
  newMessageEmbeddings[fact] = embedding;
}
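A stricter variant is to validate the shape where the LLM output is parsed, for example with zod (a sketch only; zod may not be a mem0 dependency, and llmResponseText is a hypothetical variable holding the raw LLM output):

```ts
import { z } from "zod";

// Accept both the expected string[] and the common { fact: string }[] failure
// shape, normalizing everything to string[].
const FactsSchema = z.array(
  z.union([
    z.string(),
    z.object({ fact: z.string() }).transform((obj) => obj.fact),
  ]),
);

const facts = FactsSchema.parse(JSON.parse(llmResponseText));
```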
Similarly, OllamaEmbedder.embed() should defensively coerce its input to string.
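On the embedder side, a defensive embed() could look like this (method body reconstructed from the snippet quoted above; the real class internals may differ):

```ts
async embed(text: unknown): Promise<number[]> {
  // Coerce anything non-string before it reaches Ollama's Go endpoint.
  const prompt = typeof text === "string" ? text : JSON.stringify(text);
  const response = await this.ollama.embeddings({ model: this.model, prompt });
  return response.embedding;
}
```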
Environment
- mem0ai version: latest (tested against dist/oss/index.mjs)
- LLM: ollama provider with llama3.1:8b
- Embedder: ollama provider with nomic-embed-text
- Node.js OSS bindings