fix: use root LLM config as fallback for graph store instead of hardcoded OpenAI default #4466
Merged
whysosaket merged 1 commit into main on Mar 21, 2026
Conversation
…oded OpenAI default (#3425)

The graph store ignored the user's root-level LLM config and always used a hardcoded OpenAI default. This fixes both the default override and a provider/config mismatch where the provider could come from `graphStore.llm` but the config always came from the root LLM.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2899d7a to a60ada9
kartik-mem0 approved these changes on Mar 21, 2026
whysosaket approved these changes on Mar 21, 2026
jamebobob pushed a commit to jamebobob/mem0-vigil-recall that referenced this pull request on Mar 29, 2026
…oded OpenAI default (mem0ai#4466)

Co-authored-by: utkarsh240799 <utkarsh240799@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Description
If the user provides a root-level `llm` config (e.g. Anthropic), the graph store ignores it and uses the hardcoded default OpenAI config instead. The user-provided LLM config only works for the graph store if it is redundantly specified under `graphStore.llm`.

Root cause (two problems in `graph_memory.ts`):

1. Default config override — `defaults.ts` hardcoded `graphStore.llm` to `openai/gpt-4-turbo-preview`. Since `ConfigManager.mergeConfig` shallow-spreads defaults first, this always overrode the user's root `llm.provider`.
2. Provider/config mismatch — the constructor resolved the LLM provider with the correct precedence (`graphStore.llm` → root `llm` → `"openai"`), but always passed `this.config.llm.config` (the root config) to `LLMFactory.create`, ignoring `graphStore.llm.config` entirely.

Fix:

1. Remove the hardcoded `graphStore.llm` from `defaults.ts` so the root `llm` naturally becomes the fallback.
2. Make `graph_memory.ts` follow the same precedence as the provider resolution: use `graphStore.llm.config` when present, falling back to the root `llm.config`.

Fixes #3425
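As a rough sketch, both the old failure mode and the fixed precedence can be illustrated in a few lines of TypeScript. The type shapes, the shallow-spread stand-in for the merge, and the `resolveGraphLLM` helper are assumptions drawn from this description, not mem0's actual code:

```typescript
// Minimal stand-ins for the config shapes described above (hypothetical;
// the real mem0 types differ).
interface LLMConfig {
  provider?: string;
  config?: Record<string, unknown>;
}
interface RootConfig {
  llm?: LLMConfig;
  graphStore?: { llm?: LLMConfig };
}

// Before the fix: defaults hardcoded graphStore.llm, and because the merge
// shallow-spreads defaults first, the user's root llm never reached the
// graph store.
const oldDefaults: RootConfig = {
  graphStore: {
    llm: { provider: "openai", config: { model: "gpt-4-turbo-preview" } },
  },
};
const userConfig: RootConfig = { llm: { provider: "anthropic" } };
const oldMerged: RootConfig = { ...oldDefaults, ...userConfig };
// oldMerged.graphStore.llm.provider is still "openai" — the user's
// root-level Anthropic provider is ignored by the graph store.

// After the fix: no hardcoded graphStore.llm default, and both provider and
// config follow the same precedence: graphStore.llm → root llm → "openai".
function resolveGraphLLM(cfg: RootConfig): {
  provider: string;
  config: Record<string, unknown>;
} {
  const graphLLM = cfg.graphStore?.llm;
  return {
    provider: graphLLM?.provider ?? cfg.llm?.provider ?? "openai",
    config: graphLLM?.config ?? cfg.llm?.config ?? {},
  };
}
```

With this precedence, a root-level config flows to the graph store by default, and `graphStore.llm` only takes effect as an explicit override.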
Type of change
How Has This Been Tested?
Added 7 new tests across two test files:

`graph-memory-parsing.test.ts` — 4 unit tests for the `MemoryGraph` constructor (mocked `LLMFactory`):

- the root `llm` config propagates when no `graphStore.llm` is set
- `graphStore.llm` overrides the root `llm` (both provider and config)
- the root `llm.config` is used when `graphStore.llm.config` is undefined
- the provider falls back to `"openai"` when no provider is set anywhere

`config-manager.test.ts` — 3 end-to-end tests through `ConfigManager.mergeConfig`:

- no default `graphStore.llm` exists after merge (the root `llm` is the fallback)
- a user-provided `graphStore.llm` is preserved when the user provides it
- no `graphStore.llm` is injected when the user doesn't provide one

Full test suite results (511 tests across 33 suites — all pass):

- `graph-memory-parsing.test.ts` — 31 passed
- `config-manager.test.ts` — 26 passed
- `graph-prompts.test.ts` — 21 passed
- `memory.init.test.ts` — 5 passed
- `memory.add.test.ts` — 10 passed
- `memory.crud.test.ts` — 22 passed

Checklist:
Maintainer Checklist
🤖 Generated with Claude Code