docs: fix LLM reranker config examples and field names #4539
Merged
whysosaket merged 1 commit into main on Mar 26, 2026
Conversation
All LLM reranker docs used a nested `llm` config structure as the
default, but `LLMRerankerConfig` expects flat fields (`provider`, `model`,
`api_key`, etc.). Also fixes wrong field names (`custom_prompt` →
`scoring_prompt`, `top_n` → `top_k`), removes the non-existent `score_range`
param, corrects prompt template variables to `{query}`/`{document}`,
and updates score scales to 0.0-1.0 to match the actual default prompt.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This was referenced Mar 25, 2026
kartik-mem0 approved these changes Mar 25, 2026
whysosaket approved these changes Mar 26, 2026
Fixes #3803
Summary
The LLM reranker documentation had multiple inaccuracies that would cause users to write incorrect configurations:
- **Wrong config structure** — All examples used a nested `"llm": {"provider": ..., "config": {...}}` structure as the primary/default approach, but `LLMRerankerConfig` expects flat fields (`provider`, `model`, `api_key`, etc.). While the nested `llm` key is supported as an optional override for providers needing extra fields (e.g., Ollama's `ollama_base_url`), it should not be the default pattern shown to users.
- **Wrong field names** — Docs used `custom_prompt` (should be `scoring_prompt`), `top_n` (should be `top_k`), and `score_range` (does not exist in `LLMRerankerConfig`).
- **Wrong prompt template variables** — Custom prompt examples used `{memories}`, `{user_context}`, `{current_time}`, `{recent_activities}`, and `{context}`, but the code at `llm_reranker.py:123` only passes `{query}` and `{document}` to `.format()`. Any prompt using the old variables would raise a `KeyError` at runtime.
- **Wrong score scale** — Examples used 1-10 and 1-5 scales, but the default prompt and the `_extract_score()` regex use a 0.0-1.0 scale.
- **Wrong default prompt shown** — The docs showed a batch-ranking prompt ("rank the memory entries", "JSON array with scores"), but the actual code scores one document at a time.
- **Stale model names** — Examples referenced `gpt-4`, `gpt-3.5-turbo`, `claude-3-sonnet-20240229`, and `llama2`.

Changes
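As a concrete illustration of the corrected structure, a flat-field reranker config might look like the sketch below. The field names (`provider`, `model`, `api_key`, `top_k`, `scoring_prompt`) and the nested `llm` override come from this PR; the exact wrapper these dicts plug into is an assumption, not the documented API.

```python
# Hedged sketch of the corrected flat-field config (not copied from the docs).
reranker_config = {
    "provider": "openai",     # flat fields, not a nested "llm" block
    "model": "gpt-4o-mini",   # current model name, not gpt-4 / gpt-3.5-turbo
    "api_key": "sk-...",
    "top_k": 5,               # previously misdocumented as top_n
    # scoring_prompt is optional; the default scores one document at a time
    # on a 0.0-1.0 scale using only {query} and {document}.
}

# The nested "llm" key remains only as a provider-specific override,
# e.g. for Ollama's ollama_base_url (nesting assumed from the PR text):
ollama_override = {
    "provider": "ollama",
    "model": "llama3.2",
    "llm": {"ollama_base_url": "http://localhost:11434"},
}
```

Note that `score_range` appears in neither dict: it was never a real `LLMRerankerConfig` field.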
Across 3 files (`llm_reranker.mdx`, `custom-prompts.mdx`, `reranker-search.mdx`):

- Replaced the nested `"llm"` config with the flat field structure in all examples (kept nested `llm` only for the Ollama and Azure OpenAI examples where provider-specific fields are needed, with a `<Note>` explaining when to use it)
- `custom_prompt` → `scoring_prompt`, `top_n` → `top_k`; removed `score_range`
- Documented `LLMRerankerConfig` with correct types and defaults
- Corrected prompt template variables to `{query}` and `{document}` (the only two supported)
- Updated the default prompt shown in the docs to match the actual `_get_default_prompt()` output
- Fixed the custom prompt examples in `custom-prompts.mdx` (removed `{memories}`, `{user_id}`, `{user_context}`)
- Removed the batch-ranking example (`{memories}` plural), which is incompatible with the per-document scoring architecture
- Updated model names (`gpt-4o-mini`, `gpt-4o`, `claude-sonnet-4-20250514`, `llama3.2`)

Testing
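The template-variable fix is easy to reproduce without mem0 at all: `str.format()` raises `KeyError` as soon as a prompt references a variable that isn't supplied. A minimal standalone check (the prompt wording here is illustrative, not the actual default prompt):

```python
# Stand-ins for the values the reranker passes to .format().
query, doc_text = "favorite food", "User likes pizza."

# Corrected style: only {query} and {document} appear in the template.
good_prompt = (
    "Rate from 0.0 to 1.0 how relevant this document is to the query.\n"
    "Query: {query}\nDocument: {document}\nScore:"
)
print(good_prompt.format(query=query, document=doc_text))  # formats cleanly

# Old docs' style: variables that are never supplied raise KeyError.
bad_prompt = "Rank these memories for {user_context}: {memories}"
try:
    bad_prompt.format(query=query, document=doc_text)
except KeyError as exc:
    print("KeyError:", exc)  # KeyError: 'user_context'
```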
- Verified `LLMRerankerConfig` field names, types, and defaults against `mem0/configs/rerankers/llm.py`
- Verified the default prompt against `_get_default_prompt()` (`mem0/reranker/llm_reranker.py:66-78`)
- Confirmed `{query}` and `{document}` are the only template variables passed at `self.scoring_prompt.format(query=query, document=doc_text)` (`mem0/reranker/llm_reranker.py:123`)
- Checked the `_extract_score()` regex pattern and clamping (`mem0/reranker/llm_reranker.py:80-91`)
- Confirmed the nested `llm` override is supported, matching the code path at `mem0/reranker/llm_reranker.py:39-47`
- Grepped to confirm no remaining references to `custom_prompt` (as config key), `top_n`, `score_range`, `{memories}`, `{user_context}`, `{current_time}`, or `{recent_activities}` across all 3 files

🤖 Generated with Claude Code
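The 0.0-1.0 extraction-and-clamping behavior checked above can be sketched as follows. This is a hypothetical re-implementation for illustration only, not the actual `_extract_score()` from `mem0/reranker/llm_reranker.py`, and the no-match fallback of `0.0` is an assumption:

```python
import re


def extract_score_sketch(text: str) -> float:
    """Pull the first number out of an LLM reply and clamp it to [0.0, 1.0].

    Hypothetical sketch of the behavior described in the PR; the real
    _extract_score() regex and fallback may differ.
    """
    match = re.search(r"\d+(?:\.\d+)?", text)
    if not match:
        return 0.0  # assumed fallback when the reply contains no number
    return max(0.0, min(1.0, float(match.group())))


print(extract_score_sketch("Score: 0.85"))  # 0.85
print(extract_score_sketch("9"))            # 1.0 (out-of-range values are clamped)
```

Clamping is why the old 1-10 and 1-5 scales in the docs were actively misleading: a model answering "9" on a 1-10 scale would be flattened to 1.0 rather than interpreted proportionally.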