
docs: fix LLM reranker config examples and field names #4539

Merged
whysosaket merged 1 commit into main from fix/issue-3803-llm-reranker-docs on Mar 26, 2026

Conversation

@utkarsh240799
Contributor

Fixes #3803

Summary

The LLM reranker documentation had multiple inaccuracies that would cause users to write incorrect configurations:

Wrong config structure — All examples used a nested "llm": {"provider": ..., "config": {...}} structure as the primary/default approach, but LLMRerankerConfig expects flat fields (provider, model, api_key, etc.). While the nested llm key is supported as an optional override for providers needing extra fields (e.g., Ollama's ollama_base_url), it should not be the default pattern shown to users.
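For illustration, the corrected shape might look like the sketch below. The surrounding "reranker" block, the key names outside LLMRerankerConfig, and all values are assumptions for this sketch, not quotes from the docs:

```python
# Flat fields on LLMRerankerConfig (the documented default pattern):
flat_config = {
    "reranker": {                      # outer block name is assumed
        "provider": "llm_reranker",    # assumed provider key
        "config": {
            "provider": "openai",      # flat field
            "model": "gpt-4o-mini",    # flat field
            "api_key": "sk-...",       # flat field (placeholder value)
        },
    }
}

# Nested "llm" used only as an override for provider-specific fields,
# e.g. Ollama's ollama_base_url (the inner nesting here is assumed):
ollama_config = {
    "reranker": {
        "provider": "llm_reranker",
        "config": {
            "provider": "ollama",
            "model": "llama3.2",
            "llm": {"config": {"ollama_base_url": "http://localhost:11434"}},
        },
    }
}
```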

Wrong field names — Docs used custom_prompt (should be scoring_prompt), top_n (should be top_k), and score_range (does not exist in LLMRerankerConfig).

Wrong prompt template variables — Custom prompt examples used {memories}, {user_context}, {current_time}, {recent_activities}, and {context}, but the code at llm_reranker.py:123 only passes {query} and {document} to .format(). Any prompt using the old variables would raise a KeyError at runtime.
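The failure mode is plain Python string formatting: `str.format` raises `KeyError` for any placeholder it is not given a value for. A minimal demonstration (the prompt text below is illustrative, not the library's default prompt):

```python
# The reranker fills the prompt with exactly two variables
# (scoring_prompt.format(query=..., document=...), per llm_reranker.py:123):
prompt = "Query: {query}\nDocument: {document}\nRelevance score (0.0-1.0):"
filled = prompt.format(query="favorite food", document="User likes pizza.")

# A template still using a removed variable fails at runtime, because
# str.format has no value for the unknown placeholder:
stale = "Rank these memories: {memories}"
try:
    stale.format(query="favorite food", document="User likes pizza.")
    raised = False
except KeyError:
    raised = True  # KeyError: 'memories'
```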

Wrong score scale — Examples used 1-10 and 1-5 scales, but the default prompt and _extract_score() regex use a 0.0-1.0 scale.
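The actual `_extract_score()` regex is not quoted in this PR, but the extract-and-clamp behavior it describes can be sketched roughly like this (the function name, regex, and fallback are assumptions):

```python
import re

def extract_score_sketch(text: str) -> float:
    """Hypothetical stand-in for _extract_score: grab the first decimal
    number in the LLM response and clamp it into [0.0, 1.0]."""
    match = re.search(r"\d+(?:\.\d+)?", text)
    if match is None:
        return 0.0  # assumed fallback for unparseable responses
    return max(0.0, min(1.0, float(match.group())))
```

Under this scheme, a response written on a 1-10 scale (as in the old docs) would be clamped to 1.0, silently flattening all scores, which is why the scale mismatch matters.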

Wrong default prompt shown — The docs showed a batch-ranking prompt ("rank the memory entries", "JSON array with scores"), but the actual code scores one document at a time.
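The per-document architecture can be sketched as follows; `rerank_per_document` and `score_fn` are hypothetical names standing in for the reranker's one-LLM-call-per-document scoring, not the library's actual API:

```python
def rerank_per_document(query, documents, score_fn, top_k=5):
    """Score each document independently against the query, then sort
    descending and keep top_k. score_fn(query, doc) stands in for a
    single LLM scoring call returning a float in [0.0, 1.0]."""
    scored = [(doc, score_fn(query, doc)) for doc in documents]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```

A batch-ranking prompt ("return a JSON array with scores") assumes all candidates are visible in one call, so it cannot be dropped into this loop, which is why those examples were removed rather than rewritten.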

Stale model names — Examples referenced gpt-4, gpt-3.5-turbo, claude-3-sonnet-20240229, and llama2.

Changes

Across 3 files (llm_reranker.mdx, custom-prompts.mdx, reranker-search.mdx):

  • Replaced nested "llm" config with flat field structure in all examples (kept nested llm only for Ollama and Azure OpenAI examples where provider-specific fields are needed, with a <Note> explaining when to use it)
  • Renamed custom_prompt → scoring_prompt and top_n → top_k; removed score_range
  • Updated parameters table to match all 8 fields in LLMRerankerConfig with correct types and defaults
  • Replaced all prompt template variables with {query} and {document} (the only two supported)
  • Updated default prompt example to match the actual _get_default_prompt() output
  • Updated all score scales (1-10 and 1-5) to 0.0-1.0
  • Updated prompt variables table in custom-prompts.mdx (removed {memories}, {user_id}, {user_context})
  • Removed prompt examples that relied on batch-ranking ({memories} plural) which is incompatible with the per-document scoring architecture
  • Updated model names to current versions (gpt-4o-mini, gpt-4o, claude-sonnet-4-20250514, llama3.2)

Testing

  • Verified every parameter in the docs table matches LLMRerankerConfig field names, types, and defaults (mem0/configs/rerankers/llm.py)
  • Verified default prompt text in docs is an exact copy of _get_default_prompt() (mem0/reranker/llm_reranker.py:66-78)
  • Verified {query} and {document} are the only template variables passed at self.scoring_prompt.format(query=query, document=doc_text) (mem0/reranker/llm_reranker.py:123)
  • Verified score scale 0.0-1.0 matches _extract_score() regex pattern and clamping (mem0/reranker/llm_reranker.py:80-91)
  • Verified Ollama/Azure examples correctly use flat fields + nested llm, matching the code path at mem0/reranker/llm_reranker.py:39-47
  • Grep confirmed no remaining instances of custom_prompt (as config key), top_n, score_range, {memories}, {user_context}, {current_time}, or {recent_activities} across all 3 files
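The grep check in the last bullet can be reproduced roughly like this (file paths assumed relative to the docs directory):

```shell
# Grep exits non-zero (and prints nothing) when none of the stale
# identifiers remain in the three touched files:
grep -nE 'custom_prompt|top_n|score_range|\{memories\}|\{user_context\}|\{current_time\}|\{recent_activities\}' \
    llm_reranker.mdx custom-prompts.mdx reranker-search.mdx \
  && echo "stale identifiers found" \
  || echo "clean"
```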

🤖 Generated with Claude Code

All LLM reranker docs used a nested "llm" config structure as the
default, but LLMRerankerConfig expects flat fields (provider, model,
api_key, etc.). Also fixes wrong field names (custom_prompt →
scoring_prompt, top_n → top_k), removes non-existent score_range
param, corrects prompt template variables to {query}/{document},
and updates score scales to 0.0-1.0 to match the actual default prompt.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@mintlify
Contributor

mintlify bot commented Mar 25, 2026

Preview deployment for your docs. Learn more about Mintlify Previews.

Project | Status | Preview | Updated (UTC)
mem0 | 🟢 Ready | View Preview | Mar 25, 2026, 2:09 PM

whysosaket merged commit 7fff26f into main on Mar 26, 2026
4 checks passed
whysosaket deleted the fix/issue-3803-llm-reranker-docs branch on March 26, 2026 at 14:28


Development

Successfully merging this pull request may close these issues.

Documentation mismatch for LLM reranker config: "llm" nesting should be removed

3 participants