
fix(reranker): support nested llm config in LLMReranker for non-OpenAI providers #4405

Merged
whysosaket merged 2 commits into main from fix/reranker-nested-llm-config-3768 on Mar 19, 2026
Conversation

@kartik-mem0 (Contributor) commented Mar 18, 2026

Summary

  • Fix LLMRerankerConfig to accept a nested llm dict config
  • Fix LLMReranker to forward provider-specific settings (e.g. ollama_base_url) from nested config to LlmFactory

Fixes #3768

Problem

When configuring llm_reranker with a nested LLM provider like Ollama:

"reranker": {
    "provider": "llm_reranker",
    "config": {
        "llm": {
            "provider": "ollama",
            "config": {
                "model": "dengcao/Qwen3-Reranker-0.6B:F16",
                "ollama_base_url": "http://localhost:11434"
            }
        }
    }
}
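
For context, a minimal usage sketch, assuming mem0's standard `Memory.from_config` entry point (the variable names are illustrative, not from this PR):

```python
from mem0 import Memory

config = {
    "reranker": {
        "provider": "llm_reranker",
        "config": {
            "llm": {
                "provider": "ollama",
                "config": {
                    "model": "dengcao/Qwen3-Reranker-0.6B:F16",
                    "ollama_base_url": "http://localhost:11434",
                },
            }
        },
    }
}

# Before this fix, the nested llm block above was dropped and the reranker
# silently fell back to OpenAI; after it, Ollama is used as configured.
m = Memory.from_config(config)
```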

The nested provider/model/URL were silently ignored because:

  1. LLMRerankerConfig had no llm field — Pydantic silently dropped the dict (sketched after this list)
  2. LLMReranker.__init__ hardcoded only 4 fields (model, temperature, max_tokens, api_key)
  3. Provider-specific params like ollama_base_url were never forwarded
  4. Everything fell back to provider="openai" / model="gpt-4o-mini", causing 401 errors
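
To illustrate cause 1, here is a minimal sketch of Pydantic's default behavior; the class below is a hypothetical pre-fix shape, not the actual mem0 code:

```python
from pydantic import BaseModel

class OldLLMRerankerConfig(BaseModel):  # hypothetical pre-fix shape
    model: str = "gpt-4o-mini"

# Pydantic's default extra="ignore" drops unknown fields without raising,
# so the nested llm dict vanishes without any error or warning.
cfg = OldLLMRerankerConfig(llm={"provider": "ollama"})
print(cfg.model_dump())  # {'model': 'gpt-4o-mini'} — no trace of the nested block
```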

Changes

  1. mem0/configs/rerankers/llm.py — Added llm: Optional[Dict[str, Any]] field to accept nested LLM config
  2. mem0/reranker/llm_reranker.py — When config.llm is provided, use the nested provider and config dict (including ollama_base_url, model, etc.) to create the LLM via LlmFactory.create(). Falls back to the original flat-field behavior when no nested dict is present (a sketch of both changes follows this list).
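
A minimal sketch of both changes, assuming mem0's `LlmFactory.create(provider, config_dict)` signature; the merged code may differ in detail, and the flat-field defaults other than `gpt-4o-mini` are illustrative:

```python
from typing import Any, Dict, Optional
from pydantic import BaseModel
from mem0.utils.factory import LlmFactory

class LLMRerankerConfig(BaseModel):
    # New: accepts the nested LLM block shown in the Problem section.
    llm: Optional[Dict[str, Any]] = None
    # Existing flat fields, kept for backward compatibility.
    model: str = "gpt-4o-mini"
    temperature: float = 0.0
    max_tokens: int = 1000
    api_key: Optional[str] = None

class LLMReranker:
    def __init__(self, config: LLMRerankerConfig):
        if config.llm:
            # Nested path: forward the provider name and its full config
            # dict, so provider-specific keys like ollama_base_url reach
            # the factory untouched.
            provider = config.llm.get("provider", "openai")
            llm_config = dict(config.llm.get("config") or {})
            # Apply the same defaults as the flat path when keys are absent.
            llm_config.setdefault("model", config.model)
            llm_config.setdefault("temperature", config.temperature)
            llm_config.setdefault("max_tokens", config.max_tokens)
            self.llm = LlmFactory.create(provider, llm_config)
        else:
            # Flat path: original behavior, preserved for backward
            # compatibility; defaults to OpenAI.
            self.llm = LlmFactory.create("openai", {
                "model": config.model,
                "temperature": config.temperature,
                "max_tokens": config.max_tokens,
                "api_key": config.api_key,
            })
```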

Test plan

  • Configure llm_reranker with nested Ollama config — reranker uses Ollama instead of OpenAI (exercised in the pytest sketch after this list)
  • Configure llm_reranker with flat config (no nested llm) — backward compatibility preserved
  • Provider-specific settings like ollama_base_url forwarded correctly
  • model, temperature, max_tokens, api_key defaults applied symmetrically in both paths
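
A hedged pytest sketch of the first two cases; the test names and layout are assumed, not taken from the merged tests:

```python
from mem0.configs.rerankers.llm import LLMRerankerConfig

def test_nested_llm_config_is_retained():
    cfg = LLMRerankerConfig(
        llm={
            "provider": "ollama",
            "config": {
                "model": "dengcao/Qwen3-Reranker-0.6B:F16",
                "ollama_base_url": "http://localhost:11434",
            },
        }
    )
    assert cfg.llm["provider"] == "ollama"
    assert cfg.llm["config"]["ollama_base_url"] == "http://localhost:11434"

def test_flat_config_still_validates():
    # Backward compatibility: no nested llm block, flat fields only.
    cfg = LLMRerankerConfig(model="gpt-4o-mini")
    assert cfg.llm is None
```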

@whysosaket (Member) commented:

Please add tests.

@whysosaket merged commit 348f44b into main on Mar 19, 2026
8 checks passed
jamebobob pushed a commit to jamebobob/mem0-vigil-recall that referenced this pull request Mar 29, 2026


Development

Successfully merging this pull request may close: llm_reranker provider use ollama config error (#3768)
