
feat: add reasoning_effort parameter support for reasoning models#4461

Merged
kartik-mem0 merged 1 commit into mem0ai:main from GopalGB:feat/reasoning-effort-param-3651 on Mar 27, 2026
Conversation

GopalGB (Contributor) commented Mar 21, 2026

Summary

  • Adds reasoning_effort parameter ("low", "medium", "high") to BaseLlmConfig, OpenAIConfig, and AzureOpenAIConfig
  • Passes reasoning_effort through to the API for reasoning models (o1, o3, gpt-5 series) via _get_supported_params in LLMBase
  • Propagates reasoning_effort when converting BaseLlmConfig to provider-specific configs in OpenAILLM and AzureOpenAILLM

Fixes #3651
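The configuration shape this PR unblocks can be sketched as follows. This is an illustrative example, not taken from the PR diff: the provider string, model name, and effort value are assumptions, while `Memory.from_config` and the `llm.config` section are the entry points named in the linked issue.

```python
# Sketch of an llm.config section that previously raised TypeError
# for Azure OpenAI (model name and effort value are illustrative).
config = {
    "llm": {
        "provider": "azure_openai",
        "config": {
            "model": "o3-mini",
            "reasoning_effort": "low",  # one of "low" | "medium" | "high"
        },
    }
}

# With this PR applied, the following should no longer raise TypeError:
# from mem0 import Memory
# memory = Memory.from_config(config)
```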

Changes

| File | Change |
| --- | --- |
| mem0/configs/llms/base.py | Add `reasoning_effort` param to `BaseLlmConfig.__init__` |
| mem0/configs/llms/openai.py | Add `reasoning_effort` param to `OpenAIConfig.__init__`, pass to super |
| mem0/configs/llms/azure.py | Add `reasoning_effort` param to `AzureOpenAIConfig.__init__`, pass to super |
| mem0/llms/base.py | Include `reasoning_effort` in `_get_supported_params` for reasoning models |
| mem0/llms/openai.py | Propagate `reasoning_effort` in `BaseLlmConfig` -> `OpenAIConfig` conversion |
| mem0/llms/azure_openai.py | Propagate `reasoning_effort` in `BaseLlmConfig` -> `AzureOpenAIConfig` conversion |
| tests/llms/test_openai.py | 4 new tests for `reasoning_effort` with OpenAI |
| tests/llms/test_azure_openai.py | 3 new tests for `reasoning_effort` with Azure OpenAI |
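The gating described above (only send `reasoning_effort` when it is set and the model is a reasoning model) can be sketched as below. This is a simplified stand-in, not the actual `LLMBase._get_supported_params` implementation: the dataclass, function name, and prefix list are assumptions based on the model families named in the summary.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative stand-in for BaseLlmConfig; the real class has many more fields.
@dataclass
class LlmConfig:
    model: str
    reasoning_effort: Optional[str] = None  # "low" | "medium" | "high"

# Model families the PR summary names as reasoning models.
REASONING_MODEL_PREFIXES = ("o1", "o3", "gpt-5")

def get_supported_params(config: LlmConfig) -> dict:
    """Simplified sketch of the gating in LLMBase._get_supported_params."""
    params: dict = {}
    is_reasoning = config.model.startswith(REASONING_MODEL_PREFIXES)
    if is_reasoning and config.reasoning_effort is not None:
        params["reasoning_effort"] = config.reasoning_effort
    return params

print(get_supported_params(LlmConfig("o3-mini", "high")))  # {'reasoning_effort': 'high'}
print(get_supported_params(LlmConfig("gpt-4o", "high")))   # {} (non-reasoning model)
print(get_supported_params(LlmConfig("o1", None)))         # {} (not configured)
```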

Test plan

  • All 15 existing tests pass unchanged
  • 7 new tests verify:
    • reasoning_effort is sent to API when configured on reasoning models
    • reasoning_effort is NOT sent when not configured (None)
    • reasoning_effort is NOT sent for non-reasoning models
    • All valid values ("low", "medium", "high") are accepted
    • AzureOpenAIConfig no longer raises TypeError with reasoning_effort kwarg
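The first three checks in the test plan can be illustrated with mock-based tests in the style below. This is a hedged sketch, not the contents of `tests/llms/test_openai.py`: `build_params` is a hypothetical helper standing in for the LLM wrapper's parameter assembly, and the mocked `chat.completions.create` call mirrors the OpenAI client surface.

```python
from unittest.mock import MagicMock

def build_params(model: str, reasoning_effort):
    # Hypothetical stand-in for the parameter assembly under test.
    params = {"model": model, "messages": [{"role": "user", "content": "hi"}]}
    if model.startswith(("o1", "o3", "gpt-5")) and reasoning_effort:
        params["reasoning_effort"] = reasoning_effort
    return params

def test_reasoning_effort_sent_for_reasoning_model():
    client = MagicMock()
    client.chat.completions.create(**build_params("o3-mini", "medium"))
    _, kwargs = client.chat.completions.create.call_args
    assert kwargs["reasoning_effort"] == "medium"

def test_reasoning_effort_omitted_for_non_reasoning_model():
    client = MagicMock()
    client.chat.completions.create(**build_params("gpt-4o", "medium"))
    _, kwargs = client.chat.completions.create.call_args
    assert "reasoning_effort" not in kwargs

test_reasoning_effort_sent_for_reasoning_model()
test_reasoning_effort_omitted_for_non_reasoning_model()
```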

Add `reasoning_effort` parameter ("low", "medium", "high") to
BaseLlmConfig, OpenAIConfig, and AzureOpenAIConfig. When set and
using a reasoning model (o1, o3, gpt-5 series), the parameter is
passed through to the API call via `_get_supported_params`.

This fixes the TypeError raised when passing `reasoning_effort`
in the llm.config section (e.g., via Memory.from_config).

Fixes mem0ai#3651
CLAassistant commented Mar 21, 2026

CLA assistant check
All committers have signed the CLA.


Successfully merging this pull request may close these issues.

Add support for reasoning_effort parameter for reasoning models in AzureOpenAIConfig

3 participants