feat: add MiniMax LLM provider (#4132) #4431
Conversation
kartik-mem0 left a comment
Hey @Himanshu-Sangshetti, thanks for the contribution! The code structure is clean and follows existing provider patterns well. Found a couple of issues that need fixing before this can be merged:
1. Wrong default base URL (critical)
The default URL https://api.minimaxi.io/v1 uses the old minimaxi.io domain which no longer resolves in DNS:
```
$ curl -sv "https://api.minimaxi.io/v1"
* Could not resolve host: api.minimaxi.io
```
MiniMax rebranded — the correct OpenAI-compatible endpoint per official docs is:
https://api.minimax.io/v1
Since users don't specify a URL in typical usage (Memory.from_config with just provider + api_key), this means the integration is broken out of the box. Please update both the default in mem0/llms/minimax.py and the assertions in tests/llms/test_minimax.py.
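To make the impact concrete, here is a sketch of the typical minimal configuration (keys follow mem0's standard LLM config shape as assumed here; the API key value is a placeholder):

```python
# Sketch of typical minimal usage: no base URL is given, so the
# provider's hard-coded default must be correct out of the box.
config = {
    "llm": {
        "provider": "minimax",
        "config": {
            "model": "MiniMax-M2.1",
            "api_key": "sk-placeholder",  # placeholder, not a real key
        },
    }
}
# Memory.from_config(config) would construct the client with the
# provider's default base_url, since none is specified here.
```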
2. response_format not passed through (critical)
generate_response accepts response_format but never forwards it to the API call. Mem0's memory pipeline calls this with response_format={"type": "json_object"} in multiple places (mem0/memory/main.py). Without it, memory extraction will produce unparseable responses.
Fix — add before the `if tools:` block:

```python
if response_format:
    params["response_format"] = response_format
```

Every other OpenAI-compatible provider (OpenAI, Groq, Together, XAI) does this. Looks like this was inherited from the DeepSeek provider, which has the same gap.
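For context, here is the request-assembly pattern the fix slots into — an illustrative sketch of the common OpenAI-compatible provider pattern, not the exact mem0 source (`build_params` is a hypothetical helper name):

```python
def build_params(model, messages, response_format=None, tools=None):
    # Illustrative sketch of OpenAI-compatible request assembly.
    params = {"model": model, "messages": messages}
    if response_format:  # the forwarding step that was missing
        params["response_format"] = response_format
    if tools:
        params["tools"] = tools
        params["tool_choice"] = "auto"
    return params

p = build_params(
    "MiniMax-M2.1",
    [{"role": "user", "content": "hi"}],
    response_format={"type": "json_object"},
)
# p now carries response_format, so the memory pipeline gets strict JSON
```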
3. Minor improvements
- Add a test for `response_format` passthrough — it's the most critical param for Mem0's memory flow
- Consider using `monkeypatch` instead of `os.environ` + `try/finally` in tests
- Latest model is `MiniMax-M2.7` per official docs — `M2.1` works but you might want to update the default
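On the second point, pytest's built-in `monkeypatch` fixture reverts `setenv` automatically when the test ends, so no manual cleanup is needed. A sketch (test name and key value are illustrative):

```python
import os

# pytest injects the built-in `monkeypatch` fixture; setenv is undone
# automatically after the test, replacing os.environ + try/finally.
def test_reads_api_key_from_env(monkeypatch):
    monkeypatch.setenv("MINIMAX_API_KEY", "test-key")
    assert os.environ["MINIMAX_API_KEY"] == "test-key"
```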
Happy to re-review once the URL and response_format fixes are in!
One more thing — the actual user-facing docs are missing.
Force-pushed from 56da54e to 839352b
Hey @kartik-mem0, thanks for the thorough review! All points have been addressed in the latest push:

1. Wrong default base URL — updated the default to `https://api.minimax.io/v1` in `mem0/llms/minimax.py` and the assertions in `tests/llms/test_minimax.py`.

2. response_format not passed through — added before the `if tools:` block:

```python
if response_format:
    params["response_format"] = response_format
```

Also added a dedicated test `test_generate_response_with_response_format` that verifies `response_format={"type": "json_object"}` is correctly forwarded to the API call.

3. Minor improvements — addressed as suggested.

4. Missing user-facing docs — added.

Also did a quick E2E test locally — the integration correctly hits `https://api.minimax.io/v1`.
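The dedicated `response_format` test would presumably look something like this sketch — `generate_response_stub` is a hypothetical stand-in for the real `MiniMaxLLM.generate_response`, with a mock replacing the OpenAI client:

```python
from unittest.mock import MagicMock

def generate_response_stub(client, messages, response_format=None, tools=None):
    # Hypothetical stand-in for MiniMaxLLM.generate_response,
    # mirroring the forwarding behavior added in this PR.
    params = {"model": "MiniMax-M2.1", "messages": messages}
    if response_format:
        params["response_format"] = response_format
    if tools:
        params["tools"] = tools
    response = client.chat.completions.create(**params)
    return response.choices[0].message.content

mock_client = MagicMock()
mock_client.chat.completions.create.return_value.choices = [
    MagicMock(message=MagicMock(content='{"facts": []}'))
]
result = generate_response_stub(
    mock_client,
    [{"role": "user", "content": "hi"}],
    response_format={"type": "json_object"},
)
# Verify the param actually reached the API call
sent = mock_client.chat.completions.create.call_args.kwargs
assert sent["response_format"] == {"type": "json_object"}
```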
Co-authored-by: kartik-mem0 <kartik.labhshetwar@mem0.ai>
Description
Adds MiniMax as a native LLM provider in Mem0 (Python) to use MiniMax models for memory extraction with `provider: "minimax"` config.

Motivation: Users need first-class support without LiteLLM workarounds.

Changes:
- `mem0/configs/llms/minimax.py` - `MinimaxConfig` (model, api_key, minimax_base_url)
- `mem0/llms/minimax.py` - `MiniMaxLLM` using OpenAI SDK + `base_url="https://api.minimax.io/v1"`
- `mem0/utils/factory.py` - Register `"minimax"` provider
- `mem0/llms/configs.py` - Add `"minimax"` to provider whitelist
- `tests/llms/test_minimax.py` - 8 unit tests
- `LLM.md` - Add MiniMax to provider list

Dependencies: None. Uses existing `openai` SDK with custom `base_url`.

Fixes #4132
Type of change
How Has This Been Tested?
Unit tests: `pytest tests/llms/test_minimax.py -v`

Test coverage:
- Default base URL (`https://api.minimax.io/v1`)
- Custom `minimax_base_url` config
- `MINIMAX_API_BASE` env override
- Defaults to `MiniMax-M2.1` when not specified
- Reads `MINIMAX_API_KEY` env when not in config
- `generate_response` without tools (returns text)
- `generate_response` with tools (returns tool_calls)
- `LlmFactory.create("minimax", {...})` returns `MiniMaxLLM`

Reproduce:
Checklist:
Maintainer Checklist