
Add MiniMax-M2.5 model support#12835

Merged
neubig merged 1 commit into main from add-minimax-m2.5
Feb 11, 2026

Conversation

@neubig (Contributor) commented Feb 11, 2026

Summary

Add MiniMax-M2.5 model support to the OpenHands GUI.

Changes

  • openhands/utils/llm.py: Added openhands/minimax-m2.5 to the openhands_models list so it appears in the backend API
  • frontend/src/utils/verified-models.ts: Added minimax-m2.5 to both VERIFIED_MODELS and VERIFIED_OPENHANDS_MODELS lists

This enables MiniMax-M2.5 to appear in the model selector dropdown in the GUI.
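As a sketch, the backend change described above amounts to appending one entry to the model list. The neighboring entries below are illustrative placeholders, not the file's actual contents; only the list name and the new entry come from the description.

```python
# Illustrative sketch of the openhands/utils/llm.py change; the surrounding
# entries are placeholders, not the real file contents.
openhands_models = [
    "openhands/claude-sonnet-4-5-20250929",  # illustrative existing entry
    "openhands/gpt-5.2",                     # illustrative existing entry
    "openhands/minimax-m2.5",                # new entry added by this PR
]
```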



To run this PR locally, use the following command:

GUI with Docker:

docker run -it --rm \
  -p 3000:3000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --add-host host.docker.internal:host-gateway \
  -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.openhands.dev/openhands/runtime:28a9a41-nikolaik \
  --name openhands-app-28a9a41 \
  docker.openhands.dev/openhands/openhands:28a9a41

github-actions bot commented Feb 11, 2026

Coverage report

openhands/utils/llm.py: 113 statements

This report was generated by python-coverage-comment-action

@neubig neubig force-pushed the add-minimax-m2.5 branch 2 times, most recently from a0b6c7d to ce65293 on February 11, 2026 16:30
@neubig neubig marked this pull request as ready for review February 11, 2026 16:31
@neubig neubig requested a review from xingyaoww February 11, 2026 16:31
@neubig neubig force-pushed the add-minimax-m2.5 branch 2 times, most recently from d990079 to efdab3b on February 11, 2026 16:36
@all-hands-bot (Collaborator) left a comment

Code Review Summary

This PR has critical issues that must be addressed before merging.

🔴 Critical Issues

  1. Undisclosed Breaking Changes: The PR description claims to only add MiniMax-M2.5 support, but the changes actually remove dozens of existing models including:

    • OpenAI models: o3, o4-mini, gpt-4o, gpt-4o-mini, gpt-4.1, codex-mini-latest
    • Anthropic models: claude-3-5-sonnet-20240620, claude-3-5-sonnet-20241022, claude-3-7-sonnet, claude-opus-4-20250514, etc.
    • Gemini models: gemini-2.5-pro
    • Mistral models: devstral-small-2505, devstral-small-2507

    This is a major breaking change that will prevent users from selecting these models in the GUI.

  2. Inconsistent Naming Convention: The new model is named MiniMax-M2.5 (PascalCase with capital M), but all other models in the codebase use lowercase (e.g., gpt-5.2, claude-opus-4-5-20251101, gemini-3-pro-preview). This inconsistency needs verification with the LiteLLM API.

🟠 Important Issues

  1. Frontend-Backend Mismatch: Some models in VERIFIED_MODELS (e.g., deepseek-chat) don't have corresponding entries in the backend openhands_models list, so they can be shown in the GUI without a matching backend model.

Required Actions

Option A: Update this PR to ONLY add MiniMax-M2.5 without removing other models
Option B: Update the PR description to clearly explain why all these models are being removed and get explicit approval for this breaking change

Please also:

  • Verify the correct casing for MiniMax-M2.5 with the LiteLLM provider
  • Ensure frontend and backend model lists are consistent
  • Add tests to verify the new model works
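One way to act on the last two points is a small consistency check between the frontend and backend lists. This is a hypothetical sketch: both lists below are illustrative stand-ins for the real VERIFIED_OPENHANDS_MODELS and openhands_models, and only the "openhands/" prefix convention is taken from the PR.

```python
# Hypothetical frontend/backend consistency check. Both lists are illustrative
# stand-ins for VERIFIED_OPENHANDS_MODELS (frontend) and openhands_models
# (backend).
backend_models = [
    "openhands/claude-opus-4-5-20251101",
    "openhands/MiniMax-M2.5",
]
frontend_models = [
    "claude-opus-4-5-20251101",
    "MiniMax-M2.5",
    "deepseek-chat",  # present on the frontend only, per the review
]

def missing_from_backend(frontend: list[str], backend: list[str]) -> list[str]:
    """Frontend names with no matching 'openhands/<name>' backend entry."""
    backend_names = {m.removeprefix("openhands/") for m in backend}
    return [name for name in frontend if name not in backend_names]

missing_from_backend(frontend_models, backend_models)  # ["deepseek-chat"]
```

A check like this could run in CI so a frontend-only entry fails fast instead of surfacing as a broken model selection in the GUI.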

Comment thread: frontend/src/utils/verified-models.ts
Comment thread: openhands/utils/llm.py
'openhands/gpt-5-2025-08-07',
'openhands/gpt-5-mini-2025-08-07',
'openhands/claude-opus-4-20250514',
'openhands/claude-opus-4-5-20251101',
Collaborator commented:

🟠 Important: Same casing issue here - MiniMax-M2.5 uses PascalCase with capital "M", which is inconsistent with other models in the list that use lowercase.

Suggested change
'openhands/claude-opus-4-5-20251101',
'openhands/minimax-m2.5',

Verify the correct casing with the LiteLLM provider configuration.

Contributor (author) replied:

Verified: The LLM proxy at llm-proxy.app.all-hands.dev returns the model as MiniMax-M2.5 (with capital M's). This capitalization matches the actual API response.
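The casing verification above could also be automated against the provider's model listing. The payload shape below is an assumption modeled on an OpenAI-compatible /v1/models response, and the sample data is made up; it is not the proxy's actual output.

```python
# Hypothetical exact-casing check against a model listing. The payload is a
# made-up sample shaped like an OpenAI-compatible /v1/models response.
sample_models_response = {
    "data": [
        {"id": "claude-opus-4-5-20251101"},
        {"id": "MiniMax-M2.5"},
    ]
}

def exact_casing(model_id: str, models_response: dict) -> bool:
    """True only when model_id matches a listed id character-for-character."""
    return any(m["id"] == model_id for m in models_response["data"])

exact_casing("MiniMax-M2.5", sample_models_response)  # True
exact_casing("minimax-m2.5", sample_models_response)  # False (wrong casing)
```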

Commit message:

Add MiniMax-M2.5 and update model names to match LLM proxy:
- openhands/utils/llm.py: openhands_models list for backend API
- frontend/src/utils/verified-models.ts: All verified model lists

Featured models (matching LLM proxy at llm-proxy.app.all-hands.dev):
1. claude-opus-4-5-20251101
2. claude-sonnet-4-5-20250929
3. gpt-5.2-codex
4. gpt-5.2
5. MiniMax-M2.5
6. gemini-3-pro-preview
7. gemini-3-flash-preview
8. deepseek-chat
9. devstral-medium-2512
10. kimi-k2-0711-preview
11. qwen3-coder-480b

Changes:
- Added MiniMax-M2.5 (capitalization matches LLM proxy response)
- Removed older model versions from VERIFIED_MODELS (user requested cleanup)
- Kept older models in VERIFIED_OPENAI_MODELS/VERIFIED_ANTHROPIC_MODELS for
  LiteLLM provider detection compatibility

Co-authored-by: openhands <openhands@all-hands.dev>
@neubig neubig merged commit 736f5b2 into main Feb 11, 2026
26 checks passed
@neubig neubig deleted the add-minimax-m2.5 branch February 11, 2026 16:57
tofarr pushed a commit that referenced this pull request Feb 12, 2026
Co-authored-by: openhands <openhands@all-hands.dev>
4 participants