Implement MTP for MiniCPM Model#6899
Conversation
Hello @yhyang201, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello! Gemini here, providing a summary of this pull request to help everyone quickly understand the changes.
This PR introduces support for the Multi-Token Prediction (MTP) variant of the MiniCPM model within the SGLang framework. This is a significant addition, likely enabling speculative decoding or similar techniques by allowing a smaller, faster MiniCPM model to propose multiple tokens ahead of the main model.
The core changes involve adding the specific MTP model implementation and updating the configuration to recognize MiniCPM as a potential draft model for MTP.
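To make the idea concrete, here is a minimal greedy sketch of the draft-then-verify loop that speculative decoding typically uses. Everything here is an illustrative assumption rather than SGLang's actual API: the names `draft_model`, `target_model`, and `num_draft_tokens` are hypothetical, and production systems verify proposals with rejection sampling rather than exact greedy agreement.

```python
import torch

@torch.no_grad()
def speculative_step(target_model, draft_model, input_ids, num_draft_tokens=4):
    """Propose tokens with the small draft model, then verify with the target."""
    # 1. The fast draft (MTP) model proposes several tokens ahead.
    draft_ids = input_ids
    proposals = []
    for _ in range(num_draft_tokens):
        logits = draft_model(draft_ids)  # assumed shape: (1, seq_len, vocab)
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # (1, 1)
        proposals.append(next_token)
        draft_ids = torch.cat([draft_ids, next_token], dim=-1)

    # 2. The large target model scores all proposed positions in one forward pass.
    target_logits = target_model(draft_ids)

    # 3. Accept the longest prefix of proposals the target model agrees with.
    accepted = []
    for i, token in enumerate(proposals):
        pos = input_ids.shape[-1] + i - 1  # logits at pos predict the token at pos + 1
        if target_logits[:, pos, :].argmax(dim=-1, keepdim=True).equal(token):
            accepted.append(token)
        else:
            break
    return accepted
```

The speedup comes from step 2: one target-model forward pass can validate several draft tokens at once, while a conventional decode loop pays one target forward per token.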
Highlights
- Configuration Update: Modifies the model configuration logic to correctly identify the MiniCPM model as an MTP draft model when specified.
- Base MiniCPM Model Adjustments: Makes minor adjustments to the base MiniCPM model class, including adding a method to retrieve embedding and language model head weights and removing an explicit setting for the rotary embedding cache precision.
Changelog
Click here to see the changelog
- `python/sglang/srt/configs/model_config.py`
  - Added logic to recognize `MiniCPMForCausalLM` as `MiniCPMMTP` when `is_draft_model` is true (lines 104-105); a sketch of this mapping follows the changelog.
- `python/sglang/srt/models/minicpm.py`
  - Removed the explicit setting of `rotary_emb.cos_sin_cache` to fp32 (lines 141-142).
  - Added a `get_embed_and_head` method to retrieve embedding and lm_head weights (lines 336-337).
- `python/sglang/srt/models/minicpm_mtp.py`
  - Added a new file implementing the `MiniCPMMTP` model.
  - Defined `MiniCPMMultiTokenPredictorLayer` for MTP-specific forward pass logic.
  - Defined the main `MiniCPMMTP` class wrapping the MTP layer, parallel LM head, and logits processor.
  - Implemented a `load_weights` method with specific handling for stacked parameters and skipping embedding weights.
  - Added `get_embed_and_head` and `set_embed_and_head` methods for weight sharing.
  - Set `EntryClass` to `MiniCPMMTP`.
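For illustration, the `model_config.py` change described above might look roughly like the sketch below. This is a hedged approximation assuming a HuggingFace-style config object with an `architectures` list; it is not the exact code from the PR.

```python
def resolve_architecture(hf_config, is_draft_model: bool) -> str:
    """Pick the model class name, remapping MiniCPM to its MTP variant for drafts."""
    arch = hf_config.architectures[0]
    if is_draft_model and arch == "MiniCPMForCausalLM":
        # Route draft-model loading to the new MTP implementation.
        arch = "MiniCPMMTP"
    return arch
```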
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Tokens fly so fast,
Predicting many at once,
Speedy MiniCPM.
Footnotes
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
The pull request implements MTP (Multi-Token Prediction) for the MiniCPM model. This introduces a new model architecture (MiniCPMMTP) and modifies the existing configuration to support it. The changes appear well-structured, but let's review them for potential improvements and correctness.
Summary of Findings
- Missing Comment: A comment is missing explaining why a line was commented out in `python/sglang/srt/models/minicpm.py`.
- Missing Docstring: A docstring is missing for the `get_embed_and_head` function in `python/sglang/srt/models/minicpm.py`; a hedged sketch of these accessors follows this list.
- Unnecessary Print Statements: Print statements in the `load_weights` function of `python/sglang/srt/models/minicpm_mtp.py` should be removed.
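For reference, here is a hedged sketch of what the `get_embed_and_head` and `set_embed_and_head` accessors might look like with docstrings added. The class skeleton and attribute names (`self.model.embed_tokens`, `self.lm_head`) are assumptions based on common SGLang model layouts, not the PR's exact code.

```python
import torch
import torch.nn as nn

class MiniCPMSketch(nn.Module):
    """Minimal stand-in for the real model class; skeleton only."""

    def __init__(self, vocab_size: int = 32, hidden: int = 8):
        super().__init__()
        self.model = nn.Module()
        self.model.embed_tokens = nn.Embedding(vocab_size, hidden)
        self.lm_head = nn.Linear(hidden, vocab_size, bias=False)

    def get_embed_and_head(self):
        """Return the (embedding, lm_head) weights for sharing with a draft model."""
        return self.model.embed_tokens.weight, self.lm_head.weight

    def set_embed_and_head(self, embed: torch.Tensor, head: torch.Tensor):
        """Tie this model's embedding and lm_head weights to external tensors."""
        # Remove the registered parameters so the shared tensors can be bound
        # in their place (assigning a plain tensor over a Parameter raises).
        del self.model.embed_tokens.weight
        del self.lm_head.weight
        self.model.embed_tokens.weight = embed
        self.lm_head.weight = head
```

With this pattern, a target model could share its weights into a draft via `draft.set_embed_and_head(*target.get_embed_and_head())`, so the MTP draft does not carry its own copies of the embedding or LM head.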
Merge Readiness
The pull request introduces MTP support for MiniCPM, which is a valuable addition. However, a few issues should be addressed before merging, particularly the missing comment and docstring and the commented-out line. I am unable to directly approve this pull request and recommend that others review and approve this code before merging. At a minimum, the HIGH severity comments should be addressed before merging.
Could you check with #8842: do we need a separate file for MTP, or do we not really need one?
I think #8842 is better; closing this issue for now.
Motivation
Implement MTP for MiniCPM Model.
Unit tests and documentation will be added after the relevant content has been released.
Many thanks to @mickqian and the @OpenBMB Team for their support.
Modifications
Checklist