Enable memory saver for hybrid model #11974
Conversation
Summary of Changes (Gemini Code Assist): This pull request implements a memory saving mechanism for the hybrid model, specifically targeting the Mamba state and KV cache allocations, by introducing an enable_memory_saver flag that gates those allocations.
Code Review
This pull request enables the memory saver feature for hybrid models. The changes primarily involve propagating the enable_memory_saver flag through various components, including MambaPool, HybridReqToTokenPool, and HybridLinearKVPool; the flag is then used to wrap memory-intensive buffer allocations with the TorchMemorySaverAdapter, which is the intended behavior. The implementation appears correct and consistent. I've found one minor issue: some leftover commented-out code, quoted below, that should be cleaned up.
```python
# def _create_buffers(self):
#     with self.memory_saver_adapter.region(GPU_MEMORY_TYPE_KV_CACHE):
```
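For context, the pattern being added is sketched below. This is a minimal illustration, not the actual sglang implementation: `SimplifiedMambaPool`, `_NoopMemorySaverAdapter`, and the value of `GPU_MEMORY_TYPE_KV_CACHE` are hypothetical stand-ins; only the `region(...)` context-manager shape is taken from the diff quoted above.

```python
import contextlib

import torch

GPU_MEMORY_TYPE_KV_CACHE = "kv_cache"  # hypothetical tag value


class _NoopMemorySaverAdapter:
    """Toy stand-in for TorchMemorySaverAdapter. Assumption: the real adapter
    exposes a similar region() context manager and degrades to a no-op when
    memory saving is disabled."""

    def __init__(self, enable: bool):
        self.enable = enable

    @contextlib.contextmanager
    def region(self, tag: str):
        # The real adapter registers allocations made inside this block so
        # their GPU memory can later be released and restored; this stub just
        # runs the block unchanged.
        yield


class SimplifiedMambaPool:
    """Simplified stand-in for MambaPool, showing how an enable_memory_saver
    flag gates the buffer allocation."""

    def __init__(self, size: int, device: str, enable_memory_saver: bool):
        self.memory_saver_adapter = _NoopMemorySaverAdapter(enable_memory_saver)
        # Mirrors the `with ... region(...)` block in the diff above: buffers
        # created here are tracked by the memory saver when it is enabled.
        with self.memory_saver_adapter.region(GPU_MEMORY_TYPE_KV_CACHE):
            self.mamba_state = torch.zeros(size, device=device)


pool = SimplifiedMambaPool(size=16, device="cpu", enable_memory_saver=True)
```

Making the adapter a no-op when disabled means the flag can simply be threaded through the pool constructors (MambaPool, HybridReqToTokenPool, HybridLinearKVPool) without branching at each allocation site.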
fzyzcjy left a comment:
LGTM if it is just adding `with` blocks and the tests pass.
fzyzcjy left a comment:
LGTM after reading the new diff.
@ocss884 It's breaking the H200 test, please fix it: https://github.com/sgl-project/sglang/actions/runs/19082581304/job/54518240502
Motivation
Enable the memory saver feature for hybrid models, covering the Mamba state and KV cache buffer allocations.
Modifications
Propagate the enable_memory_saver flag through MambaPool, HybridReqToTokenPool, and HybridLinearKVPool, and wrap the memory-intensive buffer allocations with TorchMemorySaverAdapter regions.
Accuracy Tests
Benchmarking and Profiling
Checklist