[PD] Refactor parallel sizes and add pp support for mooncake#8571
zhyncs merged 8 commits into sgl-project:main
Conversation
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Summary of Changes
Hello @ShangmingCai, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the Mooncake disaggregation system by introducing comprehensive support for Pipeline Parallelism (PP) and undertaking a substantial refactoring of how parallel sizes are managed. The changes aim to provide more explicit control over tensor and data parallelism configurations, particularly for attention mechanisms, and enable multi-stage model execution across distributed environments.
Highlights
- Pipeline Parallelism (PP) Integration: Added comprehensive support for Pipeline Parallelism (PP) within the Mooncake disaggregation framework. This includes introducing `pp_rank` and `pp_size` parameters across key components like `KVArgs`, `MooncakePrefillManager`, and the bootstrap server, enabling multi-stage model execution.
- Refactored Parallel Size Management: Standardized the handling of parallel sizes by explicitly using `attn_tp_size` (attention tensor parallelism size) and `attn_dp_size` instead of implicitly derived values. This provides clearer definitions and usage of tensor and data parallelism within the system.
- Enhanced Data Parallelism (DP) Support: Improved the system's flexibility for data parallelism by removing previous constraints and allowing for more robust DP configurations, especially when `enable_dp_attention` is disabled. The system now differentiates between `system_dp_size` and `attn_dp_size`.
- Updated KV Cache Transfer Logic: Modified the KV cache transfer mechanism (`send_kvcache`, `send_kvcache_slice`) to correctly distribute and slice KV cache layers across the pipeline parallel stages, ensuring data consistency and efficient transfer in a PP setup for both MLA and non-MLA backends.
- Bootstrap Server Protocol Update: The bootstrap server's communication protocol now registers and retrieves prefill instances using the refined parallel size parameters (`attn_tp_size`, `attn_dp_size`, `pp_size`, `pp_rank`, `system_dp_size`, `system_dp_rank`), allowing for more granular and accurate coordination of distributed inference.
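To illustrate the KV cache transfer change described above, here is a minimal, hypothetical sketch of how KV-cache layers might be partitioned across pipeline-parallel stages. The field names mirror the parameters named in this PR (`pp_rank`, `pp_size`), but the dataclass and helper below are illustrative only; the actual sglang implementation differs.

```python
# Hypothetical sketch, not the sglang implementation: partitioning a
# model's transformer layers across pipeline-parallel (PP) stages so
# that each stage only sends/receives its own slice of the KV cache.
from dataclasses import dataclass


@dataclass
class KVArgs:
    # Field names follow the parameters described in this PR; the real
    # KVArgs struct in sglang carries many more fields.
    pp_rank: int
    pp_size: int
    num_layers: int  # total transformer layers in the model


def layer_range_for_stage(args: KVArgs) -> range:
    """Return the contiguous layer indices owned by this PP stage.

    Assumes a near-even split; the last stage absorbs any remainder.
    """
    per_stage = args.num_layers // args.pp_size
    start = args.pp_rank * per_stage
    if args.pp_rank == args.pp_size - 1:
        end = args.num_layers
    else:
        end = start + per_stage
    return range(start, end)


# Example: 32 layers over 4 stages -> stage 1 owns layers 8..15,
# so its send_kvcache would only transfer those layers' KV buffers.
print(list(layer_range_for_stage(KVArgs(pp_rank=1, pp_size=4, num_layers=32))))
```

A per-stage layer range like this is what lets `send_kvcache` and `send_kvcache_slice` stay consistent under PP: each prefill stage transfers exactly the layers it computed, and the union over all `pp_rank` values covers every layer exactly once.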
Code Review
This pull request refactors the parallel size handling and adds Pipeline Parallelism (PP) support for Mooncake. The changes are extensive and touch upon the core logic of disaggregation in SGLang. The refactoring to use attn_tp_size directly is a good simplification. The addition of PP support is a significant feature enhancement.
I've identified a few critical and high-severity issues that need to be addressed to ensure correctness and prevent runtime errors. These include a NameError due to an undefined variable in a log message, an incorrect return type hint and value in a function, and a missing check for a required parameter in an API endpoint.
Once these issues are resolved, the PR should be in good shape.
fzyzcjy left a comment:
(only review the dp_size related part)
LGTM
Motivation
CC: @fzyzcjy
Modifications
Accuracy Test
Benchmark & Profiling
Checklist