
[PD] Refactor parallel sizes and add pp support for mooncake#8571

Merged
zhyncs merged 8 commits into sgl-project:main from kvcache-ai:support_pd_pp on Aug 5, 2025

Conversation

@ShangmingCai (Collaborator)

Motivation

  1. Support PD + PP for mooncake
  2. Refactor parallel sizes
  • Use attn_tp_size directly instead of using tp_size // dp_size (see the sketch after this list)
  • Support dp when enable_dp_attention is false
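
For context, here is a minimal sketch of the intended semantics. The names ParallelSizes and derive_attn_sizes are hypothetical; the actual refactor lives in the disaggregation code and may differ in detail:

```python
from dataclasses import dataclass


@dataclass
class ParallelSizes:
    """Hypothetical container: the PR's idea is to carry attn_tp_size around
    explicitly instead of re-deriving it as tp_size // dp_size at each call site."""
    attn_tp_size: int    # tensor-parallel size of the attention layers
    attn_dp_size: int    # data-parallel size of the attention layers
    pp_size: int         # pipeline-parallel size
    system_dp_size: int  # data-parallel size of the whole system


def derive_attn_sizes(tp_size: int, dp_size: int, enable_dp_attention: bool) -> tuple[int, int]:
    """Return (attn_tp_size, attn_dp_size) under the assumed semantics:
    with DP attention enabled the TP group is split across DP ranks; with it
    disabled, attention uses the full TP group and attn_dp_size stays 1 even
    though system_dp_size may still be greater than 1."""
    if enable_dp_attention:
        return tp_size // dp_size, dp_size
    return tp_size, 1


# Example: tp=8, dp=2 with DP attention disabled -> attn_tp_size=8, attn_dp_size=1.
print(derive_attn_sizes(8, 2, enable_dp_attention=False))  # (8, 1)
```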

CC: @fzyzcjy

Modifications

Accuracy Test

Benchmark & Profiling

Checklist

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
@gemini-code-assist (Bot, Contributor) left a comment

Summary of Changes

Hello @ShangmingCai, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the Mooncake disaggregation system by introducing comprehensive support for Pipeline Parallelism (PP) and undertaking a substantial refactoring of how parallel sizes are managed. The changes aim to provide more explicit control over tensor and data parallelism configurations, particularly for attention mechanisms, and enable multi-stage model execution across distributed environments.

Highlights

  • Pipeline Parallelism (PP) Integration: Added comprehensive support for Pipeline Parallelism (PP) within the Mooncake disaggregation framework. This includes introducing pp_rank and pp_size parameters across key components like KVArgs, MooncakePrefillManager, and the bootstrap server, enabling multi-stage model execution.
  • Refactored Parallel Size Management: Standardized the handling of parallel sizes by explicitly using attn_tp_size (attention tensor parallelism size) and attn_dp_size instead of implicitly derived values. This provides clearer definitions and usage of tensor and data parallelism within the system.
  • Enhanced Data Parallelism (DP) Support: Improved the system's flexibility for data parallelism by removing previous constraints and allowing for more robust DP configurations, especially when enable_dp_attention is disabled. The system now differentiates between system_dp_size and attn_dp_size.
  • Updated KV Cache Transfer Logic: Modified the KV cache transfer mechanism (send_kvcache, send_kvcache_slice) to correctly distribute and slice KV cache layers across different pipeline parallel stages, ensuring data consistency and efficient transfer in a PP setup for both MLA and non-MLA backends (see the sketch after this list).
  • Bootstrap Server Protocol Update: The bootstrap server's communication protocol has been updated to register and retrieve prefill instances using the refined parallel size parameters (attn_tp_size, attn_dp_size, pp_size, pp_rank, system_dp_size, system_dp_rank), allowing for more granular and accurate coordination of distributed inference.
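
To make the KV-cache transfer highlight above concrete, here is a minimal sketch with a hypothetical helper name, assuming layers divide evenly across pipeline stages; the real partitioning in conn.py may differ:

```python
def layer_range_for_pp_rank(num_layers: int, pp_size: int, pp_rank: int) -> range:
    # Hypothetical helper: the contiguous slice of layers owned by one pipeline
    # stage, assuming num_layers divides evenly by pp_size. The actual logic in
    # send_kvcache / send_kvcache_slice may partition layers differently.
    per_stage = num_layers // pp_size
    start = pp_rank * per_stage
    return range(start, start + per_stage)


# Example: 32 layers over pp_size=4 -> stage 2 owns layers 16..23, so its
# send_kvcache call only needs to transfer those layers' KV buffers.
print(list(layer_range_for_pp_rank(32, 4, 2)))  # [16, 17, ..., 23]
```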

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
@gemini-code-assist (Bot, Contributor) left a comment

Code Review

This pull request refactors the parallel size handling and adds Pipeline Parallelism (PP) support for Mooncake. The changes are extensive and touch upon the core logic of disaggregation in SGLang. The refactoring to use attn_tp_size directly is a good simplification. The addition of PP support is a significant feature enhancement.

I've identified a few critical and high-severity issues that need to be addressed to ensure correctness and prevent runtime errors. These include a NameError due to an undefined variable in a log message, an incorrect return type hint and value in a function, and a missing check for a required parameter in an API endpoint.

Once these issues are resolved, the PR should be in good shape.
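
As a purely illustrative sketch of the last category of issue (not the actual sglang handler; the field names are taken from the registration parameters listed in the summary above), the kind of guard being requested looks like this:

```python
def handle_register(params: dict) -> tuple[int, dict]:
    # Illustrative only: reject a registration request that omits a required
    # field up front, instead of letting a KeyError surface later in the handler.
    required = ("attn_tp_size", "attn_dp_size", "pp_size", "pp_rank")
    missing = [key for key in required if key not in params]
    if missing:
        return 400, {"error": f"missing required parameter(s): {', '.join(missing)}"}
    return 200, {"status": "registered"}


# Example: a request missing pp_rank is rejected with a 400 instead of crashing.
print(handle_register({"attn_tp_size": 8, "attn_dp_size": 1, "pp_size": 4}))
```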

Comment thread python/sglang/srt/disaggregation/mooncake/conn.py
Comment thread python/sglang/srt/disaggregation/mooncake/conn.py Outdated
Comment thread python/sglang/srt/disaggregation/mooncake/conn.py Outdated
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
@fzyzcjy (Collaborator) left a comment

(only reviewed the dp_size-related part)

Comment thread python/sglang/srt/disaggregation/mooncake/conn.py Outdated
Comment thread python/sglang/srt/disaggregation/mooncake/conn.py Outdated
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
@whybeyoung (Collaborator)

LGTM

Comment thread python/sglang/srt/disaggregation/base/conn.py
@ShangmingCai ShangmingCai enabled auto-merge (squash) August 1, 2025 04:27
@ShangmingCai ShangmingCai disabled auto-merge August 1, 2025 04:28
@zhyncs zhyncs merged commit d98a491 into sgl-project:main Aug 5, 2025
239 of 276 checks passed

narutolhy pushed a commit to narutolhy/sglang that referenced this pull request Aug 17, 2025
…ject#8571)

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
MahmoudAshraf97 pushed a commit to MahmoudAshraf97/sglang that referenced this pull request Sep 8, 2025
…ject#8571)

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>

Labels: none · Projects: none · 5 participants