[Ascend] Deepseek v3 and v3.2 support Context Parallelism #12207
Summary of Changes (Gemini Code Assist)
This pull request introduces context parallelism as a new dimension for distributed model execution and significantly refines the multi-token prediction (speculative decoding) pipeline, with a strong focus on optimizing performance and compatibility for Ascend NPUs. The changes ensure that the system can efficiently scale and manage complex inference workloads across various hardware configurations, particularly by adapting communication patterns and kernel implementations for NPU environments.
Code Review
This pull request introduces context parallelism (CP) support, primarily for Ascend NPUs, and includes several related changes for speculative decoding and disaggregated serving. The changes are extensive, touching parallel state management, communication operations, attention backends, and the scheduler. While the overall direction seems correct, there are a few areas that could be improved for clarity and correctness, particularly around process group usage in context parallelism and logic for handling different parallelism modes.
Notes:
Co-authors: @zhuyijie88 @Todobe @ichaoren @ZhengdQin
Motivation
The DeepSeek v3.2 (dsv3.2) LLM was recently released with long-context inference support. For long contexts, the prefill-stage inference time is bounded by dense computation: the computational load of the indexer module in the attention layer scales with the square of the context length S. To alleviate this, we introduce the Context Parallel (CP) method, which splits the context into multiple pieces so that each rank processes only part of it, reducing both TTFT computation and the activation memory footprint; with CP size P, the per-rank indexer cost drops roughly from O(S²) to O(S²/P). The modification has no influence on the MoE layers, which reuse the same strategy (EP) as dsv3. Attention weights are not partitioned under CP, so large models still risk OOM; in that case a hybrid parallelism of CP and TP may be required.
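To make the splitting idea concrete, here is a minimal, illustrative sketch (not the code from this PR; the helper name `shard_for_cp_rank` and the even-split policy are assumptions) of how a prompt's tokens could be divided into contiguous shards so that each CP rank prefills only its own piece:

```python
from typing import List

def shard_for_cp_rank(input_ids: List[int], cp_rank: int, cp_size: int) -> List[int]:
    """Return the contiguous slice of the prompt owned by this CP rank."""
    seq_len = len(input_ids)
    # Even split; any remainder is absorbed by the last rank.
    chunk = (seq_len + cp_size - 1) // cp_size
    start = cp_rank * chunk
    end = min(start + chunk, seq_len)
    return input_ids[start:end]

if __name__ == "__main__":
    prompt = list(range(65536))  # a 64K-token prompt
    shards = [shard_for_cp_rank(prompt, r, 8) for r in range(8)]
    assert sum(len(s) for s in shards) == len(prompt)
    # Each of the 8 ranks prefills ~8K tokens, so the quadratic indexer cost
    # seen by any single rank drops roughly by the CP degree.
```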
Modifications
- Add a `--cp-size` parameter in `server_args.py`.
- Update `actual_seq_qlen` of `sparse_flash_attention` according to CP (see the sketch after the figure).
- Adapt the `LogitsProcessor` to CP.

The figure below shows our parallelism strategy across the network.

Accuracy Tests
Launch the router:
We tested 200 questions from the GSM8K dataset with an 8CP-2TP configuration on one prefill node and a 16TP configuration on one decode node. The accuracy is 0.97, which indicates correctness.
Benchmarking and Profiling
Under a 32CP configuration across two prefill nodes and a 16TP configuration on one decode node, a 64K-context query takes 5 s, compared with 30 s under a 32TP configuration across the same two prefill nodes — roughly a 6x reduction in prefill latency.
More results for other CP configurations are listed below:
(Figure: benchmark results for additional CP configurations — image omitted.)
Checklist