[Ascend] Deepseek v3 and v3.2 support Context Parallelism #12207

Closed
zhuyijie88 wants to merge 1 commit into sgl-project:main from zhuyijie88:dsv32_cp

Conversation

Contributor

@zhuyijie88 zhuyijie88 commented Oct 27, 2025

Co-authors: @zhuyijie88 @Todobe @ichaoren @ZhengdQin

Motivation

DeepSeek V3.2 was recently released with support for long-context inference. With long contexts, prefill-stage inference time is bound by dense computation: the compute load of the indexer module in the attention layer scales with the square of the context length S. To alleviate this, we introduce the Context Parallelism (CP) method, which splits the context into multiple pieces so that each rank processes only part of it, reducing both TTFT computation and the activation memory footprint. The modification has no effect on the MoE layers, which reuse the same strategy (EP) as DeepSeek V3. Attention weights are not partitioned under CP, so pure CP risks OOM; in that case a hybrid parallelism of CP and TP may be required.
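As a back-of-envelope illustration of the quadratic scaling (our illustrative numbers, not measurements from this PR):

# The indexer scores every query token against every context token, so its
# cost grows as S^2; splitting the S query tokens across cp ranks cuts the
# per-rank cost by a factor of cp.
S = 64 * 1024                   # context length in tokens
cp = 8                          # context-parallel degree
full_cost = S * S               # pairwise scores on a single rank
per_rank_cost = (S // cp) * S   # each rank scores only its S/cp query shard
print(full_cost / per_rank_cost)  # -> 8.0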

Modifications

  • CP reuses the DP communication domain, so only one of CP or DP can be enabled during prefill.
  • Add a --cp-size parameter in server_args.py.
  • Use the full KV buffer: before saving the KV cache, an all_gather operator is required to collect it from all CP ranks (see the sketch after this list).
  • In the PD-disaggregation case, recompute the mapping from prefill ranks to decode ranks used to transfer the KV data.
  • The actual_seq_qlen input of sparse_flash_attention needs to be updated according to CP.
  • An all_gather operator is needed in the logit computation stage of LogitsProcessor.
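A minimal sketch of the KV-gather step, assuming a torch.distributed process group for CP; the function and variable names are illustrative, not the actual sglang code paths:

import torch
import torch.distributed as dist

def gather_kv_across_cp(kv_shard: torch.Tensor, cp_group) -> torch.Tensor:
    # kv_shard: [local_seq_len, num_kv_heads, head_dim], the KV computed for
    # this rank's context shard (assumes equal shard sizes across CP ranks).
    world_size = dist.get_world_size(cp_group)
    shards = [torch.empty_like(kv_shard) for _ in range(world_size)]
    dist.all_gather(shards, kv_shard, group=cp_group)
    # Concatenate along the sequence dimension to rebuild the full-context
    # KV buffer before it is written into the KV cache.
    return torch.cat(shards, dim=0)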

The figure below shows our parallelism strategy across the network.
[figure: parallelism strategy across the network]

Accuracy Tests

# PREFILL LAUNCH COMMAND
IPs=('x.x.x.x')  # ONE PREFILL NODE
export p0=x.x.x.x  # IP
export d0=x.x.x.x  # IP
nnodes=${#IPs[@]}
tp_size=`expr 16 \* ${nnodes}`
export ASCEND_MF_STORE_URL=tcp://${p0}:24667

python3 -m sglang.launch_server --model-path ${MODEL_PATH} \
--tp $tp_size \
--cp-size 8 --mem-fraction-static 0.9 \
--max-total-tokens 10240 \
--trust-remote-code \
--attention-backend ascend \
--device npu \
--watchdog-timeout 9000 \
--host 0.0.0.0 --port 30002 \
--disable-radix-cache \
--max-running-requests 16 \
--disable-overlap-schedule \
--nnodes $nnodes --node-rank $VC_TASK_INDEX \
--disable-cuda-graph \
--skip-server-warmup \
--quantization w8a8_int8 \
--disaggregation-transfer-backend ascend \
--disaggregation-mode prefill \
--moe-a2a-backend deepep --deepep-mode auto \
--context-length 66000 --chunked-prefill-size 327680 --max-prefill-tokens 66000 \
--dist-init-addr ${p0}:10000 2>&1 | tee launch.log &
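
# NOTE (our reading of the flags, not stated explicitly in the PR text): with
# --tp 16 and --cp-size 8 on one 16-NPU prefill node, attention runs as 8-way
# CP x 2-way TP, matching the 8CP-2TP configuration used in the accuracy test.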
# DECODE LAUNCH COMMAND
IPs=('x.x.x.x')  # ONE DECODE NODE
export p0=x.x.x.x  # IP
export d0=x.x.x.x  # IP
nnodes=${#IPs[@]}
tp_size=`expr 16 \* ${nnodes}`
export ASCEND_MF_STORE_URL=tcp://${p0}:24667

python3 -m sglang.launch_server --model-path ${MODEL_PATH} \
--tp $tp_size \
--mem-fraction-static 0.8 \
--max-total-tokens 10240 \
--trust-remote-code \
--attention-backend ascend \
--device npu \
--watchdog-timeout 9000 \
--host 0.0.0.0 --port 30002 \
--disable-radix-cache \
--max-running-requests 16 \
--disable-overlap-schedule \
--nnodes $nnodes --node-rank $VC_TASK_INDEX \
--disable-cuda-graph \
--disaggregation-transfer-backend ascend \
--disaggregation-mode decode \
--context-length 66000 --chunked-prefill-size 327680 --max-prefill-tokens 66000 \
--quantization w8a8_int8 | tee launch.log &

Launch the router:

# on the prefill node
export p0=x.x.x.x  # IP
export d0=x.x.x.x  # IP
python3 -m sglang_router.launch_router --decode http://${d0}:30002 --prefill http://${p0}:30002 --pd-disaggregation --policy cache_aware --mini-lb --host 0.0.0.0 --port 8000

We test 200 questions from the GSM8K dataset with an 8CP-2TP configuration on one prefill node and a 16TP configuration on one decode node. The accuracy is 0.97, which indicates correctness.

cd python/sglang/test
python3 few_shot_gsm8k.py --data-path "test.jsonl.txt" --parallel 16 --num-questions 200 --num-shots 5 --port 8000 --temperature 0 | tee answer.log

Benchmarking and Profiling

Under a 32CP configuration on two prefill nodes with a 16TP configuration on one decode node, a 64K-context query costs 5 s, compared to 30 s under a 32TP configuration on the same two prefill nodes.

More results for other CP configurations are listed below:
[figure: results for additional CP configurations]


@gemini-code-assist
Contributor

Summary of Changes

Hello @zhuyijie88, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces context parallelism as a new dimension for distributed model execution and significantly refines the multi-token prediction (speculative decoding) pipeline, with a strong focus on optimizing performance and compatibility for Ascend NPUs. The changes ensure that the system can efficiently scale and manage complex inference workloads across various hardware configurations, particularly by adapting communication patterns and kernel implementations for NPU environments.

Highlights

  • Context Parallelism (CP) Introduction: Introduced a new distributed group for Context Parallelism (CP) with associated functions to manage ranks and world sizes (a sketch follows this list). This new parallelism dimension is integrated across various components, including disaggregation managers, bootstrap servers, schedulers, and model runners.
  • Enhanced Multi-Token Prediction (MTP) for Ascend NPUs: Significantly enhanced multi-token prediction (speculative decoding) capabilities, especially for Ascend NPUs. This includes new NPU-specific attention backends, a dedicated NPU graph runner for draft extensions, and native Python/PyTorch implementations for tree building and greedy verification kernels to optimize performance on Ascend hardware.
  • Distributed Communication and Routing Updates: Modified distributed communication logic to account for CP, affecting how prefill parallel information is handled, attention tensor parallelism (TP) sizes are calculated, and requests are routed. The data parallelism (DP) size calculation now considers CP size, and data scattering/gathering is adjusted for CP groups.
  • NPU Compatibility and Optimization: Implemented extensive NPU compatibility changes, including conditional use of torch.npu specific functions for memory management, graph operations, and cache location assignments. Several torch.compile decorators are conditionally disabled for NPU devices in speculative decoding utilities to ensure compatibility.
  • Speculative Decoding State Management: Improved the management of speculative decoding states and buffers, including refined logic for handling idle batches, future indices, and sequence length adjustments across different forward modes (e.g., draft_extend_v2, target_verify).
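To make the first highlight concrete, a CP process group with rank/world-size helpers might look roughly like the following (hypothetical names that mirror how TP/DP groups are commonly managed; not the actual sglang API):

import torch.distributed as dist

_CP_GROUP = None  # module-level handle, set once during distributed init

def init_cp_group(cp_ranks):
    # cp_ranks: the global ranks that belong to this CP group.
    global _CP_GROUP
    _CP_GROUP = dist.new_group(ranks=cp_ranks)

def get_cp_world_size():
    return dist.get_world_size(_CP_GROUP) if _CP_GROUP is not None else 1

def get_cp_rank():
    return dist.get_rank(_CP_GROUP) if _CP_GROUP is not None else 0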

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces context parallelism (CP) support, primarily for Ascend NPUs, and includes several related changes for speculative decoding and disaggregated serving. The changes are extensive, touching parallel state management, communication operations, attention backends, and the scheduler. While the overall direction seems correct, there are a few areas that could be improved for clarity and correctness, particularly around process group usage in context parallelism and logic for handling different parallelism modes.

Comment thread python/sglang/srt/disaggregation/mooncake/conn.py
Comment thread python/sglang/srt/disaggregation/prefill.py
Comment thread python/sglang/srt/models/deepseek_v2.py Outdated
@ping1jing2 ping1jing2 marked this pull request as draft October 27, 2025 14:10
@ping1jing2 ping1jing2 changed the title from "Dsv32 cp and mtp" to "[Ascend]Dsv32 cp and mtp" Oct 27, 2025
@zhuyijie88 zhuyijie88 force-pushed the dsv32_cp branch 2 times, most recently from 57b135e to 535eb68 Compare October 28, 2025 08:56
@zhuyijie88 zhuyijie88 changed the title from "[Ascend]Dsv32 cp and mtp" to "[Ascend] Deepseek v3 and v3.2 support Context Parallelism and MTP" Oct 29, 2025
@zhuyijie88 zhuyijie88 force-pushed the dsv32_cp branch 2 times, most recently from dcc612c to 1b2138c Compare October 31, 2025 03:53
Member

@sglang-bot sglang-bot left a comment


Important parallelism algorithm design:

  • Need a figure to illustrate the parallelism strategy, the required communication volume/primitives, and scalability (CP1/CP2/CP8/.../CP32).

@zhuyijie88 zhuyijie88 changed the title from "[Ascend] Deepseek v3 and v3.2 support Context Parallelism and MTP" to "[Ascend] Deepseek v3 and v3.2 support Context Parallelism" Nov 3, 2025
@zhuyijie88 zhuyijie88 marked this pull request as ready for review November 7, 2025 04:08
@zhuyijie88 zhuyijie88 requested a review from Fridge003 as a code owner November 7, 2025 04:08
@sglang-bot
Member

sglang-bot commented Nov 9, 2025

Notes:

  • This is only for prefill instances and can only be run with PD disaggregation
  • Is this useful for V3 as well?
  • Why is it better than TP? Can you compare the breakdown cost (communication cost, compute cost) x (TP, CP) x (v3.1, v3.2)?
