The DeepSeek-V3.2 model family equips DeepSeek-V3.1-Terminus with DeepSeek Sparse Attention (DSA) through continued training. DSA is a fine-grained sparse attention mechanism powered by a lightning indexer, and with it DeepSeek-V3.2 achieves efficiency improvements in long-context scenarios. Note: This document was originally written for the DeepSeek-V3.2-Exp model. Usage of DeepSeek-V3.2 and DeepSeek-V3.2-Speciale is the same as DeepSeek-V3.2-Exp except for the tool call parser. The GLM-5 model also applies the DSA (DeepSeek Sparse Attention) structure, so it can share most of the usage described here, except for the reasoning parser and tool call parser.
Installation
Docker
Command
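A minimal sketch of a Docker launch, assuming the `lmsysorg/sglang:latest` image and a standard HuggingFace cache mount (both are assumptions; adjust for your environment):

```bash
docker run --gpus all --shm-size 32g --ipc=host \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model deepseek-ai/DeepSeek-V3.2-Exp \
    --tp 8 --host 0.0.0.0 --port 30000
```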
Build From Source
Command
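A sketch of the usual from-source installation for SGLang (branch and extras may differ for your setup):

```bash
git clone https://github.com/sgl-project/sglang.git
cd sglang
pip install --upgrade pip
# Install the python package in editable mode with all extras
pip install -e "python[all]"
```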
Launch DeepSeek V3.2/GLM-5 with SGLang
To serve DeepSeek-V3.2-Exp on 8xH200/B200 GPUs:
Command
To serve GLM-5, replace the `--model` argument with `zai-org/GLM-5-FP8`.
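A representative launch command, assuming tensor parallelism across the 8 GPUs (host/port values are illustrative):

```bash
# Serve DeepSeek-V3.2-Exp on 8 GPUs; swap --model for zai-org/GLM-5-FP8 for GLM-5
python3 -m sglang.launch_server \
  --model deepseek-ai/DeepSeek-V3.2-Exp \
  --tp 8 \
  --host 0.0.0.0 --port 30000
```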
Configuration Tips
- DP Attention: To enable DP attention, include `--enable-dp-attention --dp <dp-size>` in the command. DP attention is better for large-concurrency scenarios.
- TP Attention: Launching with TP attention is also supported. TP attention is better for low-latency scenarios.
- Short-sequence MHA prefill (adaptive): For short prefill sequences (default threshold: 2048 tokens), the NSA backend uses standard MHA automatically (no extra flags). On H200 (SM90) this path uses the FlashAttention variable-length kernel; on B200 (SM100) it uses TRT-LLM ragged MHA. MHA uses `MHA_ONE_SHOT` for best performance, which computes multi-head attention over all tokens (both the cached prefix and newly extended tokens) in a single kernel invocation, avoiding the overhead of chunked KV cache processing. This achieves optimal throughput for short sequences whose total length fits within the chunk capacity limit.
- MHA prefill threshold relaxation: To apply MHA attention to requests longer than 2048 tokens, set the environment variable `SGLANG_NSA_PREFILL_DENSE_ATTN_KV_LEN_THRESHOLD` to a value larger than 2048. As the threshold grows larger, prefill performance can improve, but at the cost of a potential accuracy drop.
- Choices of Attention Kernels: The attention backend is automatically set to the `nsa` attention backend for the DeepSeek V3.2 model. In this backend, different kernels for sparse prefill/decode are implemented, which can be specified via the `--nsa-prefill-backend` and `--nsa-decode-backend` server arguments (a launch sketch follows this list). The choices of NSA prefill/decode attention kernels include:
  - `flashmla_sparse`: the `flash_mla_sparse_fwd` kernel from the `flash_mla` library. Runs on both Hopper and Blackwell GPUs. Requires bf16 q and kv inputs.
  - `flashmla_kv`: the `flash_mla_with_kvcache` kernel from the `flash_mla` library. Runs on both Hopper and Blackwell GPUs. Requires bf16 q and fp8 k_cache inputs.
  - `flashmla_auto`: automatically selects either the `flashmla_sparse` or `flashmla_kv` kernel for prefill based on KV cache dtype, hardware, and heuristics. With a BF16 KV cache, `flashmla_sparse` is always used on both Hopper and Blackwell. With an FP8 KV cache, Hopper (SM90) unconditionally uses `flashmla_kv`, while Blackwell (SM100) uses `flashmla_sparse` when `total_kv_tokens < total_q_tokens * 512` and otherwise falls back to `flashmla_kv`. The heuristics may need to be tuned if the performance of either kernel changes significantly.
  - `fa3`: the `flash_attn_with_kvcache` kernel from the `flash_attn` library. Runs only on Hopper GPUs. Requires bf16 q and kv inputs.
  - `tilelang`: a `tilelang` implementation that can run on GPU, HPU, and NPU.
  - `aiter`: the Aiter kernel on AMD HPUs. Can only be used as a decode kernel.
  - `trtllm`: the `trtllm-mla` sparse kernel from the flashinfer library. Runs only on Blackwell GPUs. Requires q, k, and v to be uniformly in bf16 or fp8_e4m3 format.
- Based on performance benchmarks, the default configuration of DSA kernels on Hopper and Blackwell is set as follows:
  - Bfloat16 KV cache: on Hopper, `flashmla_sparse` prefill attention and `fa3` decode attention; on Blackwell, `flashmla_sparse` prefill attention and `trtllm` decode attention.
  - Float8_e4m3fn KV cache: on Hopper, `flashmla_kv` prefill attention and `flashmla_kv` decode attention; on Blackwell, `trtllm` prefill attention and `trtllm` decode attention.
- Index Cache: Introduced in this paper, IndexCache improves speed by reusing the indexer results across different layers, at the cost of only negligible accuracy loss. For the GLM-5 model, we recommend appending `--json-model-override-args '{"index_topk_pattern": "FFSFSSSFSSFFFSSSFFFSFSSSSSSFFSFFSFFSSFFFFFFSFFFFFSFFSSSSSSFSFFFSFSSSFSFFSFFSSS"}'` to the command for a better tradeoff between speedup and accuracy.
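As a sketch, the kernel-selection flags from the list above can be passed at launch; the backend values shown here come from the choices listed, while the model and parallel sizes are illustrative:

```bash
# Explicitly select NSA prefill/decode kernels
python3 -m sglang.launch_server \
  --model deepseek-ai/DeepSeek-V3.2-Exp \
  --tp 8 \
  --nsa-prefill-backend flashmla_auto \
  --nsa-decode-backend fa3
```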
Multi-token Prediction
SGLang implements Multi-Token Prediction (MTP) for DeepSeek V3.2 based on EAGLE speculative decoding. With this optimization, decoding speed can be improved significantly at small batch sizes. Please refer to this PR for more information. Example usage with DP Attention:
Command
Command
- The best configuration for `--speculative-num-steps`, `--speculative-eagle-topk`, and `--speculative-num-draft-tokens` can be searched with the bench_speculative.py script for a given batch size. The minimum configuration is `--speculative-num-steps 1 --speculative-eagle-topk 1 --speculative-num-draft-tokens 2`, which can achieve speedup for larger batch sizes (see the sketch after this list).
- The default value of `--max-running-requests` is set to `48` for MTP. For larger batch sizes, this value should be increased beyond the default.
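A minimal sketch combining DP attention with the minimum speculative configuration above. The `--speculative-algorithm EAGLE` flag and the parallel sizes are assumptions based on SGLang's EAGLE-based MTP support; tune the three speculative flags with bench_speculative.py:

```bash
python3 -m sglang.launch_server \
  --model deepseek-ai/DeepSeek-V3.2-Exp \
  --tp 8 --enable-dp-attention --dp 8 \
  --speculative-algorithm EAGLE \
  --speculative-num-steps 1 \
  --speculative-eagle-topk 1 \
  --speculative-num-draft-tokens 2
```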
Function Calling and Reasoning Parser
The usage of function calling and the reasoning parser is the same as for DeepSeek V3.1. Please refer to the Reasoning Parser and Tool Parser documents. To launch DeepSeek-V3.2-Exp with function calling and the reasoning parser:
Note: It is recommended to specify the chat template, and make sure you run the command from sglang's root directory.
Command
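A sketch of such a launch. The parser names below are assumptions carried over from the DeepSeek V3.1 usage referenced above, and the chat-template path is a placeholder; consult the Reasoning Parser and Tool Parser documents for the exact values supported by your SGLang version:

```bash
# Parser names and template path are assumptions; verify against the docs
python3 -m sglang.launch_server \
  --model deepseek-ai/DeepSeek-V3.2-Exp \
  --tp 8 \
  --tool-call-parser deepseekv31 \
  --reasoning-parser deepseek-v3 \
  --chat-template <path-to-chat-template>
```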
DeepSeek-V3.2 with function calling and reasoning parser:
Command
DeepSeek-V3.2-Speciale does not support tool calling, so it can only be launched with the reasoning parser:
Command
GLM-5 with function calling and reasoning parser:
Command
NVFP4 Checkpoint
To launch a DeepSeek V3.2 NVFP4 checkpoint on Blackwell devices, specify the quantization method as `modelopt_fp4`, and the MoE runner backend as one of `flashinfer_trtllm` (recommended), `flashinfer_cutlass`, or `flashinfer_cutedsl`. All other usage (parallelism, reasoning parser, etc.) is the same as for the FP8 checkpoint.
An example launching command can be:
Command
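A sketch of such a launch; the quantization and MoE runner backend values come from the paragraph above, while the checkpoint path and parallel size are placeholders:

```bash
python3 -m sglang.launch_server \
  --model <nvfp4-checkpoint> \
  --tp 8 \
  --quantization modelopt_fp4 \
  --moe-runner-backend flashinfer_trtllm
```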
PD Disaggregation
Prefill Command:
Command
Command
Command
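As a rough sketch of the prefill/decode pair: `--disaggregation-mode` is SGLang's flag for PD disaggregation, while the remaining flags and ports here are assumptions. A router or load balancer in front of the two servers is also required:

```bash
# Prefill server
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp \
  --tp 8 --disaggregation-mode prefill --port 30000
# Decode server
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp \
  --tp 8 --disaggregation-mode decode --port 30001
```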
Benchmarking Results
Accuracy Test with gsm8k
A simple accuracy benchmark can be run with the gsm8k dataset:
Command
Command
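A representative invocation of the gsm8k benchmark script shipped in the sglang repository (the flag values are illustrative; adjust parallelism and port to your setup):

```bash
python3 benchmark/gsm8k/bench_sglang.py \
  --num-questions 1319 --num-shots 8 --parallel 128
```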
The benchmark can also be run with `--num-shots 20`; the results are very close to the 8-shot results:
Output
Accuracy Test with gpqa-diamond
A long-context accuracy benchmark can be run on the GPQA-Diamond dataset with long output tokens and thinking enabled:
Command
Command
Command
Accuracy Test with aime 2025
Prepare the environment by installing NeMo-Skills in the Docker container or in your own virtual environment:
Output
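A minimal installation sketch; this install method is an assumption, so check the NeMo-Skills repository for the recommended procedure:

```bash
pip install git+https://github.com/NVIDIA/NeMo-Skills.git
```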
DeepSeek-V3.2 and DeepSeek-V3.2-Speciale:
Output
Output
| evaluation_mode | num_entries | avg_tokens | gen_seconds | symbolic_correct | no_answer |
|---|---|---|---|---|---|
| pass@1[avg-of-4] | 30 | 15040 | 1673 | 87.50% ± 1.67% | 0.00% |
| majority@4 | 30 | 15040 | 1673 | 90.00% | 0.00% |
| pass@4 | 30 | 15040 | 1673 | 90.00% | 0.00% |
| evaluation_mode | num_entries | avg_tokens | gen_seconds | symbolic_correct | no_answer |
|---|---|---|---|---|---|
| pass@1[avg-of-4] | 30 | 13550 | 1632 | 92.50% ± 1.67% | 0.00% |
| majority@4 | 30 | 13550 | 1632 | 94.71% | 0.00% |
| pass@4 | 30 | 13550 | 1632 | 96.67% | 0.00% |
| evaluation_mode | num_entries | avg_tokens | gen_seconds | symbolic_correct | no_answer |
|---|---|---|---|---|---|
| pass@1[avg-of-4] | 30 | 24155 | 3583 | 95.00% ± 1.92% | 0.00% |
| majority@4 | 30 | 24155 | 3583 | 95.83% | 0.00% |
| pass@4 | 30 | 24155 | 3583 | 100.00% | 0.00% |
DSA long sequence context parallel optimization (experimental)
Note: This feature is only verified on Hopper machines. For context parallel in the DeepSeek V3.2 model, we provide two different modes of splitting tokens, which can be controlled with the `--nsa-prefill-cp-mode` argument.
In sequence splitting
The first mode can be enabled with `--nsa-prefill-cp-mode in-seq-split`. This mode implements context parallel for DSA by splitting the sequence uniformly across context parallel ranks. At the attention stage, each CP rank computes the indexer results for its shard of the sequence and collects the whole KV cache through an all-gather operator. An `attn_cp_size` communication group is added for context parallel.
Note that the in-sequence splitting mode has the following restrictions:
- The batch size is restricted to 1 for prefill batches.
- It requires `moe_dense_tp_size=1` and `moe_a2a_backend="deepep"`.
- To ensure `cp_size > 1`, the passed-in `tp_size` must be larger than `dp_size`.
Command
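A sketch of an in-sequence-split launch. Per the restrictions above, the DeepEP a2a backend and `moe_dense_tp_size=1` are required, and `tp_size` must exceed `dp_size` so that `cp_size > 1`; the exact sizes here are assumptions:

```bash
python3 -m sglang.launch_server \
  --model deepseek-ai/DeepSeek-V3.2-Exp \
  --tp 8 --enable-dp-attention --dp 4 \
  --nsa-prefill-cp-mode in-seq-split \
  --moe-a2a-backend deepep \
  --moe-dense-tp-size 1
```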
Round robin splitting (default setting)
This mode can be enabled by specifying `--nsa-prefill-cp-mode round-robin-split`, which distributes tokens across ranks based on `token_idx % cp_size`.
Compared to the in-sequence splitting method, this mode additionally supports the fused MoE backend (which may deliver better performance than DeepEP in single-machine scenarios), FP8 KV cache, and multi-batch prefill inference. However, it cannot be enabled together with DP attention.
For more details, please refer to PR https://github.com/sgl-project/sglang/pull/13959.
Example usage:
Command
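A sketch of a round-robin launch (the default CP mode). DP attention stays disabled in this mode, and the fused MoE backend is used by default, so no MoE flags are needed; the model and parallel size are illustrative:

```bash
python3 -m sglang.launch_server \
  --model deepseek-ai/DeepSeek-V3.2-Exp \
  --tp 8 \
  --nsa-prefill-cp-mode round-robin-split
```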
Pipeline Parallel + Context Parallel (PP + CP)
This mode combines Pipeline Parallelism (PP) and Context Parallelism (CP) to scale across multiple nodes, which can achieve better throughput and Time To First Token (TTFT). Note that this method has only been tested on H20 96G.
Standard Usage
To launch with PP=2 and CP (via `round-robin-split` mode) on 2 nodes: this configuration uses the fused MoE kernel by default, which generally provides better performance.
For related development details, please refer to:
- Fused MoE + CP support: PR #13959
- PP + CP support: Issue #15358 and PR #16380
Command
Command
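A sketch of the 2-node PP=2 + CP launch; the node addressing and parallel sizes are assumptions:

```bash
# Node 0
python3 -m sglang.launch_server \
  --model deepseek-ai/DeepSeek-V3.2-Exp \
  --tp 8 --pp-size 2 --nnodes 2 --node-rank 0 \
  --dist-init-addr <node0-ip>:5000 \
  --nsa-prefill-cp-mode round-robin-split
# Node 1: identical command with --node-rank 1
```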
PD Disaggregation with PP + CP
If using PD (Prefill-Decode) Disaggregation, the Prefill nodes can be configured with PP + CP as follows. Prefill Node 0:
Command
Command
