[DLLM] Add CI for diffusion LLMs #14723
Conversation
Summary of Changes: Hello @ClawSeven, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request establishes a new Continuous Integration (CI) pipeline for diffusion Large Language Models (LLMs). It integrates testing for the LLaDA2.0-mini model with the LowConfidence algorithm.
Code Review
This pull request introduces the foundational elements for Diffusion LLMs (DLLM) within the SGLang framework. It includes a new LowConfidence algorithm, its configuration, and a dedicated test suite. The server arguments have been extended to support DLLM-specific settings, which involve several overrides of default server behaviors. The new test case provides initial validation for the LLaDA2.0-mini model with the LowConfidence algorithm.
| "flashinfer", | ||
| "--dllm-algorithm", | ||
| "LowConfidence", # TODO: Add dLLM configurations | ||
| ] |
The TODO comment "Add dLLM configurations" suggests that the DLLM configuration mechanism (e.g., using --dllm-algorithm-config to load a YAML file) might not be fully utilized or tested in this specific test case. If the intention is to test the configurability of DLLM algorithms, it would be beneficial to add a test that explicitly loads and verifies settings from a custom configuration file.
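If we go that route, a minimal sketch of what such a config-driven setup could look like is below. Only the --dllm-algorithm and --dllm-algorithm-config flags come from the discussion above; the YAML keys and the helper function are purely illustrative assumptions, not the actual schema.

```python
# Illustrative sketch only: exercising a YAML-based dLLM config in a test.
# The YAML keys below are hypothetical; only --dllm-algorithm and
# --dllm-algorithm-config come from the review comment above.
import tempfile

DLLM_CONFIG_YAML = """\
algorithm: LowConfidence
block_size: 32       # hypothetical key
threshold: 0.9       # hypothetical key
"""

def build_dllm_server_args() -> list[str]:
    """Write the YAML to a temp file and return the dLLM-related server args."""
    cfg = tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False)
    cfg.write(DLLM_CONFIG_YAML)
    cfg.flush()
    return [
        "--dllm-algorithm", "LowConfidence",
        "--dllm-algorithm-config", cfg.name,
    ]
```

A config test could then assert that the server-side dLLM settings match the file, rather than only checking that the server starts.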
@@ -0,0 +1,83 @@
import unittest
@mickqian Do you think placing the tests for DLLM in /test/srt/models would be better? Given that we may have multiple dLLM models, would it be better to have /dllm under /test/srt/models/? @ClawSeven what do you think?
yes it makes sense, we need a dllm folder
I placed the DLLM tests under test/srt/ because we plan to add more functional unit tests for dLLM there, beyond just model accuracy and throughput.
def test_bs_1_speed(self):
    args = BenchArgs(port=int(self.base_url.split(":")[-1]), max_new_tokens=2048)
    acc_length, speed = send_one_prompt(args)
I noticed that during the test acc_length=1, because we don't trigger speculative decoding.
In speculative decoding there is a spec_verify_ct counter; from my understanding, we don't have such a metric right now.
We just want to initialize the CI for dLLM in this PR. At the very least, we may need to add this to our roadmap, as it's key to evaluating generation efficiency. @ClawSeven
Actually, we already have step maps for detailed statistics on every forward pass. We will be submitting these implementations shortly.
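For readers following along, a rough illustration of the kind of per-step bookkeeping this implies is sketched below. It is an assumption about the shape of the data, not the actual step-map implementation being referenced.

```python
# Toy sketch of per-forward-pass statistics for a diffusion LLM.
# Not the real step-map implementation; names and fields are assumed.
from dataclasses import dataclass, field

@dataclass
class StepStats:
    step: int               # forward-pass index within a request
    tokens_finalized: int   # tokens committed (unmasked) at this step

@dataclass
class StepMap:
    steps: list[StepStats] = field(default_factory=list)

    def record(self, step: int, tokens_finalized: int) -> None:
        self.steps.append(StepStats(step, tokens_finalized))

    def tokens_per_forward(self) -> float:
        """dLLM analogue of acceptance length: average tokens emitted per forward pass."""
        total = sum(s.tokens_finalized for s in self.steps)
        return total / max(len(self.steps), 1)
```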
if is_in_amd_ci():
    self.assertGreater(speed, 100)
else:
    self.assertGreater(speed, 600)
I noticed that the threshold for other models like DeepSeek V3 MTP is only 130 tok/s:
sglang/test/srt/test_deepseek_v3_mtp.py, line 104 in 0e7d796
Shall we set the threshold so close to the real speed on H20 (~610 tok/s)? Would a safer threshold like 400-500 tok/s be better? I will test it on A100 with tp=4 to double-check. @ClawSeven @mickqian @zhaochenyang20
FYI, https://github.com/sgl-project/sglang/actions/runs/20228285600/job/58160617536?pr=15180#step:5:2737
[CI Test Method] TestDeepseekV3Basic.test_bs_1_speed
The threshold there is 75 token/s, and it can reach 96 token/s on H200.
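One option, if we want margin without losing the regression signal, is to keep per-hardware thresholds well below the measured speeds. A sketch follows; the numbers are placeholders for discussion, not agreed values.

```python
# Sketch: conservative, hardware-dependent speed thresholds for the bs=1 test.
# Values are placeholders for discussion, not agreed CI thresholds.
def expected_min_speed(is_amd_ci: bool) -> float:
    if is_amd_ci:
        return 100.0  # current AMD CI threshold from the diff above
    # Keep a wide margin below the ~610 tok/s measured on H20 so kernel or
    # scheduler noise does not flake the CI.
    return 450.0

# Inside the test:
# self.assertGreater(speed, expected_min_speed(is_in_amd_ci()))
```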
[Performance in tp=1, L40] This LLaDA2-mini model is small. [Performance in tp=4, A100]
Based on the current test cases, it looks like the nightly tests already cover performance for large models like DeepSeek-V3. We may add a large model like LLaDA2-flash 100B there in the future. For now, the performance test on LLaDA2-mini should be enough for this PR. @ClawSeven
/tag-and-rerun-ci
This PR looks clean. Great job.
The current CI for LLaDA2-mini runs in a 4-GPU setting. We may add a 2-GPU setting as well, since the model is quite small.
TP2 / H20: Accuracy 0.930
TP1
zhaochenyang20 left a comment
Unit test is good.
/rerun-failed-ci
/rerun-failed-ci try again
* 'main' of https://github.com/sgl-project/sglang: (136 commits)
  - fix: unreachable error check in retraction (sgl-project#15433)
  - [sgl-kernel] chore: update deepgemm version (sgl-project#13402)
  - [diffusion] multi-platform: support diffusion on amd and fix encoder loading on MI325 (sgl-project#13760)
  - [amd] Add deterministic all-reduce kernel for AMD (ROCm) (sgl-project#15340)
  - [diffusion] refactor: refactor _build_req_from_sampling to use shallow_asdict (sgl-project#13782)
  - Add customized sampler registration (sgl-project#15423)
  - Update readme (sgl-project#15425)
  - Fix Mindspore model import warning (sgl-project#15287)
  - [Feature] Xiaomi `MiMo-V2-Flash` day0 support (sgl-project#15207)
  - [diffusion] profiling: add bench_serving.py and VBench (sgl-project#15410)
  - [DLLM] Fix dLLM regression (sgl-project#15371)
  - [Deepseek V3.2] Fix Deepseek MTP in V1 mode (sgl-project#15429)
  - chore: update CI_PERMISSIONS (sgl-project#15431)
  - [DLLM] Add CI for diffusion LLMs (sgl-project#14723)
  - Support using different attention backend for draft decoding. (sgl-project#14843)
  - feat(dsv32): better error handling for DeepSeek-v3.2 encoder (sgl-project#14353)
  - tiny fix lint on main (sgl-project#15424)
  - multimodal: precompute hash for MultimodalDataItem (sgl-project#14354)
  - [AMD] Clear pre-built AITER kernels and warmup to prevent segfaults and test timeouts (sgl-project#15318)
  - [Performance] optimize NSA backend metadata computation for multi-step speculative decoding (sgl-project#14781)
  - ...
Motivation
Add CI for diffusion LLMs
Benchmarking and Accuracy
Hardware: H20
Tensor parallelism: 4
Batch size: 1
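For reference, a rough sketch of how this setting could be reproduced locally. The model path is an assumption (the CI test defines its own constants), and only the --dllm-algorithm flag comes from this PR.

```python
# Sketch only: launching the server with the benchmark setting above
# (tp=4, batch size 1). The model path is assumed, not taken from the test.
import subprocess

MODEL = "inclusionAI/LLaDA2.0-mini"  # assumed HF path for the LLaDA2.0-mini model

server = subprocess.Popen([
    "python", "-m", "sglang.launch_server",
    "--model-path", MODEL,
    "--tp", "4",
    "--dllm-algorithm", "LowConfidence",
])
# ...run the batch-size-1 accuracy/speed checks against the server, then:
# server.terminate()
```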
Checklist