
[DLLM] Add CI for diffusion LLMs #14723

Merged
hnyls2002 merged 4 commits into sgl-project:main from ClawSeven:dllm-ci on Dec 19, 2025

Conversation

@ClawSeven (Collaborator) commented Dec 9, 2025:

Motivation

Add CI for diffusion LLMs

Benchmarking and Accuracy

Hardware: H20
Tensor parallelism: 4
Batch size: 1

# test_gsm8k

Latency: 101.197 s
Output throughput: 251.480 token/s

metrics={'accuracy': 0.905, 'invalid': 0.0, 'latency': 101.1970702689141, 'output_throughput': 251.47961232843585}
# test_bs_1_speed

+-------------+--------+------------+-----------------+
| Latency (s) | Tokens | Acc Length | Speed (token/s) |
+-------------+--------+------------+-----------------+
|    1.820    |  1109  |   1.000    |     609.43      |
+-------------+--------+------------+-----------------+
speed=609.43
.
----------------------------------------------------------------------
Ran 1 test in 32.869s
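
For reference, a minimal sketch of running these two tests locally with the standard unittest runner; the file path (test/srt/dllm/test_llada2_mini.py) and class name (TestLLaDA2Mini) are taken from later in this thread, and the exact invocation is an assumption, not part of this PR.

# Hypothetical local invocation of the new dLLM tests (sketch only).
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m", "unittest", "-v",
        "test_llada2_mini.TestLLaDA2Mini.test_gsm8k",
        "test_llada2_mini.TestLLaDA2Mini.test_bs_1_speed",
    ],
    cwd="test/srt/dllm",  # assumed location of the test file
    check=True,
)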

Checklist

@gemini-code-assist (Bot) commented:

Summary of Changes

Hello @ClawSeven, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request establishes a new Continuous Integration (CI) pipeline for diffusion Large Language Models (LLMs). It integrates testing for the LLaDA2.0-mini model, configuring it to use the LowConfidence DLLM algorithm. The CI setup includes comprehensive performance benchmarks, evaluating both accuracy on the gsm8k dataset and single-batch inference speed, ensuring robust validation of diffusion LLM capabilities.

Highlights

  • New CI for Diffusion LLMs: This pull request introduces a new Continuous Integration (CI) setup specifically designed for diffusion Large Language Models (LLMs), expanding the project's testing capabilities.
  • LLaDA2.0-mini Model Integration: A new test suite has been added to validate the LLaDA2.0-mini model, including its server setup and performance characteristics.
  • LowConfidence DLLM Algorithm Configuration: A configuration file (low_confidence.yaml) and corresponding server argument (--dllm-algorithm LowConfidence) have been added to enable testing with the LowConfidence diffusion LLM algorithm.
  • Performance Benchmarking: The new CI includes benchmarks for the gsm8k dataset to assess accuracy and output throughput, as well as a single-batch speed test to measure inference performance.

@gemini-code-assist (Bot) left a comment:


Code Review

This pull request introduces the foundational elements for Diffusion LLMs (DLLM) within the SGLang framework. It includes a new LowConfidence algorithm, its configuration, and a dedicated test suite. The server arguments have been extended to support DLLM-specific settings, which involve several overrides of default server behaviors. The new test case provides initial validation for the LLaDA2.0-mini model with the LowConfidence algorithm.

"flashinfer",
"--dllm-algorithm",
"LowConfidence", # TODO: Add dLLM configurations
]
gemini-code-assist (Bot) inline review comment, severity: medium:

The TODO comment "Add dLLM configurations" suggests that the DLLM configuration mechanism (e.g., using --dllm-algorithm-config to load a YAML file) might not be fully utilized or tested in this specific test case. If the intention is to test the configurability of DLLM algorithms, it would be beneficial to add a test that explicitly loads and verifies settings from a custom configuration file.
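
(For illustration, a hedged sketch of such a config-driven variant: the --dllm-algorithm-config flag and the YAML path come from this review thread, while treating them as extra entries in the test's other_args list is an assumption, not part of this PR.)

# Hypothetical args for a config-driven test variant (sketch only).
other_args = [
    "--attention-backend",
    "flashinfer",
    "--dllm-algorithm",
    "LowConfidence",
    "--dllm-algorithm-config",                    # flag mentioned in this comment
    "test/srt/dllm/configs/low_confidence.yaml",  # config file added by this PR
]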

Comment thread test/srt/dllm/configs/low_confidence.yaml Outdated
@@ -0,0 +1,83 @@
import unittest
A contributor commented:

@mickqian Do you think placing the tests for DLLM in /test/srt/models would be better? Given that we may have multiple dLLM models, would it be better to have /dllm under /test/srt/models/? @ClawSeven what do you think?

A collaborator replied:

Yes, it makes sense; we need a dllm folder.

@ClawSeven (Author) replied:

I placed the DLLM tests under test/srt/ because we plan to add more functional unit tests for dLLM there, beyond just model accuracy and throughput.


def test_bs_1_speed(self):
    args = BenchArgs(port=int(self.base_url.split(":")[-1]), max_new_tokens=2048)
    acc_length, speed = send_one_prompt(args)
@Monstertail (Contributor) commented Dec 16, 2025:

I noticed that during the test, acc_length=1 because we don't trigger speculative decoding.
In speculative decoding there is spec_verify_ct (spec_verify_ct: List[int]), so we can compute acc_length = completion_tokens / spec_verify_ct. However, I wonder whether we need a metric to trace tokens per forward pass in diffusion LLMs.
From my understanding, we don't have such a metric right now.

We just want to initialize the CI for dLLM in this PR. At the very least, we may need to add this to our roadmap, as it's key to evaluating generation efficiency. @ClawSeven

@ClawSeven (Author) replied:

Actually, we already have step maps for detailed statistics on every forward pass. We will be submitting these implementations shortly.
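
(For illustration only, a minimal sketch of the tokens-per-forward metric discussed above, analogous to acc_length = completion_tokens / spec_verify_ct in speculative decoding; the helper and its inputs are hypothetical, not an existing SGLang API.)

# Hypothetical metric: average tokens emitted per denoising forward pass.
def tokens_per_forward(completion_tokens: int, num_forward_passes: int) -> float:
    if num_forward_passes <= 0:
        return 0.0
    return completion_tokens / num_forward_passes

# Made-up numbers: 1109 tokens over 250 forward passes ~= 4.4 tokens/forward.
print(tokens_per_forward(1109, 250))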

Comment thread test/srt/dllm/test_llada2_mini.py Outdated
if is_in_amd_ci():
    self.assertGreater(speed, 100)
else:
    self.assertGreater(speed, 600)
@Monstertail (Contributor) commented Dec 16, 2025:

I noticed that the threshold for other models like DeepSeek-V3 MTP is only 130 token/s:

self.assertGreater(speed, 130)

Shall we set the threshold so close to the real speed on H20 (~610 token/s)? Would a safer threshold like 400-500 token/s be better? I will test it on A100 with tp=4 to double-check. @ClawSeven @mickqian @zhaochenyang20
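
(For illustration, a hedged variant of the assertion above using the looser margin suggested here, in the same test-method context; the threshold finally adopted in the PR is not shown in this thread.)

# Hypothetical safer-margin check: 400 token/s leaves headroom below the
# ~610 token/s observed on H20 with tp=4.
if is_in_amd_ci():
    self.assertGreater(speed, 100)
else:
    self.assertGreater(speed, 400)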

A contributor replied:

FYI, https://github.com/sgl-project/sglang/actions/runs/20228285600/job/58160617536?pr=15180#step:5:2737
[CI Test Method] TestDeepseekV3Basic.test_bs_1_speed
The threshold there is 75 token/s, and it can reach 96 token/s on H200.

@ClawSeven (Author) replied:

Agreed

@ClawSeven (Author) replied:

Done

@Monstertail (Contributor) commented Dec 16, 2025:

[Performance in tp=1, L40]

This LLaDA2-mini model is small.
Even with this setting, we can test:
tp=1
NVIDIA L40
bs=1

[CI Test Method] TestLLaDA2Mini.test_bs_1_speed
+-------------+--------+------------+-----------------+
| Latency (s) | Tokens | Acc Length | Speed (token/s) |
+-------------+--------+------------+-----------------+
|    6.038    |  870   |   1.000    |     144.09      |
+-------------+--------+------------+-----------------+
speed=144.09
[CI Test Method] TestLLaDA2Mini.test_gsm8k

Accuracy: 0.925
Invalid: 0.000
Latency: 361.234 s
Output throughput: 67.848 token/s

[Performance in tp=4, A100]
To be added later....

@mickqian (Collaborator) left a comment:

And consider adding more tests to nightly-test after this, for performance monitoring.

@Monstertail (Contributor) commented Dec 16, 2025:

And consider adding more tests to nightly-test after this, for performance monitoring.

Based on the current test cases, it looks like the nightly tests already include performance tests for large models like DeepSeek-V3. We may add a large model like LLaDA2-flash 100B to this in the future.

Currently, we already have the performance test for LLaDA2-mini, which should be enough for this PR. @ClawSeven

@ClawSeven (Author) commented:

/tag-and-rerun-ci

Comment thread test/srt/run_suite.py Outdated
@zhaochenyang20 (Collaborator) commented:

This PR looks clean. Great job.

@Monstertail (Contributor) commented Dec 17, 2025:

The current CI for LLaDA2-mini uses a 4-GPU setting.
The CI run is skipped here: https://github.com/sgl-project/sglang/actions/runs/20286966160/job/58286368592?pr=14723

We may add it to the 2-GPU setting, as the model is very small.
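
(As a rough sketch only: one way the test might be registered in a 2-GPU suite, assuming run_suite.py groups tests into named suites of TestFile entries; the actual structure, suite names, and time estimates in the repository may differ.)

# Illustrative only; TestFile and the suite name below are assumptions.
from dataclasses import dataclass

@dataclass
class TestFile:
    name: str
    estimated_time_s: int

suites = {
    "per-commit-2-gpu": [
        TestFile("dllm/test_llada2_mini.py", 400),
    ],
}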

@ClawSeven (Author) commented:

TP2 / H20

Accuracy: 0.930
Invalid: 0.000
Latency: 101.512 s
Output throughput: 248.148 token/s
metrics={'accuracy': 0.93, 'invalid': 0.0, 'latency': 101.51187389157712, 'output_throughput': 248.14831048144134}


+-------------+--------+------------+-----------------+
| Latency (s) | Tokens | Acc Length | Speed (token/s) |
+-------------+--------+------------+-----------------+
|    1.763    |  688   |   1.000    |     390.17      |
+-------------+--------+------------+-----------------+
speed=390.17
.

Ran 1 test in 32.810s

@ClawSeven (Author) commented:

TP1

Accuracy: 0.930
Invalid: 0.000
Latency: 130.141 s
Output throughput: 190.724 token/s
metrics={'accuracy': 0.93, 'invalid': 0.0, 'latency': 130.14087017811835, 'output_throughput': 190.7240974032872}
.
----------------------------------------------------------------------
Ran 1 test in 431.747s

OK

+-------------+--------+------------+-----------------+
| Latency (s) | Tokens | Acc Length | Speed (token/s) |
+-------------+--------+------------+-----------------+
|    2.187    |  672   |   1.000    |     307.21      |
+-------------+--------+------------+-----------------+
speed=307.21
.
----------------------------------------------------------------------
Ran 1 test in 33.225s

OK

@zhaochenyang20 (Collaborator) left a comment:

Unit test is good.

@zhaochenyang20 (Collaborator) commented:

/rerun-failed-ci

@zhaochenyang20 (Collaborator) commented:

/rerun-failed-ci try again

1 similar comment from @zhaochenyang20.

@hnyls2002 merged commit b2803ff into sgl-project:main on Dec 19, 2025
238 of 253 checks passed
xiaobaicxy added a commit to xiaobaicxy/sglang that referenced this pull request Dec 19, 2025
Prozac614 pushed a commit to Prozac614/sglang that referenced this pull request Dec 23, 2025
jiaming1130 pushed a commit to zhuyijie88/sglang that referenced this pull request Dec 25, 2025
YChange01 pushed a commit to YChange01/sglang that referenced this pull request Jan 13, 2026
