[DLLM] Basic dLLM scheduling strategy and implementation #17484

Merged

ispobock merged 2 commits into sgl-project:main from ClawSeven:dllm-pd-rebase-1 on Feb 10, 2026

Conversation

@ClawSeven ClawSeven commented Jan 21, 2026

Motivation

The previous dLLM scheduler relied on a chunked-prefill mechanism, which limited the implementation of efficient scheduling strategies. This PR introduces a new scheduling architecture.

Modifications

This PR focuses on refactoring the dLLM scheduling implementation. Previously, the scheduler would dynamically batch all blocks together for computation. Now, I've separated prefill and decode batches to eliminate redundant calculations that occurred when prefill and decode blocks were processed together. This lays the groundwork for implementing early exit and overlap scheduling in future iterations.

To maintain clean separation, the changes are consolidated in a new scheduler_dllm_mixin.py file. This keeps the dLLM request scheduling logic contained and prevents interference with the main AR branch execution flow.
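
As a rough sketch of the intended control flow (illustrative only, not the code in this PR): SchedulerDllmMixin, get_new_batch_dllm, and get_new_batch_prefill_dllm are names used in this thread, while get_new_batch_decode_dllm and all method bodies below are hypothetical stubs.

class SchedulerDllmMixin:
    """Illustrative sketch of the separated dLLM scheduling path."""

    def get_new_batch_dllm(self):
        # Prefill takes priority: while any request still has prompt
        # blocks to process, build a pure prefill batch; only when no
        # prefill work is pending, build a pure decode (denoising)
        # batch. The two phases never share a forward pass, which is
        # what eliminates the redundant compute of mixed batches.
        batch = self.get_new_batch_prefill_dllm()
        if batch is not None:
            return batch
        return self.get_new_batch_decode_dllm()

    # Hypothetical stubs standing in for the real batch builders.
    def get_new_batch_prefill_dllm(self):
        return None

    def get_new_batch_decode_dllm(self):
        return None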

Accuracy Tests, Benchmarking and Profiling

4*H20 / TP4 / BS4 / LLaDA2.0-mini / CUDA Graph bs [1,2,3,4] / gsm8k

87.66 token/s -> 484.5 token/s

W/ this PR:

Accuracy: 0.925
Invalid: 0.000
Latency: 52.095 s
Output throughput: 484.502 token/s
metrics={'accuracy': 0.925, 'invalid': 0.0, 'latency': 52.094728492200375, 'output_throughput': 484.5019972371856}

----------------------------------------------------------------------

W/O this PR:

Accuracy: 0.900
Invalid: 0.000
Latency: 287.565 s
Output throughput: 87.660 token/s
metrics={'accuracy': 0.9, 'invalid': 0.0, 'latency': 287.5653794184327, 'output_throughput': 87.66006551616272}

----------------------------------------------------------------------

4*H20 / TP1 / BS4 / LLaDA2.0-mini / CUDA Graph bs [1,2,3,4] / gsm8k

94.98 token/s -> 288.14 token/s

W/ this PR:

Accuracy: 0.915
Invalid: 0.000
Latency: 86.117 s
Output throughput: 288.142 token/s
metrics={'accuracy': 0.915, 'invalid': 0.0, 'latency': 86.11712778359652, 'output_throughput': 288.1424478340131}

----------------------------------------------------------------------

W/O this PR:

Accuracy: 0.915
Invalid: 0.000
Latency: 259.301 s
Output throughput: 94.982 token/s
metrics={'accuracy': 0.915, 'invalid': 0.0, 'latency': 259.30089492350817, 'output_throughput': 94.98231777127253}

----------------------------------------------------------------------

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @ClawSeven, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant architectural refactor for Diffusion LLM (dLLM) scheduling within the system. The primary goal is to establish a clear and robust separation between prefill and decode batch processing for dLLM requests. This is achieved by introducing dedicated classes and methods that manage the lifecycle and scheduling of dLLM requests, from their initial incoming state through staging and execution. The changes enhance the system's ability to handle dLLM workloads efficiently and lay the groundwork for future optimizations specific to these distinct processing phases.

Highlights

  • DLLM Request Phase Management: Introduced a new DllmReqPhase enum (STAGING_PREFILL, STAGING_DECODE, INCOMING_PREFILL, INCOMING_DECODE, NOT_DLLM) within the Req class to precisely track the state of Diffusion LLM requests, enabling distinct handling for prefill and decode operations (see the sketch after this list).
  • Dedicated DLLM Scheduling Architecture: The core Diffusion LLM scheduling logic has been extracted and refactored into a new SchedulerDllmMixin class and an accompanying DllmManager class. This modularization improves code organization and allows for specialized scheduling policies for dLLM.
  • Separated Prefill and Decode Batch Processing: The scheduler now explicitly differentiates between dLLM prefill and decode batches, with a new get_new_batch_dllm method in SchedulerDllmMixin responsible for orchestrating the processing of these distinct batch types.
  • Enhanced Request State Determination: The Req class now includes methods like is_dllm_prefill and determine_dllm_phase to dynamically ascertain whether a dLLM request is in a prefill or decode stage based on its input content and dllm_config.
  • New DLLM_DECODE Forward Mode: A DLLM_DECODE mode has been added to the ForwardMode enum, indicating specific forward pass behavior for dLLM decode operations, complementing the existing DLLM_EXTEND mode.
  • Compatibility and Feature Restrictions: Several features, including hierarchical cache, LoRA, disaggregation, and mixed chunked prefill, are now explicitly disabled or warned against when dLLM inference is enabled, ensuring compatibility and preventing unsupported configurations.
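
A minimal sketch of the phase state machine described above. The enum members and the mask-status rule come from this thread; the free-function signature, the mask_token_id parameter, and the staged flag are illustrative assumptions (the real logic lives on Req and consults dllm_config):

from enum import Enum, auto

class DllmReqPhase(Enum):
    NOT_DLLM = auto()
    INCOMING_PREFILL = auto()
    INCOMING_DECODE = auto()
    STAGING_PREFILL = auto()
    STAGING_DECODE = auto()

def determine_dllm_phase(input_ids, mask_token_id, staged):
    # A block that still contains mask tokens needs denoising (decode);
    # a block of only real tokens still needs prefill.
    if mask_token_id is None:  # not a diffusion-LM request
        return DllmReqPhase.NOT_DLLM
    has_masks = mask_token_id in input_ids
    if staged:
        return DllmReqPhase.STAGING_DECODE if has_masks else DllmReqPhase.STAGING_PREFILL
    return DllmReqPhase.INCOMING_DECODE if has_masks else DllmReqPhase.INCOMING_PREFILL

# Example: an incoming request whose tail is still masked (hypothetical
# mask token id 126336) classifies as decode work.
print(determine_dllm_phase([5, 9, 126336, 126336], 126336, staged=False))
# -> DllmReqPhase.INCOMING_DECODE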


@gemini-code-assist Bot left a comment

Code Review

This pull request introduces a significant refactoring of the dLLM scheduling architecture to separate prefill and decode batches. The changes include replacing DllmStagingReqs with a new DllmManager and a SchedulerDllmMixin to better encapsulate the dLLM-specific logic. This is a good architectural improvement that enhances modularity.

My review has identified one critical issue in the new DllmManager that could lead to requests being dropped during scheduling. I have also included a medium-severity suggestion to improve code conciseness. Please address the critical issue to ensure the correctness of the new scheduling logic.

Comment on lines +315 to +319

def init_next_round(self) -> None:
    """Initialize staging requests for next round and clear staging queue."""
    for req in self.staging_queue:
        req.init_next_round_input()
    self.staging_queue = []

critical

In init_next_round, the staging_queue is cleared after processing, but the requests within it are not re-queued. This will cause unfinished chunked dLLM requests to be dropped from scheduling, leading to requests hanging. The requests from staging_queue should be moved to waiting_queue to be considered for the next scheduling cycle.

Suggested change

-def init_next_round(self) -> None:
-    """Initialize staging requests for next round and clear staging queue."""
-    for req in self.staging_queue:
-        req.init_next_round_input()
-    self.staging_queue = []
+def init_next_round(self) -> None:
+    """Initialize staging requests for next round and move them to the waiting queue."""
+    for req in self.staging_queue:
+        req.init_next_round_input()
+    self.waiting_queue.extend(self.staging_queue)
+    self.staging_queue = []

Comment thread on python/sglang/srt/managers/schedule_batch.py (outdated)
@ClawSeven ClawSeven changed the title from "[DLLM] dLLM scheduling arch refactor for prefill/decode batch seperation" to "[DLLM] Basic dLLM scheduling strategy and implementation" on Jan 22, 2026
@ClawSeven ClawSeven force-pushed the dllm-pd-rebase-1 branch 2 times, most recently from 5dbfb88 to 48ad32d on January 22, 2026 09:51
@ClawSeven ClawSeven marked this pull request as ready for review January 22, 2026 09:52
@ClawSeven
Collaborator Author

/tag-and-rerun-ci

@Monstertail
Contributor

Monstertail commented Jan 27, 2026

Why, with this PR, is the accuracy in the 4*H20 / TP4 / BS4 / LLaDA2.0-mini / CUDA Graph bs [1,2,3,4] / gsm8k test higher (0.925 vs. 0.900) than without this PR? @ClawSeven

@Monstertail
Contributor

Monstertail commented Jan 27, 2026

Here is a summary of the key changes in this PR; I will add detailed reviews after Jan 29. @ClawSeven @zhaochenyang20

  • Motivation: The goal is to support native-style dLLM scheduling with prefill prioritization and separate the dLLM batching path from AR as much as possible.

  • Decoupling:

    • The PR separates the execution flows for prefill and decode (splitting get_next_batch_dllm).
    • This cleans the dLLM components out of the AR path and paves the way for better optimization of dLLM batching in the future.
  • Refactoring:

    • Centralization: Scattered DLLM logic is centralized into DllmMixin.
    • Pipeline Management: DllmManager now handles the Waiting -> Staging -> Batch pipeline using a resource adder filter (a sketch follows this list).
    • Code Hygiene: The get_new_batch_prefill path is now clean, with DLLM logic isolated in get_new_batch_prefill_dllm.
  • State Machine:

    • DllmReqPhase (Incoming/Staging+Prefill/Decode) is now dynamically determined by mask status in determine_dllm_phase().
  • Result:

    • Eliminating the redundant computation from mixed chunks improved throughput from 200 to 484 tokens/s.
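
A minimal sketch of that Waiting -> Staging -> Batch pipeline, assuming a generic adder interface (can_add and add are illustrative names; the real sglang adder API differs). init_next_round incorporates the re-queue fix suggested in the review above:

from collections import deque

class DllmManager:
    def __init__(self):
        self.waiting_queue = deque()  # admitted requests, not yet scheduled
        self.staging_queue = []       # requests selected for this round

    def stage_requests(self, adder):
        # The resource adder acts as a filter: admit requests while the
        # token/memory budget holds; the rest stay in waiting.
        while self.waiting_queue and adder.can_add(self.waiting_queue[0]):
            req = self.waiting_queue.popleft()
            adder.add(req)
            self.staging_queue.append(req)
        return self.staging_queue

    def init_next_round(self):
        # Re-queue unfinished staged requests so the next scheduling
        # cycle still sees them (the fix suggested in the review above).
        for req in self.staging_queue:
            req.init_next_round_input()
        self.waiting_queue.extend(self.staging_queue)
        self.staging_queue = []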

Comment thread on python/sglang/srt/managers/schedule_batch.py (outdated)
Signed-off-by: Zehuan Li <lizehuan.lzh@antgroup.com>
@ClawSeven
Collaborator Author

ClawSeven commented Feb 10, 2026

/rerun-failed-ci

@ispobock ispobock merged commit 26f2b37 into sgl-project:main Feb 10, 2026
281 of 318 checks passed
@ClawSeven ClawSeven mentioned this pull request Feb 10, 2026
Johnsonms pushed a commit to Johnsonms/sglang that referenced this pull request Feb 14, 2026
…#17484)

Signed-off-by: Zehuan Li <lizehuan.lzh@antgroup.com>
magicYang1573 pushed a commit to magicYang1573/sglang that referenced this pull request Mar 9, 2026
…#17484)

Signed-off-by: Zehuan Li <lizehuan.lzh@antgroup.com>
Wangzheee pushed a commit to Wangzheee/sglang that referenced this pull request Mar 21, 2026
…#17484)

Signed-off-by: Zehuan Li <lizehuan.lzh@antgroup.com>