
Revert "Refactor graph input buffers (#18991)"#19173

Merged
Fridge003 merged 1 commit intomainfrom
revert
Feb 23, 2026
Conversation

@Fridge003
Collaborator

This reverts commit 84c67c8.

Motivation

#18991 broke the MIMO CI
https://github.com/sgl-project/sglang/actions/runs/22256775640/job/64388810640

cc @ch-wan

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Fridge003, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request reverts a recent refactoring of CUDA graph input buffers. The previous changes, intended to improve buffer management, inadvertently caused a failure in the MIMO continuous integration tests. Rolling back these modifications restores stability to the CI pipeline and returns the core buffer handling logic to a known working state.

Highlights

  • Revert of Graph Input Buffer Refactoring: The pull request reverts the changes introduced in PR Refactor graph input buffers #18991, which aimed to refactor graph input buffers. This rollback was necessary due to a critical failure in the MIMO CI pipeline.
  • Buffer Class Restructuring: The DecodeInputBuffers and ForwardInputBuffers classes, previously introduced, have been removed. The system now reverts to using the GraphInputBuffers class for managing CUDA graph inputs, simplifying the buffer management hierarchy (a sketch of the restored layout follows this list).
  • Elimination of Global Buffer Sharing: The global buffer sharing mechanism, including the _forward_input_buffer_pool and associated share_buffers methods, has been removed. Input buffer attributes are now directly managed within their respective runner classes, undoing the intermediate buffer object pattern.
  • Direct Buffer Attribute Access: Code across various CUDA graph runner implementations has been updated to access buffer attributes directly (e.g., self.attribute_name) instead of through an intermediate self.buffers.attribute_name object, restoring the previous access pattern.


Changelog
  • python/sglang/srt/model_executor/cuda_graph_runner.py
    • Removed dataclass and Dict imports
    • Removed compute_local_num_token_non_padded import
    • Updated import from ForwardInputBuffers to GraphInputBuffers
    • Removed the DecodeInputBuffers dataclass definition and its methods
    • Changed type hint for self.buffers from DecodeInputBuffers to GraphInputBuffers
    • Removed the call to self.buffers.share_buffers()
    • Updated type hint for buffers in capture_one_batch_size from DecodeInputBuffers to GraphInputBuffers
    • Modified replay_prepare to capture and pass the seq_lens_cpu return value from populate_from_forward_batch
  • python/sglang/srt/model_executor/input_buffers.py
    • Removed fields import from dataclasses
    • Removed the global _forward_input_buffer_pool and its associated buffer sharing logic
    • Replaced ForwardInputBuffers with the GraphInputBuffers dataclass, consolidating buffer definitions and the populate_from_forward_batch method
    • Updated populate_from_forward_batch to return seq_lens_cpu
  • python/sglang/srt/model_executor/model_runner.py
    • Removed import of DecodeInputBuffers
    • Added import of GraphInputBuffers
    • Updated type hint for buffers in _dummy_run from DecodeInputBuffers to GraphInputBuffers
  • python/sglang/srt/model_executor/piecewise_cuda_graph_runner.py
    • Removed dataclass and Optional imports
    • Removed import of ForwardInputBuffers
    • Removed the PrefillInputBuffers dataclass definition
    • Migrated buffer attribute assignments from local variables to self attributes
    • Removed instantiation of PrefillInputBuffers and self.buffers.share_buffers() call
    • Updated warmup_torch_compile, capture_one_batch_size, and replay_prepare to access buffer attributes directly from self
  • python/sglang/srt/speculative/eagle_draft_cuda_graph_runner.py
    • Removed dataclass and Optional imports
    • Removed import of ForwardInputBuffers
    • Removed the EagleDraftInputBuffers dataclass definition
    • Changed seq_lens_cpu from a local variable to a self attribute
    • Migrated buffer attribute assignments from local variables to self attributes
    • Removed instantiation of EagleDraftInputBuffers and self.buffers.share_buffers() call
    • Updated capture_one_batch_size and replay to access buffer attributes directly from self (see the sketch after this changelog)
  • python/sglang/srt/speculative/eagle_draft_extend_cuda_graph_runner.py
    • Removed dataclass and Optional imports
    • Removed import of ForwardInputBuffers
    • Removed the EagleDraftExtendInputBuffers dataclass definition
    • Changed seq_lens_cpu from a local variable to a self attribute
    • Migrated buffer attribute assignments from local variables to self attributes
    • Removed instantiation of EagleDraftExtendInputBuffers and self.buffers.share_buffers() call
    • Updated capture_one_batch_size, get_forward_batch, replay, and init_replay_state to access buffer attributes directly from self
  • python/sglang/srt/speculative/multi_layer_eagle_draft_extend_cuda_graph_runner.py
    • Removed dataclass, List, and Optional imports
    • Removed import of ForwardInputBuffers
    • Removed the MultiLayerEagleDraftExtendInputBuffers dataclass definition
    • Changed seq_lens_cpu from a local variable to a self attribute
    • Migrated buffer attribute assignments from local variables to self attributes
    • Removed instantiation of MultiLayerEagleDraftExtendInputBuffers and self.buffers.share_buffers() call
    • Updated get_forward_batch, capture_one_batch_size, run_once, init_replay_state, and replay to access buffer attributes directly from self
    • Changed type hint for self.runners from List[Optional[MultiLayerEagleDraftExtendCudaGraphRunner]] to List
  • python/sglang/srt/speculative/multi_layer_eagle_worker_v2.py
    • Updated _draft_extend_for_decode to access buffer attributes directly from last_cuda_graph_runner instead of last_cuda_graph_runner.buffers
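Many of the per-file entries above are the same mechanical change repeated across runners: buffers move off the intermediate self.buffers object back onto the runner itself, and seq_lens_cpu is promoted from a local variable to an attribute. Here is a hedged sketch of the restored pattern, where only replay, seq_lens, and seq_lens_cpu echo names in the changelog and everything else is assumed for illustration:

```python
# Hedged sketch of the direct-attribute pattern this revert restores; the
# class name, tensor shapes, and device handling are illustrative assumptions.
import torch


class DraftRunnerSketch:
    def __init__(self, max_bs: int, device: str = "cpu"):
        # Buffers are plain runner attributes again rather than fields on a
        # shared buffer object reached through self.buffers.
        self.seq_lens = torch.zeros(max_bs, dtype=torch.int64, device=device)
        # seq_lens_cpu is likewise promoted from a local to an attribute.
        self.seq_lens_cpu = torch.zeros(max_bs, dtype=torch.int64)

    def replay(self, seq_lens: torch.Tensor) -> None:
        bs = seq_lens.shape[0]
        # Direct access (self.seq_lens) where #18991 used self.buffers.seq_lens.
        self.seq_lens[:bs].copy_(seq_lens)
        self.seq_lens_cpu[:bs].copy_(seq_lens.to("cpu"))
```

For example, DraftRunnerSketch(max_bs=8).replay(torch.tensor([3, 5])) writes into the first two slots of both buffers while the remaining slots keep their captured contents.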
Activity

Contributor

@gemini-code-assist Bot left a comment

Code Review

This pull request reverts commit 84c67c8, which refactored the handling of CUDA graph input buffers. The revert is motivated by a CI failure introduced by the original commit. The changes across all files consistently undo the refactoring, removing the ForwardInputBuffers base class and its subclasses, and restoring the previous buffer management logic. This appears to be a clean and complete revert, which is an appropriate action to address the CI breakage.

@Fridge003 merged commit 2472e47 into main on Feb 23, 2026
53 of 61 checks passed
@Fridge003 deleted the revert branch on February 23, 2026 at 05:09
xiaobaicxy added a commit to xiaobaicxy/sglang that referenced this pull request Feb 24, 2026
magicYang1573 pushed a commit to magicYang1573/sglang that referenced this pull request Mar 9, 2026
sammysun0711 pushed a commit to sammysun0711/sglang that referenced this pull request Mar 20, 2026
sammysun0711 pushed a commit to sammysun0711/sglang that referenced this pull request Mar 20, 2026
Wangzheee pushed a commit to Wangzheee/sglang that referenced this pull request Mar 21, 2026
