
Reorganize topk logic to clean up code and expose logical experts #16945

Merged

ch-wan merged 9 commits into sgl-project:main from ocss884:improve_select_expert_logic on Feb 23, 2026

Conversation

@ocss884 (Collaborator) commented Jan 12, 2026

Motivation

Reorganize the topk processing logic to unify the handling of logical experts in one place. One use case is making #12162 compatible with EPLB without adding the capture logic everywhere. This PR also cleans up the duplicated EPLB and padding code that followed each topk kernel.

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @ocss884, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request streamlines the Mixture-of-Experts (MoE) top-k selection mechanism by reorganizing the post-processing steps for expert IDs. By introducing a dedicated helper function, the core fused_topk and grouped_topk functions are simplified, leading to cleaner code and a more modular approach to managing expert dispatch logic, including handling of padded tokens and shared experts.

Highlights

  • Refactored Top-K Logic: Centralized the post-processing of topk_ids into a new helper function _post_prepare_topk_ids to improve code organization and modularity.
  • Simplified Function Signatures: Removed num_token_non_padded and expert_location_dispatch_info parameters from the signatures of various fused_topk and grouped_topk functions, streamlining their interfaces.
  • Explicit Expert Handling: The new _post_prepare_topk_ids function now explicitly handles the conversion of logical to physical expert IDs, masking of padded regions, and appending of shared experts (sketched below).
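
Based on those three steps, here is a minimal sketch of what such a helper could look like. The control flow mirrors the highlights above; the exact signature, defaults, and the shared-expert ID convention are assumptions for illustration, not the merged implementation (`topk_ids_logical_to_physical` and `_mask_topk_ids_padded_region` are the existing helpers quoted in the review thread below, so this only runs standalone when the EPLB branch is skipped).

```python
import torch
from typing import Optional

def _post_prepare_topk_ids_sketch(
    topk_ids: torch.Tensor,              # [num_tokens, top_k] logical expert IDs
    expert_location_dispatch_info=None,  # EPLB dispatch metadata, or None
    num_token_non_padded: Optional[torch.Tensor] = None,
    num_fused_shared_experts: int = 0,   # assumed knob for step 3
    num_logical_experts: int = 0,
) -> torch.Tensor:
    # 1. Logical -> physical mapping when EPLB has redistributed experts
    #    (existing helper; only reached when dispatch info is present).
    if expert_location_dispatch_info is not None:
        topk_ids = topk_ids_logical_to_physical(topk_ids, expert_location_dispatch_info)

    # 2. Mask rows past the real-token count with -1 so padded tokens
    #    are never dispatched to any expert.
    if num_token_non_padded is not None:
        row = torch.arange(topk_ids.shape[0], device=topk_ids.device).unsqueeze(1)
        topk_ids = topk_ids.masked_fill(row >= num_token_non_padded, -1)

    # 3. Append shared-expert IDs as extra columns; placing them right
    #    after the routed experts is an assumed convention here.
    if num_fused_shared_experts > 0:
        shared = torch.arange(
            num_logical_experts,
            num_logical_experts + num_fused_shared_experts,
            device=topk_ids.device,
        ).expand(topk_ids.shape[0], -1)
        topk_ids = torch.cat([topk_ids, shared], dim=-1)
    return topk_ids
```

Centralizing these steps means each topk kernel can return plain logical IDs, and all dispatch-specific handling lives in one place.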


@gemini-code-assist (Bot) left a comment

Code Review

This pull request refactors the top-k logic by moving post-processing steps like logical-to-physical expert ID mapping, padding masking, and shared expert appending into a new helper function _post_prepare_topk_ids. This is a good change for code clarity and organization.

However, I've found two critical bugs introduced by this refactoring:

  1. The updated topk_weights tensor is lost after being modified in _post_prepare_topk_ids.
  2. The get_global_experts_capturer().capture() is called with topk_ids before it is fully processed, which is a change in behavior from the original code.

I have provided detailed comments and suggestions to fix these issues. Please address them.
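
For the first issue, a minimal illustration of the failure mode, with hypothetical names and a made-up renormalization step, just to show how an updated tensor gets dropped when the caller ignores the return value:

```python
import torch

def _post_prepare(topk_weights: torch.Tensor, topk_ids: torch.Tensor):
    # Division produces a new tensor; rebinding the local name does not
    # affect the caller's reference.
    topk_weights = topk_weights / topk_weights.sum(dim=-1, keepdim=True)
    return topk_weights, topk_ids

topk_weights = torch.rand(4, 2)
topk_ids = torch.randint(0, 8, (4, 2))

_post_prepare(topk_weights, topk_ids)  # bug: updated weights are discarded
topk_weights, topk_ids = _post_prepare(topk_weights, topk_ids)  # fix: rebind in the caller
```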

Two review comment threads on python/sglang/srt/layers/moe/topk.py (now outdated).

@ocss884 (Collaborator, Author) commented Jan 12, 2026

/tag-and-rerun-ci

Review thread on a hunk in python/sglang/srt/layers/moe/topk.py:

```python
    )
    if _is_cuda:
        topk_ids = topk_ids_logical_to_physical(topk_ids, expert_location_dispatch_info)
        _mask_topk_ids_padded_region(topk_ids, num_token_non_padded)
```

A collaborator commented:

qq: does this mean this will launch a separate kernel, when this should be fused in many cases?
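
For context on this question: two back-to-back eager PyTorch ops each dispatch their own CUDA kernel(s), while wrapping them in a `torch.compile`d function lets Inductor fuse them. A generic illustration, not the sglang code:

```python
import torch

def postprocess_eager(topk_ids, mapping, row_mask):
    topk_ids = mapping[topk_ids]         # eager: kernel launch 1 (gather)
    topk_ids.masked_fill_(row_mask, -1)  # eager: kernel launch 2 (masked fill)
    return topk_ids

# Compiled, Inductor can emit a single fused kernel for both ops.
postprocess_compiled = torch.compile(postprocess_eager, dynamic=True)
```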

@ch-wan (Collaborator) left a comment:

LGTM. This PR may break future optimizations that fuse topk computation with -1 padding. We could follow up with another refactor that reports, for each topk kernel, whether that fusion is supported.
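
One hypothetical shape for that follow-up, with invented names: each topk kernel wrapper reports whether it already fused the -1 padding, so the shared post-processing can skip the extra masking pass.

```python
from dataclasses import dataclass
from typing import Optional
import torch

@dataclass
class TopKResult:
    topk_weights: torch.Tensor
    topk_ids: torch.Tensor
    padding_fused: bool  # True if the kernel already wrote -1 into padded rows

def post_process(result: TopKResult, num_token_non_padded: Optional[torch.Tensor]) -> TopKResult:
    if not result.padding_fused and num_token_non_padded is not None:
        # Fall back to the separate masking pass (existing helper in topk.py).
        _mask_topk_ids_padded_region(result.topk_ids, num_token_non_padded)
    return result
```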

@ocss884 changed the title from "Reorgnize topk logic to clean up code and expose logical experts" to "Reorganize topk logic to clean up code and expose logical experts" on Jan 13, 2026

@ocss884 (Collaborator, Author) commented Jan 14, 2026

/tag-and-rerun-ci

@ocss884 (Collaborator, Author) commented Feb 4, 2026

/tag-and-rerun-ci

@ocss884 (Collaborator, Author) commented Feb 23, 2026

/tag-and-rerun-ci

1 similar comment

@ch-wan merged commit 6448de7 into sgl-project:main on Feb 23, 2026
264 of 278 checks passed
xiaobaicxy added a commit to xiaobaicxy/sglang that referenced this pull request Feb 24, 2026
…o xverse_moe

* 'xverse_moe' of https://github.com/xiaobaicxy/sglang: (275 commits)
  fix: add missing blank line after docstring in serving_transcription.py (sgl-project#19206)
  Whisper model support & `/v1/audio/transcriptions` endpoint & benchmark (sgl-project#16983)
  fix: patch docker image fixes (sgl-project#19100)
  [PD-Disagg] Unify prefill info data transition flow, all with `PrefillServerInfo` (sgl-project#19195)
  [CI] Tiny enhance the dp attention load blance benchmark (sgl-project#19194)
  add new ci user (sgl-project#19133)
  [CI] fix the teardown output of disaggregation test (sgl-project#19193)
  [PD-Disagg] Support query dp rank from bootstrap server. (sgl-project#19168)
  [Kernel Slimming] Migrate AWQ marlin repack kernel to JIT (sgl-project#18949)
  [Diffusion] Match rotary_embedding module name style (sgl-project#19179)
  [Refactor] Split rotary_embedding.py into a modular package (sgl-project#19144)
  [NPU] bump sgl-kernel-npu to 2026.02.01.post2 (sgl-project#19178)
  Use single mma warp group for short q_len in FA to optimize decoding performance (sgl-project#18985)
  Reorganize topk logic to clean up code and expose logical experts (sgl-project#16945)
  [ROCm] Use unreg path for custom all-reduce during CUDA graph capture (sgl-project#19162)
  [diffusion] feat: detect Flux2 custom VAE path from component_paths (sgl-project#19170)
  [AMD] ENV flags tuning and cleanup (sgl-project#19176)
  Fix bench_one_batch_server by moving the print statements (sgl-project#19175)
  Update rocm7.2 Dockerfile to install amdsmi for QuickReduce Initialization (sgl-project#19091)
  Revert "Refactor graph input buffers (sgl-project#18991)" (sgl-project#19173)
  ...
nvcastet added a commit to nvcastet/sglang that referenced this pull request Mar 31, 2026
PR sgl-project#16945 refactored topk postprocessing into `_post_process_topk_ids`
but inlined the `topk_ids_logical_to_physical` and
`_mask_topk_ids_padded_region` calls instead of delegating to the
existing `@torch.compile`-decorated `_biased_grouped_topk_postprocess`.

This caused those two operations to run as separate eager kernels
instead of being fused by torch.compile, a regression for CUDA paths
using expert-parallel / EPLB.

Fix: call `_biased_grouped_topk_postprocess` (which already carries
`@torch.compile(dynamic=True)`) from within `_post_process_topk_ids`,
restoring the compiled kernel fusion.

Ref: sgl-project#16945 (comment)
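
A sketch of that fix, following the commit message above; the surrounding signature is simplified for illustration, and the helper names are the existing ones from topk.py rather than definitions introduced here:

```python
import torch

@torch.compile(dynamic=True)  # existing decorator, per the commit message
def _biased_grouped_topk_postprocess(topk_ids, expert_location_dispatch_info, num_token_non_padded):
    # Both ops run inside one compiled graph, so they can be fused
    # instead of launching as two separate eager kernels.
    topk_ids = topk_ids_logical_to_physical(topk_ids, expert_location_dispatch_info)
    _mask_topk_ids_padded_region(topk_ids, num_token_non_padded)
    return topk_ids

def _post_process_topk_ids(topk_ids, expert_location_dispatch_info, num_token_non_padded):
    if _is_cuda and expert_location_dispatch_info is not None:
        # Delegate to the compiled helper rather than inlining the two calls.
        topk_ids = _biased_grouped_topk_postprocess(
            topk_ids, expert_location_dispatch_info, num_token_non_padded
        )
    return topk_ids
```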