
[deepseekv3.2] fix get_k_and_s_triton kernel for 128K seqlen case bug #19319

Merged
Fridge003 merged 7 commits into sgl-project:main from BJWang-ant:opt-get-topk-ragged
Mar 11, 2026

Conversation

@BJWang-ant (Contributor) commented Feb 25, 2026

Motivation

Fix a bug in the get_k_and_s_triton kernel for the 128K sequence length case.

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @BJWang-ant, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug in the get_k_and_s_triton kernel, specifically affecting sequence lengths of 128K. The changes involve refining memory access patterns and updating the kernel's signature to accommodate batched sequence lengths, resulting in improved performance and broader applicability.

Highlights

  • Bug Fix: Addresses an issue in the get_k_and_s_triton kernel that caused incorrect behavior for sequence lengths of 128K.
  • Performance Improvement: Optimizes memory access patterns within the Triton kernel to enhance the speed of K and S data gathering.
  • Code Modernization: Updates the kernel to handle batched sequence lengths, improving its flexibility and applicability to diverse workloads.
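
One plausible reading of the fix, sketched below under stated assumptions (the kernel's actual launch logic is not shown in this thread): launching one Triton program per token puts the total token count on a single grid axis, which grows without bound as sequences get longer, whereas a batched 2D (sequence, token-block) grid keeps each axis small. The `BLOCK` size, function names, and grid shapes here are illustrative assumptions, not the PR's code.

```python
# Illustrative grid-size arithmetic only; BLOCK and both helpers are
# hypothetical, not taken from the actual kernel in this PR.

BLOCK = 128  # tokens handled per Triton program (assumed)

def naive_grid(total_tokens: int) -> int:
    # One program per token: the grid axis scales linearly with
    # sequence length, e.g. 131072 programs for one 128K sequence.
    return total_tokens

def batched_grid(seq_lens: list[int]) -> tuple[int, int]:
    # One program per (sequence, token-block) pair: both axes stay
    # small even for very long sequences.
    max_seq_len = max(seq_lens)
    blocks_per_seq = (max_seq_len + BLOCK - 1) // BLOCK
    return (len(seq_lens), blocks_per_seq)

print(naive_grid(128 * 1024))      # 131072
print(batched_grid([128 * 1024]))  # (1, 1024)
```

A mix of short and long sequences, e.g. `batched_grid([100, 300])`, still yields a compact `(2, 3)` grid, with out-of-range blocks presumably masked inside the kernel.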


Changelog
  • python/sglang/srt/layers/attention/nsa/index_buf_accessor.py
    • Modified the triton method to accept seq_len_tensor, seq_len_sum, and max_seq_len as input parameters.
    • Updated the call to _get_k_and_s_triton with the new parameters.
  • python/sglang/srt/layers/attention/nsa/nsa_indexer.py
    • Modified _get_topk_ragged to pass seq_lens, seq_len_sum, and max_seq_len to get_index_k_scale_buffer.
    • Removed redundant code and assertions related to sequence lengths.
  • python/sglang/srt/mem_cache/memory_pool.py
    • Modified get_index_k_scale_buffer to accept seq_len_tensor, seq_len_sum, and max_seq_len as input parameters.
    • Updated the call to index_buf_accessor.GetKAndS.execute with the new parameters.
  • test/manual/layers/attention/nsa/test_get_k_scale_triton_kernel.py
    • Added a new test file to verify the correctness and performance of the _get_k_and_s_triton_kernel.
  • test/manual/layers/attention/nsa/test_index_buf_accessor.py
    • Modified the test cases to pass seq_len_tensor, seq_len, and batched page_indices to GetKAndS.triton.
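
The three parameters threaded through the call stack are related in a simple way; a minimal sketch, using plain Python lists in place of the torch tensors the real code operates on, with a hypothetical helper name:

```python
# Hypothetical helper illustrating how the three new arguments relate;
# the parameter names follow the changelog, the function itself is assumed.

def derive_seq_len_args(seq_lens: list[int]) -> tuple[list[int], int, int]:
    """Return (seq_len_tensor, seq_len_sum, max_seq_len) for a batch."""
    return seq_lens, sum(seq_lens), max(seq_lens)

tensor, total, longest = derive_seq_len_args([4096, 128 * 1024, 512])
print(total, longest)  # 135680 131072
```

Precomputing the sum and max on the host side lets the kernel size its grid and bounds-checks without reducing over the tensor on device.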
Activity
  • The pull request author, BJWang-ant, has provided a detailed description of the motivation and modifications.
  • The checklist includes items for code formatting, unit tests, documentation, accuracy and speed benchmarks, and code style guidance.
  • The review process outlines the steps for merging the pull request, including approvals from CODEOWNERS and other reviewers, and triggering CI tests.

@BJWang-ant
Contributor Author

I will provide a precision test report later.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request refactors the get_k_and_s_triton kernel to support batched processing of sequences with varying lengths. This change fixes a bug for long sequences (e.g., 128K) by avoiding an excessively large Triton grid. The changes are consistently propagated through the call stack, and new tests have been added to verify the correctness of the new kernel, including a specific test for 128K sequence length. The overall change is well-implemented. I have one minor suggestion regarding a documentation inconsistency.

:param seq_len: int, number of tokens to gather
:param seq_lens: tensor of sequence lens, int64
:param seq_len_sum: sum of all sequence len, int32
:param seq_len_sum: max of sequence len, int32
Contributor


Severity: medium

There appears to be a copy-paste error in the docstring. The parameter name here is seq_len_sum, but the description max of sequence len, int32 corresponds to the max_seq_len parameter. Please correct the parameter name to max_seq_len to avoid confusion.

Suggested change
:param seq_len_sum: max of sequence len, int32
:param max_seq_len: max of sequence len, int32

@BJWang-ant
Contributor Author

BJWang-ant commented Mar 11, 2026

[image] This is the GPQA dataset precision test result. @Fridge003

The commands are:
server: python -m sglang.launch_server --model-path /upfs/models/deepseek-ai/DeepSeek-V3.2/ --tp 8 --dp 8 --enable-dp-attention
test: python3 -m sglang.test.run_eval --port 30000 --eval-name gpqa --num-examples 198 --max-tokens 128000 --repeat 8 --thinking-mode deepseek-v3 --model /upfs/models/deepseek-ai/DeepSeek-V3.2

@Fridge003
Collaborator

/tag-and-rerun-ci

@Fridge003 Fridge003 merged commit 006bd44 into sgl-project:main Mar 11, 2026
168 of 180 checks passed
liubiyongge pushed a commit to liubiyongge/sglang that referenced this pull request Mar 13, 2026
Wangzheee pushed a commit to Wangzheee/sglang that referenced this pull request Mar 21, 2026
JustinTong0323 pushed a commit to JustinTong0323/sglang that referenced this pull request Apr 7, 2026
yhyang201 pushed a commit to yhyang201/sglang that referenced this pull request Apr 22, 2026
