
fix the server_args condition #11461

Merged: zhyncs merged 3 commits into sgl-project:bhe/1_stage_triton_kernel from zminglei:triton on Oct 11, 2025

Conversation

@zminglei (Collaborator) commented Oct 11, 2025

Motivation

  • Fix the server_args condition (radix cache was being disabled for the Triton attention backend under deterministic inference)
  • Fix lint

Modifications

Fix the server_args condition so the triton attention backend is treated as radix-cache compatible in the deterministic-inference check

Accuracy Tests

Qwen3-8B TP4

python3 -m sglang.launch_server --model-path /shared/public/elr-models/Qwen/Qwen3-8B/2069b3fae1114555f3c020c81410e51fa0f656f2 --attention-backend triton --enable-deterministic-inference --tp 4

python3 -m sglang.test.test_deterministic --test-mode prefix --n-trials 50

Prompt 0 with prefix length 1: total samples: 287, Unique samples: 1
Prompt 1 with prefix length 511: total samples: 322, Unique samples: 1
Prompt 2 with prefix length 2048: total samples: 312, Unique samples: 1
Prompt 3 with prefix length 4097: total samples: 354, Unique samples: 1
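The prefix test above reports, for each prompt, how many distinct completions appeared across all sampled outputs; "Unique samples: 1" means every trial produced identical text. A minimal sketch of that check (hypothetical illustration, not the actual sglang.test.test_deterministic harness):

```python
# Hypothetical sketch: gather the completions generated for one prompt
# across trials and count the distinct texts. A count of 1 means the
# engine behaved fully deterministically for that prompt.
def unique_sample_count(samples: list[str]) -> int:
    return len(set(samples))

# Prompt 0 above collected 287 samples, all identical:
outputs = ["the same completion text"] * 287
assert unique_sample_count(outputs) == 1
```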

python benchmark/gsm8k/bench_sglang.py --data-path /shared/public/data/gsm8k/test.jsonl

Accuracy: 0.960
Invalid: 0.000
Latency: 14.965 s
Output throughput: 1603.849 token/s

Qwen3-30B-A3B TP4

python3 -m sglang.launch_server --model-path /shared/public/elr-models/Qwen/Qwen3-30B-A3B/67b0e0ca24de1b8cedea4c97f1925df66d72bee1/ --attention-backend triton --enable-deterministic-inference --tp 4

python3 -m sglang.test.test_deterministic --test-mode prefix --n-trials 50

Prompt 0 with prefix length 1: total samples: 323, Unique samples: 1
Prompt 1 with prefix length 511: total samples: 306, Unique samples: 1
Prompt 2 with prefix length 2048: total samples: 335, Unique samples: 1
Prompt 3 with prefix length 4097: total samples: 311, Unique samples: 1

python benchmark/gsm8k/bench_sglang.py --data-path /shared/public/data/gsm8k/test.jsonl

Accuracy: 0.910
Invalid: 0.000
Latency: 20.638 s
Output throughput: 1140.044 token/s

Benchmarking and Profiling

Checklist

@gemini-code-assist (Contributor)

Summary of Changes

Hello @zminglei, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the server argument handling, specifically addressing the conditions under which radix cache is disabled for deterministic inference. It expands the compatibility list for radix cache to include the 'triton' attention backend, ensuring that systems utilizing Triton can leverage radix cache without it being inadvertently turned off. This change improves the accuracy of configuration for various attention backend setups.

Highlights

  • Radix Cache Compatibility: The condition for disabling radix cache during deterministic inference has been updated to include the 'triton' attention backend as compatible. Previously, only 'fa3' was explicitly recognized.
  • Server Argument Logic: The server argument parsing logic in server_args.py now correctly identifies 'triton' as an attention backend that supports radix cache, preventing its unnecessary disabling.
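
The two highlights boil down to one predicate change in server_args.py. A simplified before/after sketch (standalone functions for illustration only; the real logic sits inside the ServerArgs class):

```python
# Before the fix (simplified): any backend other than fa3 lost radix cache
# under deterministic inference, including triton.
def radix_cache_disabled_before(attention_backend: str) -> bool:
    return attention_backend != "fa3"

# After the fix: triton is also recognized as radix-cache compatible.
def radix_cache_disabled_after(attention_backend: str) -> bool:
    return attention_backend not in ["fa3", "triton"]

assert radix_cache_disabled_before("triton")        # the bug: cache dropped
assert not radix_cache_disabled_after("triton")     # the fix: cache kept
```

Other backends still fall through to the disable path until support lands for them as well.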

@gemini-code-assist gemini-code-assist Bot left a comment
Code Review

This pull request correctly fixes the logic for handling deterministic inference with different attention backends. It adds 'triton' to the list of backends that support radix cache and ensures that radix cache is properly disabled for unsupported backends by uncommenting self.disable_radix_cache = True. The change is correct and improves the robustness of the server configuration.

-        if self.attention_backend != "fa3":
-            # self.disable_radix_cache = True
+        # Currently, only FA3, Triton supports radix cache. Support for other backends is in progress
+        if self.attention_backend not in ["fa3", "triton"]:
+            self.disable_radix_cache = True
@gemini-code-assist Bot commented (severity: medium)

For improved readability and maintainability, it would be beneficial to extract the hardcoded list ["fa3", "triton"] into a named constant, for example DETERMINISTIC_RADIX_CACHE_BACKENDS, defined alongside other similar constants at the top of the file. This would make the code cleaner and easier to update when more backends add support for this feature.
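
Following the reviewer's suggestion, the refactor could look like this (a sketch: the constant name is the reviewer's proposal, and the ServerArgs class here is a simplified stand-in for the real one in server_args.py):

```python
# Backends whose radix cache is known to work with deterministic inference.
# The name DETERMINISTIC_RADIX_CACHE_BACKENDS is the reviewer's proposal,
# not an existing constant in the codebase.
DETERMINISTIC_RADIX_CACHE_BACKENDS = ["fa3", "triton"]

class ServerArgs:
    """Simplified stand-in for the real ServerArgs."""

    def __init__(self, attention_backend: str,
                 enable_deterministic_inference: bool = True):
        self.attention_backend = attention_backend
        self.disable_radix_cache = False
        if enable_deterministic_inference and (
            self.attention_backend not in DETERMINISTIC_RADIX_CACHE_BACKENDS
        ):
            self.disable_radix_cache = True

assert ServerArgs("triton").disable_radix_cache is False
assert ServerArgs("other_backend").disable_radix_cache is True
```

Future backends then only need to be appended to the constant, with no change to the conditional itself.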

@zhyncs zhyncs merged commit c4af1f8 into sgl-project:bhe/1_stage_triton_kernel Oct 11, 2025
1 check passed