Fix quant kernel test errors and benchmark wrong output speeds #7604

Merged
zhyncs merged 24 commits into sgl-project:main from fzyzcjy:feat/refactor_quant_bench_test
Aug 21, 2025

Conversation

@fzyzcjy (Collaborator) commented Jun 27, 2025

Motivation

Also cleans up the code a bit.

Note: depends on #7603.

Modifications

Checklist

@gemini-code-assist (Contributor, bot) left a comment


Summary of Changes

Hello @fzyzcjy, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on enhancing the accuracy of quantization kernel benchmarks and improving the robustness of related tests. I've introduced a dedicated profiling utility, unified the API for 8-bit quantization, and refined the testing methodology for floating-point 8-bit tensors. These changes aim to provide more reliable performance metrics and ensure the correctness of quantization operations.

Highlights

  • Benchmarking Utilities: I've introduced a new bench_kineto utility in python/sglang/srt/bench_utils.py (lines 45-137). This function leverages torch.profiler.profile to accurately measure CUDA kernel execution times, including options for L2 cache flushing and output suppression. This directly addresses the 'benchmark wrong output speeds' issue (see the timing sketch after this list).
  • Unified Quantization API: I've added new dispatcher functions, per_token_group_quant_8bit (lines 273-303) and sglang_per_token_group_quant_8bit (lines 362-393), to python/sglang/srt/layers/quantization/fp8_kernel.py. These functions now serve as a unified entry point for both FP8 and INT8 per-token group quantization, simplifying future integration and maintenance.
  • Robust FP8 Testing: To improve the reliability of quantization tests, I've added assert_fp8_all_close (lines 148-166) to python/sglang/srt/layers/quantization/utils.py. This utility provides a more nuanced comparison for FP8 tensors, checking for sign consistency and acceptable ULP differences, which is crucial for floating-point precision.
  • Benchmark and Test Refactoring: The benchmark script (sgl-kernel/benchmark/bench_per_token_group_quant_8bit.py) has been updated to utilize the new bench_kineto utility (lines 76-96) and now includes expanded test configurations for various quantization flags. Similarly, the test suite (sgl-kernel/tests/test_per_token_group_quant_8bit.py) has been refactored to use the new unified API and the assert_fp8_all_close function (line 82), removing redundant Triton kernel definitions and improving test coverage and accuracy.
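
To make the profiler-based approach concrete, below is a minimal sketch of Kineto-based kernel timing. This is an illustration of the technique only, not the actual bench_kineto in python/sglang/srt/bench_utils.py; the helper name, flush-buffer size, and warmup/iteration defaults are assumptions:

```python
import torch
from torch.profiler import ProfilerActivity, profile


def bench_kernel_kineto(fn, kernel_keyword: str, warmup: int = 5, iters: int = 20) -> float:
    """Return the average device-side time (microseconds) of kernels whose
    name contains `kernel_keyword`, measured from a Kineto profiler trace."""
    # Buffer comfortably larger than typical L2 caches; zeroing it between
    # iterations evicts cached lines so every run sees cold caches.
    l2_flush = torch.empty(256 * 1024 * 1024 // 4, dtype=torch.float32, device="cuda")

    for _ in range(warmup):
        fn()
    torch.cuda.synchronize()

    with profile(activities=[ProfilerActivity.CUDA]) as prof:
        for _ in range(iters):
            l2_flush.zero_()  # flush L2 so the timed kernel is not cache-warm
            fn()
    torch.cuda.synchronize()

    # Sum device-side event time for the kernel of interest across all
    # iterations (the flush kernel is excluded by the keyword filter).
    total_us = sum(
        evt.cuda_time_total for evt in prof.key_averages() if kernel_keyword in evt.key
    )
    return total_us / iters
```

Reading device-side event times from the trace, rather than wrapping the launch in a wall-clock timer, is what fixes the "wrong output speeds" failure mode: an asynchronous CUDA launch returns almost immediately, so wall-clock numbers mostly measure host overhead and can be wildly optimistic.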
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (Contributor, bot) left a comment


Code Review

The pull request introduces new functions for 8-bit quantization, including FP8 and INT8, and integrates them into the benchmarking and testing frameworks. The code includes optimizations for different hardware configurations and introduces helper functions for performance analysis. The review identifies areas for improvement in code maintainability, test coverage, and threshold configuration.
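
For orientation before the inline comments, here is a hypothetical pure-PyTorch reference of what a unified per-token-group 8-bit entry point can look like, dispatching on the destination dtype the way per_token_group_quant_8bit is described above. The signature and body are illustrative assumptions, not the actual Triton/CUDA kernels in fp8_kernel.py:

```python
import torch


def per_token_group_quant_8bit_ref(
    x: torch.Tensor, group_size: int, dst_dtype: torch.dtype
):
    # Illustrative reference only: one entry point serving both FP8 and INT8,
    # mirroring the dispatch role described for per_token_group_quant_8bit.
    assert x.shape[-1] % group_size == 0
    if dst_dtype == torch.float8_e4m3fn:
        dst_max = torch.finfo(dst_dtype).max  # 448.0 for e4m3fn
    elif dst_dtype == torch.int8:
        dst_max = float(torch.iinfo(dst_dtype).max)  # 127
    else:
        raise ValueError(f"unsupported 8-bit dtype: {dst_dtype}")

    # One scale per contiguous group of `group_size` elements on the last dim.
    groups = x.float().reshape(*x.shape[:-1], -1, group_size)
    scale = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-10) / dst_max
    q = (groups / scale).clamp(-dst_max, dst_max)
    if dst_dtype == torch.int8:
        q = q.round()  # integer quantization rounds; FP8 cast rounds itself
    return q.to(dst_dtype).reshape_as(x), scale.squeeze(-1)
```

Folding FP8 and INT8 behind one dispatcher keeps call sites identical and lets the benchmark and test matrices iterate over a simple cross-product of dst_dtype and quantization flags.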

Comment on lines +58 to 60
if flags["scale_ue8m0"] and ((group_size != 128) or (hidden_dim % 512 != 0)):
    pytest.skip()
    return


Severity: medium

This conditional statement skips the test if scale_ue8m0 is true and either group_size is not 128 or hidden_dim is not divisible by 512. This could lead to incomplete testing of the functionality. It's important to ensure that all possible configurations are tested, or provide a clear justification for skipping certain configurations. Consider adding a comment explaining why these specific configurations are skipped, or modify the test to accommodate them.

Suggested change
 if flags["scale_ue8m0"] and ((group_size != 128) or (hidden_dim % 512 != 0)):
-    pytest.skip()
+    pytest.skip("Skipping due to limitations with scale_ue8m0 and group/hidden dim")
     return

Comment on lines +77 to +79
if flags["scale_ue8m0"] and group_size != 128:
    return



Severity: medium

This conditional statement skips the benchmark if scale_ue8m0 is true and group_size is not 128. This could lead to incomplete benchmarking of the functionality. It's important to ensure that all possible configurations are benchmarked, or provide a clear justification for skipping certain configurations. Consider adding a comment explaining why these specific configurations are skipped, or modify the benchmark to accommodate them.

Suggested change
 if flags["scale_ue8m0"] and group_size != 128:
-    return
+    return  # Skip, not supported

Comment on lines +148 to +166
def assert_fp8_all_close(a: torch.Tensor, b: torch.Tensor):
    assert a.shape == b.shape
    assert a.dtype == b.dtype == torch.float8_e4m3fn

    a_u8 = a.view(torch.uint8)
    b_u8 = b.view(torch.uint8)
    diff_u8 = (a_u8.to(torch.int16) - b_u8.to(torch.int16)).abs()

    numel = a.numel()

    count_diff_sign = ((a_u8 >= 0) & (b_u8 < 0)).sum().item()
    count_tiny_diff = (diff_u8 >= 1).sum().item()
    count_large_diff = (diff_u8 >= 2).sum().item()

    assert (
        (count_diff_sign == 0)
        and (count_tiny_diff / numel < 0.005)
        and (count_large_diff == 0)
    ), f"{count_diff_sign=} {count_tiny_diff=} {count_large_diff=} {numel=}"

Severity: medium

The assert_fp8_all_close function uses hardcoded thresholds (0.005 for tiny diff ratio and 2 for large diff). These values might not be appropriate for all scenarios and could lead to false positives or negatives. Consider making these thresholds configurable or deriving them based on the input data's characteristics to improve the robustness of the assertion.

Suggested change
-def assert_fp8_all_close(a: torch.Tensor, b: torch.Tensor):
+def assert_fp8_all_close(a: torch.Tensor, b: torch.Tensor, tiny_diff_ratio_threshold=0.005, large_diff_threshold=2):
     assert a.shape == b.shape
     assert a.dtype == b.dtype == torch.float8_e4m3fn
     a_u8 = a.view(torch.uint8)
     b_u8 = b.view(torch.uint8)
     diff_u8 = (a_u8.to(torch.int16) - b_u8.to(torch.int16)).abs()
     numel = a.numel()
     count_diff_sign = ((a_u8 >= 0) & (b_u8 < 0)).sum().item()
     count_tiny_diff = (diff_u8 >= 1).sum().item()
     count_large_diff = (diff_u8 >= 2).sum().item()
     assert (
         (count_diff_sign == 0)
-        and (count_tiny_diff / numel < 0.005)
-        and (count_large_diff == 0)
+        and (count_tiny_diff / numel < tiny_diff_ratio_threshold)
+        and (count_large_diff < large_diff_threshold)
     ), f"{count_diff_sign=} {count_tiny_diff=} {count_large_diff=} {numel=}"
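
One note on the units here: for float8_e4m3fn values of the same sign, bit patterns are monotonically ordered, so differencing the uint8 reinterpretations counts ULPs directly, which is why a delta of 1 is treated as a "tiny diff". A small self-contained usage sketch (the import path follows the utils.py file discussed above; the tensor contents are arbitrary):

```python
import torch

from sglang.srt.layers.quantization.utils import assert_fp8_all_close

# Build two FP8 tensors that differ by exactly one ULP in a single position.
a = torch.randn(4, 256).clamp(-2, 2).to(torch.float8_e4m3fn)
b_u8 = a.view(torch.uint8).clone()
b_u8[0, 0] += 1  # nudge one element by one bit pattern (one ULP)
b = b_u8.view(torch.float8_e4m3fn)

# Passes: no sign flips, tiny-diff ratio 1/1024 < 0.005, no diffs >= 2 ULP.
assert_fp8_all_close(a, b)
```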

# Conflicts:
#	python/sglang/srt/layers/quantization/fp8_kernel.py
#	sgl-kernel/benchmark/bench_per_token_group_quant_8bit.py
@fzyzcjy fzyzcjy requested a review from a team July 28, 2025 23:58
@fzyzcjy fzyzcjy requested a review from kushanam as a code owner July 29, 2025 01:57
Comment thread on python/sglang/srt/layers/quantization/utils.py
@fzyzcjy fzyzcjy requested a review from Edwardf0t1 as a code owner August 14, 2025 13:40
@fzyzcjy (Collaborator, Author) commented Aug 16, 2025

@Alcanderian could you please check again and see whether the PR can be approved or needs further changes/discussion?

@zhyncs zhyncs merged commit e85cb1c into sgl-project:main Aug 21, 2025
66 of 78 checks passed
MahmoudAshraf97 pushed a commit to MahmoudAshraf97/sglang that referenced this pull request Sep 8, 2025
