Fix quant kernel test errors and benchmark wrong output speeds #7604
zhyncs merged 24 commits into sgl-project:main from
Conversation
Summary of Changes
Hello @fzyzcjy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request primarily focuses on enhancing the accuracy of quantization kernel benchmarks and improving the robustness of related tests. I've introduced a dedicated profiling utility, unified the API for 8-bit quantization, and refined the testing methodology for floating-point 8-bit tensors. These changes aim to provide more reliable performance metrics and ensure the correctness of quantization operations.
Highlights
- **Benchmarking Utilities:** I've introduced a new `bench_kineto` utility in `python/sglang/srt/bench_utils.py` (lines 45-137). This function leverages `torch.profiler.profile` to accurately measure CUDA kernel execution times, including options for L2 cache flushing and output suppression. This directly addresses the 'benchmark wrong output speeds' issue.
- **Unified Quantization API:** I've added new dispatcher functions, `per_token_group_quant_8bit` (lines 273-303) and `sglang_per_token_group_quant_8bit` (lines 362-393), to `python/sglang/srt/layers/quantization/fp8_kernel.py`. These functions now serve as a unified entry point for both FP8 and INT8 per-token group quantization, simplifying future integration and maintenance.
- **Robust FP8 Testing:** To improve the reliability of quantization tests, I've added `assert_fp8_all_close` (lines 148-166) to `python/sglang/srt/layers/quantization/utils.py`. This utility provides a more nuanced comparison for FP8 tensors, checking for sign consistency and acceptable ULP differences, which is crucial for floating-point precision.
- **Benchmark and Test Refactoring:** The benchmark script (`sgl-kernel/benchmark/bench_per_token_group_quant_8bit.py`) has been updated to utilize the new `bench_kineto` utility (lines 76-96) and now includes expanded test configurations for various quantization flags. Similarly, the test suite (`sgl-kernel/tests/test_per_token_group_quant_8bit.py`) has been refactored to use the new unified API and the `assert_fp8_all_close` function (line 82), removing redundant Triton kernel definitions and improving test coverage and accuracy.
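To make the "unified entry point" idea concrete, here is a hedged, torch-free sketch of per-token-group 8-bit quantization dispatch. The list-based "tensors", the `dst_dtype` strings, and the simplified FP8 handling are illustrative assumptions, not the actual `fp8_kernel.py` implementation, which operates on CUDA tensors:

```python
def per_token_group_quant_8bit(row, group_size, dst_dtype):
    """Hypothetical sketch: quantize `row` in groups of `group_size`,
    producing one scale per group. Dispatches on `dst_dtype`."""
    assert len(row) % group_size == 0
    # Format maxima: float8_e4m3fn tops out at 448.0, int8 at 127.
    qmax = 448.0 if dst_dtype == "fp8_e4m3" else 127.0
    quantized, scales = [], []
    for i in range(0, len(row), group_size):
        group = row[i : i + group_size]
        max_abs = max(abs(v) for v in group) or 1e-10  # avoid div-by-zero
        scales.append(max_abs / qmax)  # dequant: x ≈ q * scale
        for v in group:
            scaled = v * qmax / max_abs
            # INT8 rounds to integers; the FP8 cast is elided in this sketch.
            quantized.append(round(scaled) if dst_dtype == "int8" else scaled)
    return quantized, scales
```

A single dispatcher like this lets callers pick FP8 or INT8 through one argument instead of maintaining parallel code paths.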
Code Review
The pull request introduces new functions for 8-bit quantization, including FP8 and INT8, and integrates them into the benchmarking and testing frameworks. The code includes optimizations for different hardware configurations and introduces helper functions for performance analysis. The review identifies areas for improvement in code maintainability, test coverage, and threshold configuration.
```python
if flags["scale_ue8m0"] and ((group_size != 128) or (hidden_dim % 512 != 0)):
    pytest.skip()
    return
```
This conditional statement skips the test if scale_ue8m0 is true and either group_size is not 128 or hidden_dim is not divisible by 512. This could lead to incomplete testing of the functionality. It's important to ensure that all possible configurations are tested, or provide a clear justification for skipping certain configurations. Consider adding a comment explaining why these specific configurations are skipped, or modify the test to accommodate them.
Suggested change:

```python
if flags["scale_ue8m0"] and ((group_size != 128) or (hidden_dim % 512 != 0)):
    pytest.skip("Skipping due to limitations with scale_ue8m0 and group/hidden dim")
    return
```
```python
if flags["scale_ue8m0"] and group_size != 128:
    return
```
This conditional statement skips the benchmark if scale_ue8m0 is true and group_size is not 128. This could lead to incomplete benchmarking of the functionality. It's important to ensure that all possible configurations are benchmarked, or provide a clear justification for skipping certain configurations. Consider adding a comment explaining why these specific configurations are skipped, or modify the benchmark to accommodate them.
Suggested change:

```python
if flags["scale_ue8m0"] and group_size != 128:
    return  # Skip, not supported
```
```python
def assert_fp8_all_close(a: torch.Tensor, b: torch.Tensor):
    assert a.shape == b.shape
    assert a.dtype == b.dtype == torch.float8_e4m3fn

    a_u8 = a.view(torch.uint8)
    b_u8 = b.view(torch.uint8)
    diff_u8 = (a_u8.to(torch.int16) - b_u8.to(torch.int16)).abs()

    numel = a.numel()

    count_diff_sign = ((a_u8 >= 0) & (b_u8 < 0)).sum().item()
    count_tiny_diff = (diff_u8 >= 1).sum().item()
    count_large_diff = (diff_u8 >= 2).sum().item()

    assert (
        (count_diff_sign == 0)
        and (count_tiny_diff / numel < 0.005)
        and (count_large_diff == 0)
    ), f"{count_diff_sign=} {count_tiny_diff=} {count_large_diff=} {numel=}"
```
The assert_fp8_all_close function uses hardcoded thresholds (0.005 for tiny diff ratio and 2 for large diff). These values might not be appropriate for all scenarios and could lead to false positives or negatives. Consider making these thresholds configurable or deriving them based on the input data's characteristics to improve the robustness of the assertion.
Suggested change:

```python
def assert_fp8_all_close(
    a: torch.Tensor,
    b: torch.Tensor,
    tiny_diff_ratio_threshold=0.005,
    large_diff_threshold=2,
):
    assert a.shape == b.shape
    assert a.dtype == b.dtype == torch.float8_e4m3fn
    a_u8 = a.view(torch.uint8)
    b_u8 = b.view(torch.uint8)
    diff_u8 = (a_u8.to(torch.int16) - b_u8.to(torch.int16)).abs()
    numel = a.numel()
    count_diff_sign = ((a_u8 >= 0) & (b_u8 < 0)).sum().item()
    count_tiny_diff = (diff_u8 >= 1).sum().item()
    count_large_diff = (diff_u8 >= 2).sum().item()
    assert (
        (count_diff_sign == 0)
        and (count_tiny_diff / numel < tiny_diff_ratio_threshold)
        and (count_large_diff < large_diff_threshold)
    ), f"{count_diff_sign=} {count_tiny_diff=} {count_large_diff=} {numel=}"
```
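The byte-level comparison behind `assert_fp8_all_close` can be illustrated without torch. The helper below (`fp8_ulp_stats` is a hypothetical name, not part of the PR) works on plain Python ints standing in for `float8_e4m3fn` bit patterns: for same-sign values in an 8-bit float format, adjacent representable values differ by exactly 1 when reinterpreted as unsigned bytes, so the byte difference approximates the ULP distance.

```python
def fp8_ulp_stats(a_bytes, b_bytes, tiny_ratio=0.005):
    """Compare two lists of 8-bit float bit patterns (ints in 0..255).
    Illustrative sketch of a ULP-style FP8 closeness check."""
    assert len(a_bytes) == len(b_bytes) and a_bytes
    # Bit 7 is the sign bit in e4m3 (1 sign | 4 exponent | 3 mantissa bits).
    diff_sign = sum((x >> 7) != (y >> 7) for x, y in zip(a_bytes, b_bytes))
    diffs = [abs(x - y) for x, y in zip(a_bytes, b_bytes)]
    tiny = sum(d >= 1 for d in diffs)   # elements differing by >= 1 ULP
    large = sum(d >= 2 for d in diffs)  # elements differing by >= 2 ULPs
    ok = diff_sign == 0 and tiny / len(diffs) < tiny_ratio and large == 0
    return ok, diff_sign, tiny, large
```

Under this scheme, identical patterns pass, any sign flip fails outright, and small 1-ULP differences are tolerated only up to the configurable ratio, which is the same shape of policy the review suggests parameterizing.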
# Conflicts:
#	python/sglang/srt/layers/quantization/fp8_kernel.py
#	sgl-kernel/benchmark/bench_per_token_group_quant_8bit.py
…zyzcjy/sglang into feat/refactor_quant_bench_test
@Alcanderian could you please check again and see whether the PR can be approved or needs further changes/discussion
Motivation
Also clean up the code a bit.
Note: depends on #7603.
Modifications
Checklist