fix: B200 uses E8M0 weight scale correctly for inference. #13067
fy1214 wants to merge 6 commits into sgl-project:main from
Conversation
Reason for this feature: DeepGEMM on the Blackwell architecture can't use FP32 scales because they cause NaN; it can only use E8M0 scales.
1. Add a deepgemm_scale_ue8m0_supported method to check whether the GPU is Blackwell.
2. Add requant_weight_ue8m0_inplace in the process_weights_after_loading method to requantize the weight and scale.
3. Fix the scale in the test_block_fp8_deep_gemm_blackwell test case so the test is correct.
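A minimal sketch of what such a capability check might look like. This is an assumption for illustration: the real helper in sglang likely takes no argument and queries the current CUDA device itself, whereas this sketch accepts the compute capability explicitly so it is self-contained.

```python
def deepgemm_scale_ue8m0_supported(compute_capability: tuple[int, int]) -> bool:
    """Return True when DeepGEMM requires UE8M0 (power-of-two) weight
    scales, i.e. on Blackwell-class GPUs.

    Blackwell (e.g. B200, SM100) reports compute capability 10.x.
    """
    major, _ = compute_capability
    return major >= 10

# Example: B200 is (10, 0); Hopper H100 is (9, 0).
print(deepgemm_scale_ue8m0_supported((10, 0)))  # True
print(deepgemm_scale_ue8m0_supported((9, 0)))   # False
```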
Summary of Changes (Gemini Code Assist): This pull request resolves a critical issue where DeepGEMM operations on Blackwell GPUs would produce NaN values when using FP32 scales. The solution enforces E8M0 scales for DeepGEMM on Blackwell by introducing a new check and a weight requantization step during model loading. This ensures numerical stability and correct inference on the Blackwell architecture.
Code Review
This pull request correctly implements support for E8M0 weight scales on Blackwell architecture for DeepGEMM, which is necessary to avoid NaN issues. The changes involve adding a check for Blackwell GPUs, performing in-place weight requantization, and updating the corresponding test case. The implementation logic is sound. I've provided a couple of suggestions to improve code clarity and remove redundancy, which will enhance maintainability.
    )
    return
else:
    if deepgemm_scale_ue8m0_supported():
We can only do this requant when DeepGEMM is used on Blackwell and the layer has the attribute weight_block_size.
Ref: https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/deepseek_v2.py#L3305
Also, will the logic here conflict with the requant operations written in deepseek_v2.py?
Or we could move the util functions from deepseek_v2 here. That makes more sense to me.
https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/deepseek_v2.py#L3405
Fixed it with a five-condition check:
if (
    deep_gemm_wrapper.ENABLE_JIT_DEEPGEMM
    and deep_gemm_wrapper.DEEPGEMM_SCALE_UE8M0
    and hasattr(self.quant_config, "weight_block_size")
    and self.quant_config.weight_block_size is not None
    and self.w8a8_block_fp8_linear is deepgemm_w8a8_block_fp8_linear_with_fallback
):
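The requantization step guarded by this check can be sketched as follows. This is a simplified illustration, not the actual requant_weight_ue8m0_inplace implementation (which operates in place on FP8 tensors): it shows the core idea of rounding each block scale up to a UE8M0-representable power of two while keeping weight * scale invariant.

```python
import math


def round_scale_up_ue8m0(scale: float) -> float:
    """Round a positive FP32 scale up to the nearest power of two.

    Powers of two are exactly representable in UE8M0 (8-bit exponent,
    no mantissa), which is what DeepGEMM on Blackwell requires.
    """
    return 2.0 ** math.ceil(math.log2(scale))


def requant_block(weight: list[float], scale: float) -> tuple[list[float], float]:
    """Illustrative per-block requant: force the scale onto the UE8M0
    grid and compensate the weights so the dequantized values match."""
    new_scale = round_scale_up_ue8m0(scale)
    ratio = scale / new_scale  # <= 1.0, so compensated weights shrink
    return [w * ratio for w in weight], new_scale
```

Rounding up (rather than to nearest) keeps the compensated weights at or below their original magnitude, so they cannot overflow the FP8 range.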
…ocess_weights_after_loading
Hi @fy1214 @Fridge003, are we moving this PR forward? ^ ^
Hi @samuellees @fy1214, since this PR might affect some models' behavior (especially dpsk models), I need to take over and manually make some changes.
Sure, please let me know if you need any help.
Got it, thanks a lot!
@samuellees @fy1214 Moved the changes to #13601 |
Motivation
Reason for this feature: DeepGEMM on the Blackwell architecture can't use FP32 scales because they cause NaN; it can only use E8M0 scales.
Why this issue occurs:
In the DeepGEMM Blackwell implementation, the scale-layout code in smxx_layout.cuh produces incorrect results when the scale values are not rounded to E8M0.
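To see why a non-rounded scale goes wrong, note that UE8M0 keeps only an 8-bit exponent. A sketch, assuming the standard IEEE-754 single-precision bit layout: packing an arbitrary FP32 scale into an exponent-only format drops the mantissa, so any scale that is not a power of two silently decodes to a different value.

```python
import struct


def fp32_exponent_bits(x: float) -> int:
    """Extract the 8 biased exponent bits from an FP32 value."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return (bits >> 23) & 0xFF


def ue8m0_decode(e: int) -> float:
    """Decode a UE8M0 byte: a biased power-of-two, with no mantissa."""
    return 2.0 ** (e - 127)


# 0.75 = 1.5 * 2**-1; keeping only the exponent loses the 1.5x mantissa:
print(ue8m0_decode(fp32_exponent_bits(0.75)))  # 0.5, not 0.75
# A power-of-two scale round-trips exactly:
print(ue8m0_decode(fp32_exponent_bits(2.0)))   # 2.0
```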
Modifications
1. Add a deepgemm_scale_ue8m0_supported check for Blackwell GPUs.
2. Add requant_weight_ue8m0_inplace in process_weights_after_loading to requantize the weight and scale to E8M0.
3. Fix the scale in the test_block_fp8_deep_gemm_blackwell test case.
Accuracy Tests
prompt
before: (screenshot)
after: (screenshot)

testcase
before: (screenshot)
after: (screenshot)

benchmark/gsm8k/bench_sglang.py
before: (screenshot)
after: (screenshot)