
fix: B200 uses E8M0 weight scale correctly for inference.#13067

Closed
fy1214 wants to merge 6 commits into sgl-project:main from fy1214:b200-fix

Conversation

@fy1214
Collaborator

@fy1214 fy1214 commented Nov 11, 2025

Motivation

Reason for this feature: DeepGEMM on the Blackwell architecture cannot use FP32 scales, because they produce NaN values; only E8M0 scales work.

Why this issue occurs:
In the DeepGEMM Blackwell implementation, smxx_layout.cuh contains:

    // Pack and store
    uint32_t packed = 0;
    packed |= (values[0] >> 23u);
    packed |= (values[1] >> 15u);
    packed |= (values[2] >>  7u);
    packed |= (values[3] <<  1u);

This packing keeps only the exponent bits of each scale, so it produces incorrect results when the scale values are not already rounded to E8M0.
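To see why the packing assumes E8M0 scales, here is a small Python sketch (not from the PR) of what `values[i] >> 23` does to a positive fp32 value: it keeps only the exponent byte, so any mantissa bits of a non-power-of-two scale are silently dropped.

```python
import struct

def e8m0_from_fp32_bits(x: float) -> int:
    # Mimic `values[i] >> 23` from smxx_layout.cuh: for a positive fp32
    # value this keeps only the 8 exponent bits (the E8M0 representation).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return (bits >> 23) & 0xFF

# A power-of-two scale round-trips exactly: 0.25 = 2**-2, biased exponent 125.
print(e8m0_from_fp32_bits(0.25))  # 125

# A non-power-of-two scale loses its mantissa bits, so 0.3 packs to the
# same exponent byte as 0.25 -- an implicit, lossy truncation.
print(e8m0_from_fp32_bits(0.3) == e8m0_from_fp32_bits(0.25))  # True
```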

Modifications

  1. Add requant_weight_ue8m0_inplace to the process_weights_after_loading method to requantize the weights and scales.
  2. Fix the scales in the test_block_fp8_deep_gemm_blackwell test case so the test is correct.
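For intuition, the requantization step can be sketched as below. This is a hypothetical simplification (the real requant_weight_ue8m0_inplace operates in-place on block-quantized fp8 tensors): round each scale up to the nearest power of two, then rescale the weights so the dequantized product is preserved.

```python
import math

def round_scale_ue8m0(s: float) -> float:
    # Round a positive block scale UP to the nearest power of two,
    # which is exactly representable as a UE8M0 exponent.
    return 2.0 ** math.ceil(math.log2(s))

def requant_block(w_q, s):
    # Hypothetical sketch: keep w * s invariant while forcing the
    # scale to a power of two (real code would re-cast to fp8).
    s_new = round_scale_ue8m0(s)
    w_new = [w * s / s_new for w in w_q]
    return w_new, s_new

w_new, s_new = requant_block([64.0, -32.0], 0.3)
print(s_new)             # 0.5 (nearest power of two above 0.3)
print(w_new[0] * s_new)  # 19.2, same as 64.0 * 0.3
```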

Accuracy Tests

prompt

before: (screenshot)
after: (screenshot)

testcase

before: (screenshot)
after: (screenshot)

benchmark/gsm8k/bench_sglang.py

before: (screenshot)
after: (screenshot)

reason for this feature: DeepGEMM on the Blackwell architecture can't use fp32 scales because they cause NaN; only E8M0 scales work.

# 1. add deepgemm_scale_ue8m0_supported method to check whether running on Blackwell
# 2. add requant_weight_ue8m0_inplace in process_weights_after_loading method to requantize the weight and scale
# 3. fix the test_block_fp8_deep_gemm_blackwell test case scales so the test is correct
@gemini-code-assist
Contributor

Summary of Changes

Hello @fy1214, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical issue where DeepGEMM operations on Blackwell GPUs would produce NaN values when using FP32 scales. The solution involves enforcing the use of E8M0 scales for DeepGEMM on Blackwell by introducing a new check and a weight requantization step during model loading. This ensures numerical stability and correct inference on the Blackwell architecture.

Highlights

  • Blackwell DeepGEMM Scale Enforcement: Introduced a mechanism to ensure DeepGEMM on Blackwell architecture uses E8M0 scales to prevent NaN issues, replacing FP32 scales.
  • Weight Requantization: Implemented requant_weight_ue8m0_inplace to dynamically requantize weights and their scales to E8M0 during the loading process, specifically for Blackwell.
  • Test Case Correction: Updated the test_block_fp8_deep_gemm_blackwell test to correctly pass quantized tensors and scales, aligning with the new E8M0 scaling requirements.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request correctly implements support for E8M0 weight scales on Blackwell architecture for DeepGEMM, which is necessary to avoid NaN issues. The changes involve adding a check for Blackwell GPUs, performing in-place weight requantization, and updating the corresponding test case. The implementation logic is sound. I've provided a couple of suggestions to improve code clarity and remove redundancy, which will enhance maintainability.

Comment thread python/sglang/srt/layers/quantization/fp8.py
Comment thread python/sglang/srt/layers/quantization/fp8_utils.py Outdated
)
return
else:
if deepgemm_scale_ue8m0_supported():
Collaborator


We can only do this requant when DeepGEMM is used on Blackwell and the layer has the attribute weight_block_size.
Ref: https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/deepseek_v2.py#L3305

Also, will the logic here conflict with the requant operations written in deepseek_v2.py?

Collaborator


Or we can move the util functions in deepseek_v2 here. This makes more sense to me
https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/deepseek_v2.py#L3405

Collaborator Author


Fixed it with a five-condition check:

if (
    deep_gemm_wrapper.ENABLE_JIT_DEEPGEMM
    and deep_gemm_wrapper.DEEPGEMM_SCALE_UE8M0
    and hasattr(self.quant_config, "weight_block_size")
    and self.quant_config.weight_block_size is not None
    and self.w8a8_block_fp8_linear
    is deepgemm_w8a8_block_fp8_linear_with_fallback
):

@samuellees
Contributor

Hi @fy1214 @Fridge003 , are we moving this PR forward? ^ ^

@b8zhong b8zhong added the run-ci label Nov 18, 2025
@Fridge003
Collaborator

Hi @fy1214 @Fridge003 , are we moving this PR forward? ^ ^

Hi @samuellees @fy1214, since this PR might affect some model's behavior (especially dpsk models), I need to take over and manually do some changes

@fy1214
Collaborator Author

fy1214 commented Nov 18, 2025

Hi @fy1214 @Fridge003 , are we moving this PR forward? ^ ^

Hi @samuellees @fy1214, since this PR might affect some model's behavior (especially dpsk models), I need to take over and manually do some changes

Sure, please let me know if you need any help.

@samuellees
Contributor

Hi @fy1214 @Fridge003 , are we moving this PR forward? ^ ^

Hi @samuellees @fy1214, since this PR might affect some model's behavior (especially dpsk models), I need to take over and manually do some changes

Got it, thanks a lot!

@Fridge003
Collaborator

@samuellees @fy1214 Moved the changes to #13601

@Fridge003 Fridge003 closed this Nov 19, 2025
6 participants