
bugfix: fixed cutlass fused moe usage of FP4QuantizationSFLayout::SWIZZLED#1371

Merged
yzh119 merged 1 commit into flashinfer-ai:main from yzh119:bugfix-cutlass-fused-moe-swizzled
Aug 3, 2025

Conversation

Collaborator

@yzh119 yzh119 commented Aug 3, 2025

📌 Description

The cutlass fused MoE modules were broken after #1355 because the structure of FP4QuantizationSFLayout changed. This PR fixes the issue.
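As background, a minimal sketch of the breaking change; the enum members shown are simplified assumptions inferred from this thread, not the actual FlashInfer header. Call sites that referenced the old single SWIZZLED member must now name a size-specific variant:

```cpp
// Simplified sketch of the layout enum before and after #1355; the
// member list is an illustrative assumption, not the real definition.
//
// Before: enum class FP4QuantizationSFLayout { SWIZZLED, LINEAR };
// After (the swizzled layout is now size-specific):
enum class FP4QuantizationSFLayout { SWIZZLED_128x4, LINEAR };

// The fix in this PR updates call sites to name the new member explicitly.
// moe_act_sf_layout is a hypothetical helper, not a FlashInfer function.
FP4QuantizationSFLayout moe_act_sf_layout() {
  return FP4QuantizationSFLayout::SWIZZLED_128x4;
}
```

Code that still spelled the removed SWIZZLED member stopped compiling, which is the breakage this PR repairs.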

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

cc @wenscarl @ttyio

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Summary of Changes

Hello @yzh119, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've reviewed this pull request, which aims to restore the functionality of cutlass fused moe modules. The issue stemmed from an incompatibility with a recent update to the FP4QuantizationSFLayout enum. This PR resolves the problem by adapting the code to use the correct, updated enum member, ensuring the modules operate as intended.

Highlights

  • Bug Fix: This pull request addresses a regression that caused cutlass fused moe modules to break following a recent change in the FP4QuantizationSFLayout structure.
  • Code Adaptation: The core change involves updating the usage of FP4QuantizationSFLayout::SWIZZLED to the more specific FP4QuantizationSFLayout::SWIZZLED_128x4 in relevant CUDA kernel functions to align with the updated data structure.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in issue comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request fixes a bug in cutlass fused moe modules related to changes in the FP4QuantizationSFLayout structure. The fix involves replacing FP4QuantizationSFLayout::SWIZZLED with FP4QuantizationSFLayout::SWIZZLED_128x4. To improve the code's robustness, I've suggested using conditional logic based on template parameters instead of hardcoding the layout.

Comment on lines 983 to +986
  auto sf_out = cvt_quant_to_fp4_get_sf_out_offset<TmaWarpSpecializedGroupedGemmInput::ElementSF,
                                                   NumThreadsPerSF, VecSize>(
      std::nullopt /* batchIdx */, token_id - num_tokens_before_expert, elem_idx,
-     std::nullopt /* numRows */, num_cols, act_sf_expert, FP4QuantizationSFLayout::SWIZZLED);
+     std::nullopt /* numRows */, num_cols, act_sf_expert, FP4QuantizationSFLayout::SWIZZLED_128x4);

Severity: medium

The layout FP4QuantizationSFLayout::SWIZZLED_128x4 is hardcoded [1]. While this fixes the immediate bug, it might introduce brittleness. If this templated function quantizePackedFPXValue is ever instantiated with a configuration that requires a different swizzled layout (e.g., for a different VecSize), this will fail.

To improve robustness, consider selecting the appropriate layout based on the VecSize template parameter instead of hardcoding it.

auto sf_out = cvt_quant_to_fp4_get_sf_out_offset<TmaWarpSpecializedGroupedGemmInput::ElementSF,
                                                   NumThreadsPerSF, VecSize>(
      std::nullopt /* batchIdx */, token_id - num_tokens_before_expert, elem_idx,
      std::nullopt /* numRows */, num_cols, act_sf_expert,
      (VecSize == 128) ? FP4QuantizationSFLayout::SWIZZLED_128x4 : FP4QuantizationSFLayout::SWIZZLED);

Style Guide References

Footnotes

  1. Avoid hardcoding values. Use conditional logic based on template parameters to select appropriate values.
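The reviewer's suggestion can also be written with if constexpr so the branch is resolved at compile time. This is a sketch under stated assumptions: the enum members and the VecSize-to-layout mapping below are illustrative placeholders, not FlashInfer's actual definitions.

```cpp
// Illustrative enum; the real FP4QuantizationSFLayout has more members.
enum class FP4QuantizationSFLayout { SWIZZLED_128x4, SWIZZLED_8x4 };

// Compile-time layout selection keyed on the VecSize template parameter,
// as the review suggests. The mapping below is a placeholder assumption.
template <int VecSize>
constexpr FP4QuantizationSFLayout select_sf_layout() {
  if constexpr (VecSize == 16) {
    return FP4QuantizationSFLayout::SWIZZLED_128x4;
  } else {
    return FP4QuantizationSFLayout::SWIZZLED_8x4;
  }
}
```

Each instantiation then bakes in exactly one layout constant, keeping call sites free of hardcoded layout names.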

Comment on lines 1023 to +1026
  auto sf_out = cvt_quant_to_fp4_get_sf_out_offset<TmaWarpSpecializedGroupedGemmInput::ElementSF,
                                                   NumThreadsPerSF, VecSize>(
      std::nullopt /* batchIdx */, token_id - num_tokens_before_expert, elem_idx,
-     std::nullopt /* numRows */, num_cols, act_sf_expert, FP4QuantizationSFLayout::SWIZZLED);
+     std::nullopt /* numRows */, num_cols, act_sf_expert, FP4QuantizationSFLayout::SWIZZLED_128x4);

Severity: medium

The layout FP4QuantizationSFLayout::SWIZZLED_128x4 is hardcoded [1]. While this fixes the immediate bug, it might introduce brittleness. If this templated function writeSF is ever instantiated with a configuration that requires a different swizzled layout (e.g., for a different VecSize), this will fail.

To improve robustness, consider selecting the appropriate layout based on the VecSize template parameter instead of hardcoding it.

 auto sf_out = cvt_quant_to_fp4_get_sf_out_offset<TmaWarpSpecializedGroupedGemmInput::ElementSF,
                                                   NumThreadsPerSF, VecSize>(
      std::nullopt /* batchIdx */, token_id - num_tokens_before_expert, elem_idx,
      std::nullopt /* numRows */, num_cols, act_sf_expert,
       (VecSize == 128) ? FP4QuantizationSFLayout::SWIZZLED_128x4 : FP4QuantizationSFLayout::SWIZZLED);

Style Guide References

Footnotes

  1. Avoid hardcoding values. Use conditional logic based on template parameters to select appropriate values.

Comment on lines 1031 to +1034
                                             NumThreadsPerSF, VecSize>(
              std::nullopt /* batchIdx */, source_token_id, elem_idx, std::nullopt /* numRows */,
              num_cols, const_cast<TmaWarpSpecializedGroupedGemmInput::ElementSF*>(input_sf),
-             FP4QuantizationSFLayout::SWIZZLED);
+             FP4QuantizationSFLayout::SWIZZLED_128x4);

Severity: medium

The layout FP4QuantizationSFLayout::SWIZZLED_128x4 is hardcoded [1]. While this fixes the immediate bug, it might introduce brittleness. If the templated call that computes sf_in is ever instantiated with a configuration that requires a different swizzled layout (e.g., for a different VecSize), this will fail.

To improve robustness, consider selecting the appropriate layout based on the VecSize template parameter instead of hardcoding it.

 auto const sf_in =
          cvt_quant_to_fp4_get_sf_out_offset<TmaWarpSpecializedGroupedGemmInput::ElementSF,
                                             NumThreadsPerSF, VecSize>(
              std::nullopt /* batchIdx */, source_token_id, elem_idx, std::nullopt /* numRows */,
              num_cols, const_cast<TmaWarpSpecializedGroupedGemmInput::ElementSF*>(input_sf),
              (VecSize == 128) ? FP4QuantizationSFLayout::SWIZZLED_128x4 : FP4QuantizationSFLayout::SWIZZLED);

Style Guide References

Footnotes

  1. Avoid hardcoding values. Use conditional logic based on template parameters to select appropriate values.

@yzh119 yzh119 merged commit 22a62ea into flashinfer-ai:main Aug 3, 2025
2 checks passed
yzh119 added a commit that referenced this pull request Aug 21, 2025

## 📌 Description

Currently comm modules and MoE modules are not compiled in AOT
unittests, which results in many issues such as #1371, #1410, and
#1530.
This PR adds these modules to aot UT to make sure we capture the
potential compilation error of these modules.

Changes from #1531 (a bugfix) were ported into this PR so they can be tested together.

## 🔍 Related Issues

#1371, #1410, #1530, #1531

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull
request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [x] I have installed `pre-commit` by running `pip install pre-commit`
(or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files`
and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the
pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [x] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

## Reviewer Notes

cc @nvpohanh @weireweire

---------

Co-authored-by: Yaxing Cai <yaxingc@nvidia.com>
Co-authored-by: Zihao <zihaoy@nvidia.com>
Co-authored-by: Zihao Ye <expye@outlook.com>