Fix MXFP8 fc1 weight shape check for non-gated activations #2753

he-yufeng wants to merge 3 commits into flashinfer-ai:main
Conversation
The `fc1_weight_block` shape check hardcodes a factor of 2 for the N dimension (`inter_size * 2`), assuming a gated activation where the gate and up weights are concatenated. For non-gated activations the factor should be 1. Use `isGatedActivation(base_activation_type)` to select the correct multiplier. This matches the existing pattern at line 1131 (the NVFP4 path), which already guards the `* 2` with an `isGatedActivation` check. Fixes flashinfer-ai#2731
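To make the shape rule concrete, here is a minimal standalone sketch; the `Activation` enum, `is_gated`, and `expected_fc1_n` are hypothetical stand-ins for illustration, not flashinfer's actual types or API:

```cpp
#include <cassert>
#include <cstdint>

// Gated activations (e.g. SwiGLU/GeGLU) concatenate the gate and up
// projections into fc1, doubling its N dimension; non-gated ones do not.
enum class Activation { Relu2, Gelu, Silu, Swiglu, Geglu };

bool is_gated(Activation act) {
  return act == Activation::Swiglu || act == Activation::Geglu;
}

std::int64_t expected_fc1_n(std::int64_t inter_size, Activation act) {
  return inter_size * (is_gated(act) ? 2 : 1);
}

int main() {
  // Gated: the old hardcoded "* 2" and the new multiplier agree.
  assert(expected_fc1_n(4096, Activation::Swiglu) == 8192);
  // Non-gated: the hardcoded check demanded 8192 and wrongly rejected a
  // correctly shaped fc1 with N == inter_size; the multiplier fixes that.
  assert(expected_fc1_n(4096, Activation::Relu2) == 4096);
  return 0;
}
```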
CodeRabbit walkthrough: replaced the hard-coded 2× multiplier with a conditional multiplier based on the activation type.
Code Review
This pull request correctly fixes a bug in the MXFP8 quantization path where the fc1 weight shape check failed for non-gated activations due to a hardcoded multiplier. The change introduces a dynamic multiplier based on the activation type, which is a clear improvement. My review includes suggestions to refactor the duplicated logic for calculating this multiplier across three different locations. This will enhance code readability and maintainability.
```diff
           (isGatedActivation(base_activation_type) ? 2 : 1) &&
       fc1_weight_block.size(2) * FP8_PER_INT32 *
               TmaWarpSpecializedGroupedGemmInput::MXFPXBlockScaleVectorSize ==
           TmaWarpSpecializedGroupedGemmInput::alignToSfDim(
               hidden_size, TmaWarpSpecializedGroupedGemmInput::MinKDimAlignmentMXFPX))
-      << "fc1 weight block size must be (num_experts_on_rank, inter_size * 2, hidden_size // 4 "
-         "// block_scale_vector_size)";
+      << "fc1 weight block size must be (num_experts_on_rank, inter_size * "
+      << (isGatedActivation(base_activation_type) ? 2 : 1) << ", hidden_size // 4"
+         " // block_scale_vector_size)";
```
The expression `(isGatedActivation(base_activation_type) ? 2 : 1)` is repeated within this check. To improve readability and avoid code duplication, consider calculating this multiplier once and storing it in a local variable before the `TVM_FFI_ICHECK`. This would also align with the pattern used for `isNvfp4Quant` (around line 1134), which uses an if/else block to handle this logic more cleanly.
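For illustration, the hoisted form suggested here could look roughly like this; it is a sketch only, where `aligned_inter_size` abbreviates the real `alignToSfDim(...)` expression and the message text is shortened:

```cpp
// Compute the N-dimension multiplier once, then reuse it in both the
// check and its error message instead of repeating the ternary.
int const fc1_n_mult = isGatedActivation(base_activation_type) ? 2 : 1;
TVM_FFI_ICHECK(fc1_weight_block.size(1) == aligned_inter_size * fc1_n_mult)
    << "fc1 weight block size must be (num_experts_on_rank, inter_size * "
    << fc1_n_mult << ", hidden_size // 4 // block_scale_vector_size)";
```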
/bot run |
[FAILED] Pipeline #46554074: 14/20 passed |
looks like this PR is doing the same as this one |
ok to close as dup tho? thanks for the contrib |
Fixes #2731.

## What's broken?

When using the CUTLASS fused MoE backend with **non-gated activations** (e.g., Relu2, Gelu, Silu) and MXFP8 quantization, the fc1 weight shape validation unconditionally rejects the input, even when the shape is correct.

## Who is affected?

Anyone using the **CUTLASS fused MoE** path with:

- **Quantization**: `WMxfp8AMxfp8`, `WMxfp4AFp8`, or `WMxfp4AMxfp8`
- **Activation**: any non-gated type (Relu2, Gelu, Silu, etc.)

Not affected: gated activations (Swiglu, Geglu, SwigluBias), or other quant modes (NVFP4 already handles this correctly).

## Where is the bug?

`csrc/fused_moe/cutlass_backend/flashinfer_cutlass_fused_moe_binding.cu`, inside `getQuantParams()`: the fc1 weight block N-dimension check hardcodes `* 2` at three MXFP8 branches (~L898, ~L1004, ~L1063).

## Why does it happen?

PR #2581 introduced MXFP8 support when only gated activations (Swiglu) existed, so `inter_size * 2` was correct. Later, non-gated activation support was added to the trtllm-gen backend (PR #2707), but the CUTLASS backend's validation was never updated. The NVFP4 path in the same file (line ~1131) already handles this correctly with an `if (isGatedActivation(...))` guard.

## How did we fix it?

For each of the 3 MXFP8 quant branches:

1. Extract `int const fc1_n_mult = isGatedActivation(base_activation_type) ? 2 : 1;`
2. Replace the hardcoded `* 2` with `* fc1_n_mult`
3. Update error messages: gated shows `"inter_size * 2"`, non-gated shows `"inter_size"`

**Before:**

```cpp
fc1_weight_block.size(1) == alignToSfDim(inter_size, ...) * 2
```

**After:**

```cpp
int const fc1_n_mult = isGatedActivation(base_activation_type) ? 2 : 1;
fc1_weight_block.size(1) == alignToSfDim(inter_size, ...) * fc1_n_mult
```

## How do we know it works?

- `pre-commit run` passes (clang-format, lint, etc.)
- Gated activations (default Swiglu): `fc1_n_mult = 2`, identical to the old behavior, no regression
- Non-gated activations: `fc1_n_mult = 1`, the shape check now accepts the correct `inter_size` dimension
- Full GPU test suite requires CI (`@flashinfer-bot run`)

## Related

- Builds on the approach identified in #2753 (stale ~27 days, CI unresolved).
- Addresses the Gemini review feedback from #2753 by extracting the multiplier to a local variable before the validation checks.

cc @aleozlx @nv-yunzheq

## Summary by CodeRabbit

* **Bug Fixes**
  * Fixed weight block size validation for Mixture of Experts (MoE) to correctly handle both gated and non-gated activation types, ensuring proper support across different activation configurations.

Signed-off-by: Yiyang Liu <37043548+ianliuy@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Fixes #2731.
The MXFP8 quantization path in `getQuantParams()` hardcodes `inter_size * 2` for the fc1 weight block N-dimension check, assuming gated activation where gate and up weights are concatenated. This causes the check to fail for non-gated activations, where the multiplier should be 1.

Replace the hardcoded `* 2` with `* (isGatedActivation(base_activation_type) ? 2 : 1)` in all three MXFP8 `getQuantParams` overloads. Also updated the error messages to show the actual expected multiplier.

This matches the existing pattern in the NVFP4 path (line ~1131), which already correctly guards the `* 2` with an `isGatedActivation` check.
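For context, the NVFP4-style guard referenced above has roughly this shape; this is a paraphrase with `aligned_inter_size` as a placeholder for the alignment expression, not the verbatim code at line ~1131:

```cpp
// Branch on gating instead of hardcoding "* 2": gated activations carry
// concatenated gate+up weights, so fc1's N dimension doubles.
if (isGatedActivation(base_activation_type)) {
  TVM_FFI_ICHECK(fc1_weight_block.size(1) == aligned_inter_size * 2)
      << "fc1 weight block size must be (num_experts_on_rank, inter_size * 2, ...)";
} else {
  TVM_FFI_ICHECK(fc1_weight_block.size(1) == aligned_inter_size)
      << "fc1 weight block size must be (num_experts_on_rank, inter_size, ...)";
}
```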