
Fix "FP4QuantizationSFLayout has no member SWIZZLED_128x4" error #1403

Closed

mgoin wants to merge 1 commit into flashinfer-ai:main from mgoin:patch-1

Conversation

@mgoin

@mgoin mgoin commented Aug 6, 2025

📌 Description

This appears to be a missed change from #1389, which renamed SWIZZLED to SWIZZLED_128x4.

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Summary of Changes

Hello @mgoin, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a compilation error related to the FP4QuantizationSFLayout enum. It introduces an alias for an existing enum member to ensure compatibility and prevent build failures. This is a targeted fix to resolve a specific symbol lookup issue within the codebase.

Highlights

  • Enum definition fix: Added SWIZZLED_128x4 = SWIZZLED to the FP4QuantizationSFLayout enum in include/flashinfer/comm/trtllm_moe_allreduce_fusion.cuh. This resolves a compilation error where SWIZZLED_128x4 was not recognized.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request addresses a compilation error by introducing SWIZZLED_128x4 to the FP4QuantizationSFLayout enum, aliasing it to the existing SWIZZLED member. The fix is correct and effectively resolves the build issue. My review includes a suggestion to enhance code clarity by making the more descriptive SWIZZLED_128x4 the primary enum member and SWIZZLED the alias, which would improve long-term maintainability.

Comment on lines 388 to +389
SWIZZLED,
SWIZZLED_128x4 = SWIZZLED,
Contributor


Severity: medium

For better clarity and long-term maintainability, it would be preferable to define the more descriptive name SWIZZLED_128x4 first, and then alias SWIZZLED to it. This makes it clear that SWIZZLED_128x4 is the primary name for this layout, which is helpful given the comments describing the 128x4 layout.

  SWIZZLED_128x4,
  SWIZZLED = SWIZZLED_128x4,

@nvpohanh
Contributor

nvpohanh commented Aug 7, 2025

@yzh119 could you review and merge? Thanks

@yzh119
Collaborator

yzh119 commented Aug 7, 2025

Where did you see this compilation error?

For more context, FP4QuantizationSFLayout is also defined in other headers, where we introduced SWIZZLED_128x4, but the communication kernel currently depends on an older version of it. One piece of follow-up work is to clean up the code and keep a single definition of FP4QuantizationSFLayout.

If you encounter a compilation error, it means some other file mistakenly depends on include/flashinfer/comm/trtllm_moe_allreduce_fusion.cuh as the source of truth for FP4QuantizationSFLayout, which we should fix.

@nvpohanh
Contributor

nvpohanh commented Aug 7, 2025

I am also running into the same issue when enabling AR+RMSNorm fusions in vLLM with H200x2.

(VllmWorker TP0 pid=2216) ERROR 08-07 05:10:25 [multiproc_executor.py:596] FAILED: trtllm_comm/trtllm_moe_allreduce_fusion.cuda.o
(VllmWorker TP0 pid=2216) ERROR 08-07 05:10:25 [multiproc_executor.py:596] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output trtllm_comm/trtllm_moe_allreduce_fusion.cuda.o.d -DTORCH_EXTENSION_NAME=trtllm_comm -DTORCH_API_INCLUDE_EXTENSION_H -DPy_LIMITED_API=0x03090000 -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -D_GLIBCXX_USE_CXX11_ABI=1 -isystem /usr/include/python3.12 -isystem /home/scratch.pohanh_sw/vLLM_SGLang_Llama3_Llama4_instructions/vllm/test_venv2/lib/python3.12/site-packages/torch/include -isystem /home/scratch.pohanh_sw/vLLM_SGLang_Llama3_Llama4_instructions/vllm/test_venv2/lib/python3.12/site-packages/torch/include/torch/csrc/api/include -isystem /usr/local/cuda/include -isystem /home/scratch.pohanh_sw/vLLM_SGLang_Llama3_Llama4_instructions/vllm/flashinfer-source/include -isystem /home/scratch.pohanh_sw/vLLM_SGLang_Llama3_Llama4_instructions/vllm/flashinfer-source/csrc -isystem /home/scratch.pohanh_sw/vLLM_SGLang_Llama3_Llama4_instructions/vllm/flashinfer-source/3rdparty/cutlass/include -isystem /home/scratch.pohanh_sw/vLLM_SGLang_Llama3_Llama4_instructions/vllm/flashinfer-source/3rdparty/cutlass/tools/util/include -isystem /home/scratch.pohanh_sw/vLLM_SGLang_Llama3_Llama4_instructions/vllm/flashinfer-source/3rdparty/spdlog/include --compiler-options=-fPIC --expt-relaxed-constexpr -O3 -std=c++17 --threads=32 -use_fast_math -DFLASHINFER_ENABLE_F16 -DFLASHINFER_ENABLE_BF16 -DFLASHINFER_ENABLE_FP8_E4M3 -DFLASHINFER_ENABLE_FP8_E5M2 -DNDEBUG -gencode=arch=compute_100a,code=sm_100a -DFLASHINFER_ENABLE_FP8_E8M0 -DFLASHINFER_ENABLE_FP4_E2M1 -c /home/scratch.pohanh_sw/vLLM_SGLang_Llama3_Llama4_instructions/vllm/flashinfer-source/csrc/trtllm_moe_allreduce_fusion.cu -o trtllm_comm/trtllm_moe_allreduce_fusion.cuda.o
(VllmWorker TP0 pid=2216) ERROR 08-07 05:10:25 [multiproc_executor.py:596] /home/scratch.pohanh_sw/vLLM_SGLang_Llama3_Llama4_instructions/vllm/flashinfer-source/csrc/trtllm_moe_allreduce_fusion.cu(38): error: enum "flashinfer::trtllm_moe_allreduce_fusion::FP4QuantizationSFLayout" has no member "SWIZZLED_128x4"

@nvpohanh
Contributor

nvpohanh commented Aug 7, 2025

@yzh119 It is caused by this line: https://github.com/flashinfer-ai/flashinfer/blob/main/csrc/trtllm_moe_allreduce_fusion.cu#L66

I think it is correct that this file depends on include/flashinfer/comm/trtllm_moe_allreduce_fusion.cuh. Do you agree?

@nvpohanh
Contributor

nvpohanh commented Aug 7, 2025

@yzh119 An alternative fix would be e7fa439

@nvpohanh
Contributor

nvpohanh commented Aug 7, 2025

Submitted #1410 with my proposed fix; we should choose either this PR or #1410.

@yzh119
Collaborator

yzh119 commented Aug 7, 2025

Moving to #1410 instead; we should unify these classes in later PRs.

@yzh119 yzh119 closed this Aug 7, 2025
nvpohanh added a commit to nvpohanh/vllm that referenced this pull request Aug 11, 2025
This includes the fix needed for FlashInfer AOT compilation failures.
See: flashinfer-ai/flashinfer#1403 and
flashinfer-ai/flashinfer#1410

Signed-off-by: Po-Han Huang <pohanh@nvidia.com>