Enable renormalize(naive) routing for fp8 per-tensor #2030
Merged
IwakuraRein merged 2 commits into main on Nov 11, 2025
Conversation
Contributor
Walkthrough

Expose a new …
Sequence Diagram

```mermaid
sequenceDiagram
    autonumber
    participant Launcher as Kernel Launcher
    participant Workspace as MoE Workspace
    participant Runner as PermuteGemm1
    participant Finalize as Finalize Kernel
    Note over Launcher: check routing_method_type
    alt routing_method_type == Llama4
        Launcher->>Workspace: workspace.token_scales = expert_weights.data_ptr()
    else other routing
        Launcher->>Workspace: token_scales left nullptr/unmodified
    end
    Launcher->>Workspace: prepare workspace
    Workspace->>Runner: mPermuteGemm1.run(input = token_scales, ...)
    Runner->>Runner: permute / GEMM using token_scales
    Workspace->>Finalize: provide expert_weights for finalize
    Finalize->>Finalize: finalize outputs
```
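The Llama4-only branch in the diagram can be sketched as follows. This is a minimal illustration of the launcher's decision, not FlashInfer's actual API: `RoutingMethodType` and `prepare_token_scales` are hypothetical stand-ins for the C++ launcher logic.

```python
from enum import Enum, auto


class RoutingMethodType(Enum):
    # Hypothetical enum mirroring the routing methods named in this PR
    Llama4 = auto()
    Renormalize = auto()
    RenormalizeNaive = auto()
    DeepSeekV3 = auto()


def prepare_token_scales(routing_method_type, expert_weights_ptr):
    """Only Llama4 routing feeds expert weights into gemm1 via token_scales.

    Every other routing method leaves the pointer unset (nullptr on the
    C++ side), so expert weights are applied only once, in the finalize
    kernel, rather than in FC1.
    """
    if routing_method_type == RoutingMethodType.Llama4:
        return expert_weights_ptr
    return None
```

With this gating in place, renormalize(naive) routing no longer scales activations twice, which is what previously blocked it for fp8 per-tensor.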
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Force-pushed from 5a48ddf to 17fb87a (Compare)
jiahanc approved these changes on Nov 10, 2025
Force-pushed from 17fb87a to d42fb90 (Compare)
Collaborator
/bot run
Collaborator
Signed-off-by: siyuanf <siyuanf@nvidia.com>
Collaborator
/bot run
Collaborator
/bot run
BingooYang pushed a commit to BingooYang/flashinfer that referenced this pull request on Mar 13, 2026
Signed-off-by: siyuanf <siyuanf@nvidia.com>
This was referenced Apr 12, 2026
📌 Description
Disable expert weights in the FC1 except for Llama routing.
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- [ ] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [ ] I have installed the hooks with `pre-commit install`.
- [ ] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests
- [ ] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

Reviewer Notes
Summary by CodeRabbit

Bug Fixes

- Re-enabled Renormalize routing that was previously blocked.
- Made token_scales available for Llama4 routing.
- Corrected GEMM1 input so the proper data source is used during MoE processing.

Tests

- Added FP8PerTensorMoe to test parameterization.
- Expanded Renormalize and DeepSeekV3 test coverage and removed related skips.
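The expanded test coverage could be parameterized along these lines. This is a sketch only: `MOE_IMPLS`, `ROUTING_METHODS`, `build_test_cases`, and the `FP8BlockScaleMoe` entry are illustrative names, not the actual test-suite identifiers.

```python
import itertools

# Hypothetical parameter grid; FP8PerTensorMoe is the newly added entry
MOE_IMPLS = ["FP8PerTensorMoe", "FP8BlockScaleMoe"]
ROUTING_METHODS = ["Llama4", "Renormalize", "RenormalizeNaive", "DeepSeekV3"]


def build_test_cases():
    # Full cross product: the Renormalize and DeepSeekV3 combinations that
    # were previously skipped are now exercised as well
    return list(itertools.product(MOE_IMPLS, ROUTING_METHODS))
```

Feeding such a grid to `pytest.mark.parametrize` (or an equivalent runner) exercises every implementation/routing pair without per-case skip markers.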