
Undo fix to AutoTuner find_nearest_profile #2697

Merged
aleozlx merged 2 commits into flashinfer-ai:main from danisereb:undo_tuner_fix on Mar 6, 2026

Conversation

@danisereb
Contributor

@danisereb danisereb commented Mar 5, 2026

📌 Description

PR #2617 added a fix that resolves the "using fallback tactic" issue for TRTLLM MoE kernels.

However, after running more tests (lm_eval) with flashinfer v0.6.5, another issue was found:
an error from the C++ file csrc/trtllm_fused_moe_kernel_launcher.cu (key not found in launchers_map.at(tile_N)).

Fixing this is probably not simple; more details are in this draft PR (NOT for v0.6.6):
#2695

To prevent the crash, the change in _find_nearest_profile is reverted here (to match flashinfer v0.6.4).
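
For illustration, here is a minimal sketch of the behavior being reverted, using hypothetical stand-ins for the spec and the bucket mapping (the names input_idx, dim_idx, and map_to_tuning_buckets mirror those quoted in the review comments below; this is not the verbatim flashinfer/autotuner.py code):

```python
def round_down_to_power_of_two(x: int) -> int:
    # Illustrative stand-in for the tuning-bucket mapping: snap a runtime
    # dimension to the power-of-2 bucket it falls in.
    return 1 << (x.bit_length() - 1) if x > 0 else 0

class Spec:
    # Hypothetical stand-in for a DynamicTensorSpec with two linked dims.
    input_idx = (0, 1)  # which input tensors carry the dynamic dimension
    dim_idx = (0, 0)    # which dimension of each tensor is dynamic
    map_to_tuning_buckets = staticmethod(round_down_to_power_of_two)

spec = Spec()
base_profile = [[3003], [3003]]  # runtime shapes of the two linked tensors

# Behavior added by PR #2617 (now being reverted): map every linked dim.
# for inp, dim in zip(spec.input_idx, spec.dim_idx):
#     base_profile[inp][dim] = spec.map_to_tuning_buckets(base_profile[inp][dim])

# Behavior after this revert (matching flashinfer v0.6.4): map only the first.
inp, dim = spec.input_idx[0], spec.dim_idx[0]
base_profile[inp][dim] = spec.map_to_tuning_buckets(base_profile[inp][dim])
print(base_profile)  # [[2048], [3003]] -- linked dims no longer kept in sync
```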

The relevant AutoTuner tests were marked with "skip":

tests/autotuner/test_autotuner_core.py::test_find_nearest_profile_moe_shared_num_tokens_axis[1000-512] SKIPPED (_find_nearest_profile linked-dimension mapping was reverted;...)
tests/autotuner/test_autotuner_core.py::test_find_nearest_profile_moe_shared_num_tokens_axis[4000-2048] SKIPPED (_find_nearest_profile linked-dimension mapping was reverted...)
tests/autotuner/test_autotuner_core.py::test_find_nearest_profile_moe_shared_num_tokens_axis[8000-4096] SKIPPED (_find_nearest_profile linked-dimension mapping was reverted...)
tests/autotuner/test_autotuner_core.py::test_find_nearest_profile_moe_shared_num_tokens_axis[12000-8192] SKIPPED (_find_nearest_profile linked-dimension mapping was reverte...)
tests/autotuner/test_autotuner_core.py::test_find_nearest_profile_moe_same_bucket_same_profile SKIPPED (_find_nearest_profile linked-dimension mapping was reverted; re-enab...)
tests/autotuner/test_autotuner_core.py::test_find_nearest_profile_maps_all_linked_dims SKIPPED (_find_nearest_profile linked-dimension mapping was reverted; re-enable when ...)
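
For reference, each skipped test is guarded by a decorator along these lines (a paraphrased sketch of the test-file change, not the exact source):

```python
import pytest

@pytest.mark.skip(
    reason=(
        "_find_nearest_profile linked-dimension mapping was reverted; "
        "re-enable when linked-dim bucket propagation is restored."
    )
)
def test_find_nearest_profile_maps_all_linked_dims():
    ...
```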

The rest of the AutoTuner tests all pass:

pytest --tb short  tests/autotuner/
================================================================================= test session starts ==================================================================================
platform linux -- Python 3.12.3, pytest-9.0.2, pluggy-1.6.0
rootdir: /my_home/workspace/dani_flashinfer
configfile: pytest.ini
plugins: anyio-4.12.1
collected 39 items                                                                                                                                                                     

tests/autotuner/test_autotuner_bmm_fp8.py ............                                                                                                                           [ 30%]
tests/autotuner/test_autotuner_core.py ...........ssssss..........                                                                                                               [100%]

============================================================================ 33 passed, 6 skipped in 0.95s =============================================================================

With this branch, the failure in trtllm_fused_moe_kernel_launcher.cu no longer occurs.

vLLM main still uses flashinfer v0.6.4 (which does not include PR #2617).

This change should be included in flashinfer v0.6.6 (for use by vLLM).

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Tests
    • Temporarily disabled three autotuner tests pending restoration of linked-dimension bucket propagation functionality. Tests will be re-enabled once related features are restored.

@coderabbitai
Contributor

coderabbitai Bot commented Mar 5, 2026

📝 Walkthrough

This pull request reverts linked-dimension value propagation in the _find_nearest_profile function by removing the loop that updated dependent dimensions. Three related tests are disabled with notes indicating the revert should be temporary pending linked-dimension bucket propagation restoration.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Core autotuner logic revert: `flashinfer/autotuner.py` | Removes the propagation loop in `_find_nearest_profile` that previously spread mapped values across linked dimensions; now only the first entry is updated, while constraint handling is preserved. |
| Test disabling: `tests/autotuner/test_autotuner_core.py` | Adds `pytest.mark.skip` decorators to three test functions: `test_find_nearest_profile_moe_shared_num_tokens_axis`, `test_find_nearest_profile_moe_same_bucket_same_profile`, and `test_find_nearest_profile_maps_all_linked_dims`. Each skip includes a reason noting the revert and the expected re-enablement. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Possibly related PRs

Poem

🐰 A hop backward, a loop now gone,
Linked dimensions part, the chain withdrawn,
Tests sleep gently, marked to skip,
Awaiting restoration's future trip! 🌙

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |
| Title check | ✅ Passed | The title directly describes the main change: reverting a fix to the _find_nearest_profile function, which aligns with the primary purpose of this PR. |
| Description check | ✅ Passed | The PR description provides detailed context about the reversion and its rationale, though it lacks explicit sections matching the template structure. |


@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue in the autotuner by reverting a prior change within the _find_nearest_profile function. The previous modification had introduced an incorrect propagation of mapped bucket values across multiple linked dimensions for dynamic tensor specifications. The current update restores the logic to correctly apply the mapping only to the primary dimension, which is crucial for accurate performance profile generation during autotuning.

Highlights

  • Autotuner Logic Reversion: Reverted a previous modification in the _find_nearest_profile function that incorrectly propagated mapped bucket values across all linked dimensions for dynamic tensor specifications.
  • Corrected Dimension Mapping: Ensured that the mapping of dynamic tensor specifications to tuning buckets is applied only to the primary dimension, restoring the intended behavior for profile generation.


Changelog
  • flashinfer/autotuner.py
    • Removed the loop responsible for applying a mapped bucket value to all linked dimensions within dynamic tensor specifications in _find_nearest_profile.
Activity
  • No review activity or comments have been recorded for this pull request yet.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request reverts a previous fix in _find_nearest_profile, which appears to re-introduce a bug. The change removes logic that updates all linked dimensions in a DynamicTensorSpec, instead only updating the first one. This breaks the intended behavior of linked dimensions and will likely cause tests to fail. I've provided a critical review comment recommending that this change be reverted and the original logic restored.

Comment thread flashinfer/autotuner.py
@danisereb danisereb marked this pull request as ready for review March 5, 2026 15:41
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/autotuner/test_autotuner_core.py (1)

134-139: Add a concrete tracking reference to these temporary skips.

The skip reasons are clear, but at Line 136, Line 166, and Line 190 they don’t include an issue/PR pointer for restoration. Please append a concrete reference (for example, the draft fix PR) so these don’t silently become permanent.

Suggested tweak:

```diff
 @pytest.mark.skip(
     reason=(
         "_find_nearest_profile linked-dimension mapping was reverted; "
-        "re-enable when linked-dim bucket propagation is restored."
+        "re-enable when linked-dim bucket propagation is restored (see PR #2695)."
     )
 )
```

Also applies to: 164-169, 188-193

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@flashinfer/autotuner.py`:
- Around line 773-776: The code in _find_nearest_profile() currently maps only
the first linked dimension (using spec.input_idx[0] / spec.dim_idx[0]) which
diverges from _generate_optimization_profiles() that materializes buckets for
all linked dimensions; update _find_nearest_profile() to iterate over all linked
dimensions in spec.dim_idx / spec.input_idx and call
spec.map_to_tuning_buckets(...) for each corresponding
base_profile[input_idx][dim_idx] entry so the tuned profile keys match the
generated optimization profiles (alternatively, if this behavior should be
limited, gate the existing single-dimension mapping behind the specific
kernel/path check used by the TRTLLM MoE code path).

---

Nitpick comments:
In `@tests/autotuner/test_autotuner_core.py`:
- Around line 134-139: The skip decorators using pytest.mark.skip that cite
"_find_nearest_profile linked-dimension mapping was reverted; re-enable when
linked-dim bucket propagation is restored." need a concrete tracking reference
appended to their reason strings (e.g., "see PR `#1234`" or an issue ID) so the
temporary skip can be traced; update each pytest.mark.skip instance in
tests/autotuner/test_autotuner_core.py that mentions _find_nearest_profile (the
three occurrences) to append a stable issue/PR pointer to the reason text.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 00156c80-048e-4cb1-add9-75ab48a8992d

📥 Commits

Reviewing files that changed from the base of the PR and between 95059e8 and ddc7708.

📒 Files selected for processing (2)
  • flashinfer/autotuner.py
  • tests/autotuner/test_autotuner_core.py

Comment thread flashinfer/autotuner.py
@danisereb danisereb changed the title Undo fix to _find_nearest_profile Undo fix to AutoTuner _find_nearest_profile Mar 5, 2026
@danisereb danisereb changed the title Undo fix to AutoTuner _find_nearest_profile Undo fix to AutoTuner find_nearest_profile Mar 5, 2026
@bkryu
Collaborator

bkryu commented Mar 5, 2026

/bot run

@flashinfer-bot
Collaborator

GitLab MR !383 has been created, and the CI pipeline #45441918 is currently running. I'll report back once the pipeline job completes.

Collaborator

@bkryu bkryu left a comment


PR is a revert of the changes in #2617.

Unit test changes simply add skips, so no new failures are expected to be introduced.

However, I launched internal unit testing CI. Let's wait for the results to come back just in case.

@bkryu bkryu added the v0.6.6 release blocker label Mar 5, 2026
@aleozlx aleozlx added the run-ci label Mar 5, 2026
@aleozlx aleozlx mentioned this pull request Mar 5, 2026
@flashinfer-bot
Collaborator

[SUCCESS] Pipeline #45441918: 9/20 passed

@aleozlx aleozlx enabled auto-merge (squash) March 6, 2026 03:19
@aleozlx
Collaborator

aleozlx commented Mar 6, 2026

bot run tests are clean

@aleozlx aleozlx merged commit 44abf50 into flashinfer-ai:main Mar 6, 2026
87 of 89 checks passed
@zianglih
Contributor

Can confirm SGLang + v0.6.5 also hits the issue:

root@B200-137:/sgl-workspace/sglang# python3 test/registered/backends/test_flashinfer_trtllm_gen_moe_backend.py TestFlashinferTrtllmGenMoeBackendMXFP8
...
[2026-03-10 05:19:55] Successfully reserved port 21000 on host '127.0.0.1'
[2026-03-10 05:19:55] INFO:     Started server process [220378]
[2026-03-10 05:19:55] INFO:     Waiting for application startup.
[2026-03-10 05:19:55] Using default chat sampling params from model generation config: {'repetition_penalty': 1.0, 'temperature': 0.7, 'top_k': 20, 'top_p': 0.8}
[2026-03-10 05:19:55] INFO:     Application startup complete.
[2026-03-10 05:19:55] INFO:     Uvicorn running on socket ('127.0.0.1', 21000) (Press CTRL+C to quit)
[2026-03-10 05:19:56] INFO:     127.0.0.1:46216 - "GET /model_info HTTP/1.1" 200 OK
[2026-03-10 05:19:56 TP0 EP0] Prefill batch, #new-seq: 1, #new-token: 64, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0, input throughput (token/s): 0.00, cuda graph: False
[2026-03-10 05:19:56] INFO:     127.0.0.1:46224 - "POST /generate HTTP/1.1" 200 OK
[2026-03-10 05:19:56] The server is fired up and ready to roll!
[2026-03-10 05:19:58 TP0 EP0] Prefill batch, #new-seq: 1, #new-token: 64, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0, input throughput (token/s): 42.18, cuda graph: False
[2026-03-10 05:19:59] INFO:     127.0.0.1:46228 - "GET /health_generate HTTP/1.1" 200 OK
[CI Test Method] TestFlashinferTrtllmGenMoeBackendMXFP8.test_gsm8k
/sgl-workspace/sglang/python/sglang/test/few_shot_gsm8k.py:54: DeprecationWarning: Including the scheme in --host ('http://127.0.0.1') is deprecated. Pass just the hostname (e.g. '127.0.0.1') instead.
  set_default_backend(RuntimeEndpoint(normalize_base_url(args.host, args.port)))
[2026-03-10 05:19:59] Endpoint '/get_model_info' is deprecated and will be removed in a future version. Please use '/model_info' instead.
[2026-03-10 05:19:59] INFO:     127.0.0.1:46238 - "GET /get_model_info HTTP/1.1" 200 OK
[2026-03-10 05:19:59 TP0 EP0] Scheduler hit an exception: Traceback (most recent call last):
  File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 3372, in run_scheduler_process
    scheduler.run_event_loop()
  File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 1235, in run_event_loop
    dispatch_event_loop(self)
  File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 3248, in dispatch_event_loop
    scheduler.event_loop_overlap()
  File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 1298, in event_loop_overlap
    batch_result = self.run_batch(batch)
                   ^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 2486, in run_batch
    batch_result = self.model_worker.forward_batch_generation(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/managers/tp_worker.py", line 472, in forward_batch_generation
    out = self.model_runner.forward(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 2459, in forward
    output = self._forward_raw(
             ^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 2561, in _forward_raw
    ret, can_run_graph = self.forward_extend(
                         ^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 2396, in forward_extend
    self.model.forward(
  File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/models/qwen3_moe.py", line 939, in forward
    hidden_states = self.model(
                    ^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/models/qwen2_moe.py", line 656, in forward
    hidden_states, residual = layer(
                              ^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/models/qwen3_moe.py", line 804, in forward
    hidden_states = self.mlp(
                    ^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/models/qwen3_moe.py", line 277, in forward
    return self.forward_normal(
           ^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/models/qwen3_moe.py", line 305, in forward_normal
    final_hidden_states = self.experts(hidden_states, topk_output)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/layers/moe/fused_moe_triton/layer.py", line 987, in forward
    return self.forward_impl(hidden_states, topk_output)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/layers/moe/fused_moe_triton/layer.py", line 1006, in forward_impl
    combine_input = self.run_moe_core(
                    ^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/layers/moe/fused_moe_triton/layer.py", line 1027, in run_moe_core
    return self.quant_method.apply(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/layers/quantization/fp8.py", line 1659, in apply
    return self.runner.run(dispatch_output, quant_info)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/layers/moe/moe_runner/runner.py", line 81, in run
    return self.fused_func(dispatch_output, quant_info, self.config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/layers/moe/moe_runner/flashinfer_trtllm.py", line 773, in fused_experts_none_to_flashinfer_trtllm
    return fused_experts_none_to_flashinfer_trtllm_fp8(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/layers/moe/moe_runner/flashinfer_trtllm.py", line 408, in fused_experts_none_to_flashinfer_trtllm_fp8
    output = trtllm_fp8_block_scale_moe(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/flashinfer/fused_moe/core.py", line 2585, in trtllm_fp8_block_scale_moe
    result = get_trtllm_moe_sm100_module().trtllm_fp8_block_scale_moe(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/flashinfer/fused_moe/core.py", line 1757, in trtllm_fp8_block_scale_moe_op
    intermediate_output = moe_op.trtllm_fp8_block_scale_moe(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/tvm_ffi/cython/function.pxi", line 923, in tvm_ffi.core.Function.__call__
RuntimeError: unordered_map::at
...

aleozlx pushed a commit that referenced this pull request Mar 20, 2026
…e launchers for all supported tileN in trtllm fused MoE (#2821)

## 📌 Description

It fixes two autotuner-related bugs:
1. It restores the autotuner fix that was reverted in #2697.
2. It fixes the issue that #2697 revealed: the trtllm fused MoE kernel
launcher crashes when it receives a tileN that is supported but was
filtered out by `computeSelectedTileN`. The fix creates kernel launchers
for all supported tileN values.

This PR continues the work in #2695 by @danisereb to restore fix 1 and
to address bug 2.

More technical details:
### Bug 1:
When given a num_tokens that isn't a power of 2, the autotuner (Python
side) fails to find the appropriate entry in the autotuner cache, so it
falls back to the default, which means passing `[-1, -1]` as the
`(tileN, tactic)` to the C++ side.
It was fixed in [this
PR](https://github.com/flashinfer-ai/flashinfer/pull/2617/changes#diff-1964ab957d8185d04b0d5f0cb02d0c7c0a3260ac0a6c573167af6875ab0b0e87L729-L734)
but soon after merge it was reverted
[here](#2697), as it exposed the next bug.
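
As a minimal illustration of the failure mode (hypothetical cache contents and helper names; the real autotuner cache layout may differ):

```python
def bucket(num_tokens: int) -> int:
    # Profiles are generated at powers of two; each bucket covers
    # the range [2**k, 2**(k+1) - 1].
    return 1 << (num_tokens.bit_length() - 1)

# Hypothetical autotuner cache: bucketed num_tokens -> (tileN, tactic).
cache = {2048: (64, 3), 4096: (128, 1)}
FALLBACK = (-1, -1)  # default passed to the C++ side on a cache miss

num_tokens = 3003
# Pre-#2617 lookup: the raw num_tokens never matches a bucketed key.
print(cache.get(num_tokens, FALLBACK))          # (-1, -1) -> fallback tactic
# Post-#2617 lookup: map to the bucket first, then look up.
print(cache.get(bucket(num_tokens), FALLBACK))  # (64, 3)
```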

### Bug 2 (exposed after fixing bug 1):
Crash in the fused MoE kernel launcher on the forward pass for some
values of num_tokens. The crash is at `launchers_map.at(tile_N)` in
`trtllm_fused_moe_kernel_launcher.cu`. It happens because:
the Python side of the autotuner profiles num_tokens values that are
powers of 2, and each such value represents the range up to the next
power of 2, e.g. the profile for the range `[2048, 4095]` is done at
num_tokens=2048.

The `computeSelectedTileN` function in `trtllm_fused_moe_kernel_launcher.cu`
reduces the set of supported tileN values (to shrink the autotuner's
search space) by choosing specific values from the sorted list of
supported tileN: `roundUpToPowerOfTwo(num_tokens * topK / numExperts)`,
the value before it, and the next 2 values (max value is 256). So
num_tokens values in the same range can get different sets of tileN
values.
For example, on Nemotron 3 Super NVFP4:
- `num_tokens=2048` -> `2048*22/512 = 88`, which rounds up to 128, so
the tileN set is `(64, 128, 256)`
- `num_tokens=3003` -> `3003*22/512 = 129.03`, which rounds up to 256,
so the tileN set is `(128, 256)`

If `tileN=64` was found to be the fastest at `num_tokens=2048` for the
range `[2048, 4095]`, then when given `num_tokens=3003`, the Python side
would pass `[64, someTactic]` to the C++ side, but for `num_tokens=3003`
there is no launcher for `tileN=64` because `computeSelectedTileN`
filtered it out.
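
To make the arithmetic above concrete, here is a toy Python
re-implementation of the selection rule as described (the supported
tileN list and helper names are assumptions for illustration, not the
actual CUDA-side code):

```python
import math

def round_up_to_power_of_two(x: float) -> int:
    return 1 << math.ceil(math.log2(x))

def selected_tile_n(num_tokens: int, top_k: int, num_experts: int,
                    supported: list[int]) -> list[int]:
    # Keep the rounded-up value, its predecessor, and the next two
    # entries, capped at 256 (mirroring the rule described above).
    target = min(round_up_to_power_of_two(num_tokens * top_k / num_experts), 256)
    i = supported.index(target)
    return [t for t in supported[max(i - 1, 0): i + 3] if t <= 256]

supported = [8, 16, 32, 64, 128, 256]  # hypothetical supported tileN list
print(selected_tile_n(2048, 22, 512, supported))  # [64, 128, 256]
print(selected_tile_n(3003, 22, 512, supported))  # [128, 256]
```

If the cached winner for the `[2048, 4095]` bucket is `tileN=64`, the
second call above has no launcher for it, which is exactly the
`launchers_map.at(tile_N)` failure.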


## 🔍 Related Issues

<!-- Link any related issues here -->

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull
request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [x] I have installed `pre-commit` by running `pip install pre-commit`
(or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files`
and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the
pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [x] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

## Reviewer Notes

<!-- Optional: anything you'd like reviewers to focus on, concerns, etc.
-->


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* Stricter MoE tile validation and ensured all supported tiles are
available at launch to avoid missing kernel configurations.
* Autotuner mapping for linked dynamic dimensions now yields consistent
cached bucket values.

* **Tests**
* Added SM100 MoE autotuner integration tests (including
invalid-cached-tactic checks).
* Re-enabled and expanded autotuner unit tests and added a test utility
to reset the autotuner.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Daniel Serebrenik <daserebrenik@nvidia.com>
Signed-off-by: amitz-nv <203509407+amitz-nv@users.noreply.github.com>
Co-authored-by: Daniel Serebrenik <daserebrenik@nvidia.com>
murphymatt pushed a commit to fw-ai/flashinfer that referenced this pull request Mar 31, 2026
…e launchers for all supported tileN in trtllm fused MoE (#2821)


Labels

run-ci · v0.6.6 release blocker

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants