
[Do Not Merge] Fix after flashinfer fp4 autotuner pr #23209

Closed

IwakuraRein wants to merge 10 commits into vllm-project:main from IwakuraRein:flashinfer-autotuner-upd

Conversation

IwakuraRein (Contributor) commented Aug 19, 2025

Can be merged after flashinfer addresses the AOT installation.

Purpose

The flashinfer fp4 autotuner has been merged, so the API calls in the mxfp4 MoE need to be updated:

  • Fix the x_scale shape and use a hardcoded maximum number of tuning tokens in the mxfp4 MoE.
  • Move kernel_warmup above self.model_runner.capture_model() (see the sketch after this list).
  • Bump the flashinfer tag to 0.2.13.
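
A minimal sketch of the second item, assuming vLLM's v1 GPU worker and its kernel_warmup helper; everything other than the kernel_warmup/capture_model ordering is paraphrased from this description, not copied from the actual diff:

```python
# Hedged sketch of the intended ordering in vllm/v1/worker/gpu_worker.py;
# only the warmup-before-capture ordering comes from this PR description.
from vllm.model_executor.warmup.kernel_warmup import kernel_warmup  # assumed path

def compile_or_warm_up_model(worker) -> None:
    # Run kernel warmup (which triggers the flashinfer autotuner) *before*
    # CUDA graph capture, so the captured graphs record the tuned kernel
    # selections rather than the defaults.
    kernel_warmup(worker)
    if not worker.model_config.enforce_eager:
        worker.model_runner.capture_model()
```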

Test Plan

python benchmarks/benchmark_throughput.py \
    --backend vllm \
    --async-engine \
    --model openai/gpt-oss-120b \
    --num-prompts 2048 \
    --input-len 1024 \
    --output-len 1024 \
    --max-num-seqs 512 \
    --max-model-len 3072 \
    --compilation-config='{"pass_config": {"enable_fi_allreduce_fusion": true, "fi_allreduce_fusion_max_token_num": 3072}, "custom_ops": ["+rms_norm"], "level": 3}' \
    -tp 1

Test Result

On B200, with VLLM_USE_FLASHINFER_MOE_MXFP4_MXFP8=1:

  • without autotuner

    Throughput: 11.72 requests/s, 23988.62 total tokens/s, 12000.38 output tokens/s
    Total num prompt tokens:  2095032
    Total num output tokens:  2097152
    
  • with autotuner

    Throughput: 12.91 requests/s, 26420.97 total tokens/s, 13215.44 output tokens/s
    Total num prompt tokens:  2095580
    Total num output tokens:  2097152
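
Enabling the autotuner thus yields roughly a 10% throughput improvement (12.91 vs. 11.72 requests/s).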
    

(Optional) Documentation Update


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

github-actions (bot) commented:

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

mergify bot added the v1 label Aug 19, 2025
Review comment on vllm/v1/worker/gpu_worker.py, lines 341 to 342 (outdated):

Member commented:

Why do we need this flag and to run twice? I think we can just move this before cuda graphs like your original commit

IwakuraRein (Contributor, Author) commented Aug 20, 2025:

Because I was thinking that auto-tuning and warm-up serve two different purposes here. Auto-tuning is meant to store the best kernel function index, so I placed it before cuda graph capture to make sure the cuda graph sees the correct kernel. Warm-up is a dry run before the actual job starts, so I added it right before the real execution, just like the original code. Based on your earlier comment, I thought you were suggesting that warm-up is necessary (maybe DeepGEMM requires it?). Please correct me if I've misunderstood.
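
A hedged illustration of the two phases described above, assuming flashinfer's autotune context manager from its autotuner module (the exact import path and signature may vary across flashinfer versions):

```python
import torch
from flashinfer.autotuner import autotune  # assumed import path

def warmup_with_autotune(dummy_forward):
    # Phase 1: auto-tuning dry runs, executed before CUDA graph capture so
    # the best kernel indices are cached and the graphs capture tuned kernels.
    with torch.inference_mode(), autotune():
        dummy_forward()
    # Phase 2: a plain warm-up dry run right before real execution,
    # mirroring the original code path (e.g. for JIT kernel compilation).
    dummy_forward()
```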

mergify bot commented Aug 21, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @IwakuraRein.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify bot added the needs-rebase label Aug 21, 2025
yewentao256 (Member) left a comment:

Looks good, could you also add an E2E accuracy test using lm-eval?

IwakuraRein (Contributor, Author) commented:

> Looks good, could you also add an E2E accuracy test using lm-eval?

Hi @yewentao256. I have experimented with simple_evals:

| metric | env | max model len | tp | reasoning effort | result |
|--------|-----|---------------|----|------------------|--------|
| mmlu | VLLM_USE_FLASHINFER_MOE_MXFP4_MXFP8 | 32768 | 1 | high | 0.886483 |
| mmlu | VLLM_USE_FLASHINFER_MOE_MXFP4_BF16 | 32768 | 1 | high | 0.889118 |

ProExpertProg (Collaborator) left a comment:
Looks good, except address the TODO.

IwakuraRein changed the title from "Fix after flashinfer fp4 autotuner pr" to "[Do Not Merge] Fix after flashinfer fp4 autotuner pr" on Aug 22, 2025
mergify bot commented Aug 23, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @IwakuraRein.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify bot added the needs-rebase label Aug 23, 2025
yewentao256 (Member) left a comment:

LGTM, thanks for the work!

IwakuraRein (Contributor, Author) commented:

Closed after #23537
