fix moe benchmark #2886
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request resolves inconsistencies between autotuning and actual benchmark results in the DeepSeek MoE benchmark script. The changes ensure that the autotuning process is correctly executed on the default CUDA stream and is seamlessly integrated into the benchmark's execution flow, leading to more accurate and reliable performance measurements.
No actionable comments were generated in the recent review. 🎉
📝 Walkthrough

Autotuning now occurs during the benchmark loop via an `autotune` context manager (see the sequence diagram and the sketch below).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Bench as Benchmark Runner
    participant Auto as Autotune Context
    participant Backend as Backend (CuteDSL/CUTLASS/TRTLLM)
    participant CUDA as CUDA/GPU

    Bench->>Auto: enter with autotune(True) (if enabled)
    Bench->>Backend: pre-warm run(**input_kwargs)
    Backend->>CUDA: dispatch kernels
    CUDA-->>Bench: torch.cuda.synchronize()
    loop per-token benchmark
        Bench->>Backend: run(token)
        Backend->>Auto: first calls trigger tactic profiling
        Backend->>CUDA: execute kernels (use cached tactics after profiling)
        Backend-->>Bench: timing/result
        Bench->>Bench: collect rows_and_histograms
    end
    Bench->>Auto: exit autotune context
    Bench->>Bench: print rows_and_histograms (after autotune logs)
```
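To make the flow above concrete, here is a minimal sketch of a benchmark loop wrapped in the `autotune` context, assuming the `autotune(True)` context-manager usage shown in the diagram; `run_fn`, `input_kwargs`, `enable_autotune`, and `num_iters` are illustrative names, not identifiers taken from the PR.

```python
import contextlib

import torch
from flashinfer.autotuner import autotune


def run_benchmark(run_fn, input_kwargs, enable_autotune=True, num_iters=32):
    """Benchmark loop shaped like the diagram above (names are illustrative)."""
    # Enter the autotune context only when tuning is requested; otherwise use
    # a no-op context so the loop body is identical either way.
    ctx = autotune(True) if enable_autotune else contextlib.nullcontext()
    with ctx:
        # Pre-warm on the default CUDA stream so compilation and the first
        # round of tactic profiling finish before anything is timed or
        # CUDA-graph captured.
        run_fn(**input_kwargs)
        torch.cuda.synchronize()
        for _ in range(num_iters):
            # After profiling, these calls reuse cached tactics.
            run_fn(**input_kwargs)
    torch.cuda.synchronize()
```

Because the tuning and measurement call-sites are the same `run_fn(**input_kwargs)` invocation, the autotuner sees exactly the API/config/weight shapes that the benchmark later measures, which is what prevents the cache-key mismatches described in the review below.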
Code Review
This pull request refactors the DeepSeek-V3 MoE benchmarks by removing the standalone `run_autotune` function. Autotuning is now integrated directly into the `run_benchmark` function, with pre-warmup steps added to the individual run functions and the benchmark loop wrapped in an `autotune` context manager. This ensures autotuning completes on the default stream before CUDA-graph capture and guarantees the autotuner sees the correct API/config/weight shapes, preventing cache-key mismatches. The docstring for `run_benchmark` has been updated to reflect these changes. A review comment suggests moving the `contextlib` and `autotune` imports to the top of the file for better code organization and PEP 8 adherence.
```python
import contextlib

from flashinfer.autotuner import autotune
```
For better code organization and adherence to Python's style guide (PEP 8), it's recommended to move these imports to the top of the file. Placing all imports at the beginning of a module makes it easier to see its dependencies at a glance.

Please move `import contextlib` and `from flashinfer.autotuner import autotune` to the top of the file with the other imports.
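Concretely, the suggestion amounts to hoisting the two imports into the module's existing import block, e.g. as below; the `torch` line is shown only as a representative neighboring import, not taken from the diff.

```python
# Top-of-file imports, grouped per PEP 8: standard library first,
# then third-party packages.
import contextlib

import torch

from flashinfer.autotuner import autotune
```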
## 📌 Description

`AutoTuner.choose_one`'s tuning-mode loop calls `_prepare_input_tensors(p, inputs)` before checking the cache. On a cache hit the synthesized tensors are thrown away, but their `torch.rand` / `torch.randint` kernel launches already happened on the device. For any caller that runs `choose_one` repeatedly inside `autotune(True)` (e.g. a benchmark sharing its tuning and measurement call-sites), those kernels recur on every warm-cache call and get attributed to the measured region by CUPTI / nsys.

Move `search_cache` above `_prepare_input_tensors` in the tuning-mode loop. Skip synthesis on cache hit; synthesize as before on cache miss. The lookup passes the caller's inputs (rather than synthesized tensors), which aligns it with the non-tuning branch and the post-loop `search_cache`. Safe under the existing `get_cache_key_extras` contract (dtype-like properties are preserved by synthesis). A sketch of the reordered loop follows this description.

## 🔍 Related Issues

#2398 #2886

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [x] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [x] Tests have been added or updated as needed.
- [x] All tests are passing (`unittest`, etc.).

## Reviewer Notes

## Summary by CodeRabbit

* **Refactor**
  * Reduced runtime overhead during tuning by checking the cache before preparing/synthesizing inputs, avoiding unnecessary expensive tensor synthesis on cache hits.
* **Documentation**
  * Clarified cache behavior and requirements: cache-key extras must be synthesis-invariant, and input shapes vs. actual inputs influence separate matching steps.
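A hedged sketch of the reordering described above: only `search_cache` and `_prepare_input_tensors` are named in the PR text; the loop skeleton, the `profile_tactics` stand-in, and all signatures here are illustrative assumptions, not the actual AutoTuner internals.

```python
def profile_tactics(custom_op, runners, p, tensors):
    """Illustrative stand-in for the tuner's per-profile measurement step."""
    ...


def tuning_mode_loop(tuner, custom_op, runners, profiles, inputs):
    for p in profiles:
        # Cache lookup now happens FIRST, and with the caller's real inputs,
        # matching the non-tuning branch and the post-loop search_cache.
        runner, tactic = tuner.search_cache(custom_op, runners, p, inputs)
        if runner is not None:
            # Warm-cache call: skip tensor synthesis entirely, so no stray
            # torch.rand / torch.randint launches land in the region that
            # CUPTI / nsys attributes to the measurement.
            continue
        # Cache miss: synthesize representative tensors as before and profile.
        tensors = tuner._prepare_input_tensors(p, inputs)
        profile_tactics(custom_op, runners, p, tensors)
```

The key property is that a warm-cache iteration performs only a dictionary-style lookup and launches no device work, which is safe because `get_cache_key_extras` depends only on dtype-like properties that synthesis preserves.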
📌 Description
Fix inconsistency between autotuning and the real benchmark in the DeepSeek MoE benchmark script.
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- [x] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests

- [x] Tests have been added or updated as needed.
- [x] All tests are passing (`unittest`, etc.).

Reviewer Notes
Summary by CodeRabbit
Bug Fixes
Refactor