
test: Skip sm90 test in test_jit_warmup.py if not on sm90 #2189

Merged

yzh119 merged 1 commit into flashinfer-ai:main from bkryu:test_jit_warmup_skip_sm90 on Dec 9, 2025

Conversation

@bkryu
Collaborator

@bkryu bkryu commented Dec 9, 2025

📌 Description

test_jit_warmup.py contains two tests, test_warmpup_llama and test_warmpup_llama_sm90, where the _sm90 variant runs everything the first test does plus an additional fa3 prefill compilation that is only supported on SM90.

Under normal circumstances this test should also pass on non-SM90 cards, but it can fail if the environment contains an incomplete flashinfer-jit-cache that is missing SM90 kernels and FLASHINFER_CUDA_ARCH_LIST is not set to include 9.0a.
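
One way to avoid that failure mode when building the JIT cache yourself is to make sure the arch list includes 9.0a before any kernels are compiled. The snippet below is only an illustration, not part of this PR; it assumes the variable takes the usual space-separated "major.minor[a]" format (as in TORCH_CUDA_ARCH_LIST), so adjust the value for your own GPUs:

import os

# Hypothetical setup: include Hopper (9.0a) in the JIT arch list before any
# flashinfer modules are imported or compiled. The exact value should match
# the GPUs you are building for.
os.environ["FLASHINFER_CUDA_ARCH_LIST"] = "8.0 9.0a"

import flashinfer  # imported after the environment variable is set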

Since:

  1. This can interfere with unit testing by causing unexpected failures on non-SM90 cards, and
  2. On non-SM90 cards there is no additional gain from testing SM90 kernel compilation,

the current PR skips the _sm90 test on such hardware.
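
For reference, the added guard looks roughly like the snippet below (the skip condition and reason string are quoted from the review excerpts further down; the test body itself is elided here):

import pytest
import torch

# Skip the SM90-only variant when the GPU's major compute capability is not 9,
# since the fa3 prefill path it compiles is only supported on SM90 (Hopper).
@pytest.mark.skipif(
    torch.cuda.get_device_capability()[0] != 9,
    reason="fa3 backend is only supported on SM90",
)
def test_warmpup_llama_sm90():
    ...  # same warmup as test_warmpup_llama, plus the fa3 prefill compilation

On a non-SM90 machine you can confirm the skip with something like pytest tests/utils/test_jit_warmup.py -k sm90.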

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Tests
    • Updated test execution conditions to ensure tests run only on supported hardware configurations.


@coderabbitai
Contributor

coderabbitai Bot commented Dec 9, 2025

Walkthrough

A pytest skip decorator is added to test_warmpup_llama_sm90 to gate test execution on CUDA SM90 capability. The decorator skips the test on systems without SM90 architecture support, as the fa3 backend requires it.

Changes

Cohort / File(s): Test skip guard (tests/utils/test_jit_warmup.py)
Change Summary: Added a @pytest.mark.skipif() decorator to test_warmpup_llama_sm90 to skip execution when the CUDA device's compute capability is not 9.x (SM90)

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

  • Single-file change consisting of a straightforward pytest decorator addition
  • No logic changes or complex interactions

Possibly related PRs

  • flashinfer-ai/flashinfer#2060: Adds similar CUDA SM-architecture-based pytest skip guards to tests (that PR targets SM10x, this PR targets SM90)

Suggested reviewers

  • nvmbreughe
  • yzh119

Poem

🐰 A test that skipped with glee,
SM90 only, you see!
The fa3 backend's delight,
Now gated just right,
CUDA nines dance wild and free! ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title clearly and specifically describes the main change: adding a conditional skip decorator to the SM90-specific test based on GPU capability.
  • Description check ✅ Passed: The description includes a comprehensive explanation of what was changed and why, addressing the rationale for disabling the SM90 test and acknowledging pre-commit checks.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5fe01a2 and 85017b1.

📒 Files selected for processing (1)
  • tests/utils/test_jit_warmup.py (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs
🔇 Additional comments (2)
tests/utils/test_jit_warmup.py (2)

18-18: Importing pytest is appropriate for the new skip marker

The pytest import is required for the @pytest.mark.skipif decorator below; no issues here.


61-64: Guard torch.cuda.get_device_capability with torch.cuda.is_available to avoid test collection failures on non‑CUDA setups

The skip condition evaluates torch.cuda.get_device_capability() at import/collection time. On systems without CUDA support or no CUDA device available, this will raise an exception and fail the entire test module instead of cleanly skipping the test.

Guard the call with torch.cuda.is_available() to ensure the expression short‑circuits:

@pytest.mark.skipif(
-    torch.cuda.get_device_capability()[0] != 9,
+    (not torch.cuda.is_available()) or torch.cuda.get_device_capability()[0] != 9,
    reason="fa3 backend is only supported on SM90",
)


@gemini-code-assist
Contributor

Summary of Changes

Hello @bkryu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the stability and relevance of the test suite by preventing a specific test, test_warmpup_llama_sm90, from running on hardware configurations that do not support its intended functionality. By conditionally skipping this test on non-SM90 GPUs, it eliminates potential failures caused by environmental mismatches or incomplete JIT caches, ensuring a cleaner and more accurate test pass rate for various setups.

Highlights

  • Test Skipping Logic: Implemented a conditional skip for the test_warmpup_llama_sm90 test, ensuring it only runs on NVIDIA GPUs with SM90 compute capability.
  • Dependency Addition: Added an import for the pytest library to enable the use of its marking functionalities.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request correctly adds a pytest.mark.skipif decorator to skip a test for the SM90 architecture when not running on corresponding hardware. My review includes a suggestion to make this condition more robust by checking for CUDA availability and using a more precise utility function that also verifies the required CUDA version. This will improve test reliability and maintain consistency with the library's internal checks.

As a minor note, I noticed a typo in the test function names: test_warmpup_llama and test_warmpup_llama_sm90. You might want to correct this to test_warmup_... for better clarity and consistency.

Comment on lines +61 to +64
@pytest.mark.skipif(
torch.cuda.get_device_capability()[0] != 9,
reason="fa3 backend is only supported on SM90",
)
Contributor


Severity: medium

The skipif condition can be made more robust and accurate.

  1. It should check if CUDA is available before attempting to get device capabilities. This will prevent test collection errors in non-CUDA environments.
  2. It's better to use the existing helper flashinfer.utils.is_sm90a_supported to check for SM90 support. This function is more precise as it checks for both the compute capability (major version 9) and the required minimum CUDA version (>=12.3), which is necessary for fa3 backend support.

Using this helper makes the test condition consistent with the implementation's support checks.

@pytest.mark.skipif(
    not torch.cuda.is_available()
    or not flashinfer.utils.is_sm90a_supported(torch.device("cuda")),
    reason="fa3 backend requires SM90 with CUDA >= 12.3",
)

@bkryu
Collaborator Author

bkryu commented Dec 9, 2025

/bot run

@flashinfer-bot
Collaborator

GitLab MR !184 has been created, and the CI pipeline #39847511 is currently running. I'll report back once the pipeline job completes.

@yzh119 yzh119 enabled auto-merge (squash) December 9, 2025 02:15
@yzh119 yzh119 merged commit abcd8e0 into flashinfer-ai:main Dec 9, 2025
4 checks passed
@bkryu bkryu deleted the test_jit_warmup_skip_sm90 branch December 9, 2025 22:02
BingooYang pushed a commit to BingooYang/flashinfer that referenced this pull request Mar 13, 2026
