
docs: Document CUDA version support in README and installation page #2197

Merged
yzh119 merged 1 commit into flashinfer-ai:main from bkryu:cuda_version
Dec 11, 2025

Conversation

bkryu (Collaborator) commented Dec 10, 2025

📌 Description

Add the supported CUDA versions (12.6, 12.8, 13.0, 13.1) to the public docs and state that our goal is to follow PyTorch's supported CUDA versions plus the latest CUDA release.

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Documentation
    • Updated CUDA support documentation with explicit supported versions: 12.6, 12.8, 13.0, and 13.1.
    • Documented CUDA version prerequisites for installation.
    • Added clarification that supported CUDA versions align with PyTorch's officially supported versions and the latest CUDA release.
    • Enhanced GPU support section header for better clarity.

✏️ Tip: You can customize this high-level summary in your review settings.

coderabbitai Bot (Contributor) commented Dec 10, 2025

Walkthrough

Documentation updates to README.md and docs/installation.rst clarify FlashInfer's CUDA version support. Changes include renaming the GPU Support section header, explicitly listing supported CUDA versions (12.6, 12.8, 13.0, 13.1), and adding notes about alignment with PyTorch's CUDA support policy.

Changes

Cohort / File(s) Summary
Documentation updates
README.md, docs/installation.rst
Added explicit CUDA version support details (12.6, 12.8, 13.0, 13.1) and notes clarifying alignment with PyTorch's supported CUDA versions plus the latest CUDA release. Renamed GPU Support section header to "GPU and CUDA Support".

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~5–10 minutes

  • Verify listed CUDA versions (12.6, 12.8, 13.0, 13.1) are accurate and currently supported
  • Confirm documentation clarity and consistency between README.md and docs/installation.rst

Possibly related PRs

Suggested reviewers

  • yzh119
  • cyx-6
  • wenscarl

Poem

🐰 A rabbit hops through CUDA's lane,
With versions clear, no more refrain!
12.6, 12.8, just right,
Documentation shines so bright! ✨
PyTorch friends aligned at last.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately summarizes the main change: adding CUDA version support documentation to README and installation page.
Description check ✅ Passed The description clearly states the purpose (add CUDA versions 12.6, 12.8, 13.0, 13.1 and document PyTorch alignment policy) and includes completed pre-commit and test checklists.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6bb01d1 and 4f906ee.

📒 Files selected for processing (2)
  • README.md (1 hunks)
  • docs/installation.rst (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs
🔇 Additional comments (2)
docs/installation.rst (1)

18-21: Approve documentation additions; verify cu129 references elsewhere in the file.

The CUDA versions list and PyTorch alignment note are clear and helpful. However, I notice lines 46 and 117 still reference cu129 in JIT cache installation instructions, which corresponds to CUDA 12.9—a version not listed in the supported versions (12.6, 12.8, 13.0, 13.1).

This creates an inconsistency: either cu129 packages are legitimately available and should be listed in the supported versions, or the installation instructions should be updated to reference only supported CUDA versions.

Please verify:

  1. Are cu129 (CUDA 12.9) packages still available at the referenced indices?
  2. Should cu129 be added to the supported versions list, or should lines 46 and 117 be updated to reference cu128, cu130, or cu131 instead?
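The `cuNNN` wheel suffix encodes a CUDA `major.minor` version, so the inconsistency flagged above can be checked mechanically. A minimal sketch of that mapping, assuming the standard PyTorch-style suffix convention (the helper names here are illustrative, not part of FlashInfer's API):

```python
# Map a wheel suffix like "cu129" to a CUDA version string like "12.9"
# and check it against the versions documented in this PR.
SUPPORTED = {"12.6", "12.8", "13.0", "13.1"}

def suffix_to_version(suffix: str) -> str:
    """Convert 'cu129' -> '12.9', 'cu130' -> '13.0'."""
    digits = suffix.removeprefix("cu")
    return f"{digits[:-1]}.{digits[-1]}"

def is_supported(suffix: str) -> bool:
    return suffix_to_version(suffix) in SUPPORTED

print(is_supported("cu128"))  # True
print(is_supported("cu129"))  # False: the cu129 references are inconsistent
```

Under this convention, `cu129` maps to CUDA 12.9, which is absent from the supported list, matching the reviewer's observation.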
README.md (1)

190-196: Approve section rename and CUDA documentation; verify cu129 inconsistency and "latest CUDA release" clarity.

The header rename and explicit CUDA versions documentation improve clarity. The note about following PyTorch support plus the latest CUDA release is informative. However, two concerns:

  1. cu129 inconsistency (lines 64, 116): Installation instructions reference cu129 (CUDA 12.9), which is not listed in the supported versions. This needs alignment.

  2. "Latest CUDA release" vagueness: The note references "the latest CUDA release" without specifying which version. As of December 2025, this appears to be 13.1, but the phrasing could become outdated quickly. Consider being more explicit (e.g., "CUDA 13.1 and potentially newer versions as they are released").

Please verify:

  1. Should cu129 be added to the supported versions, or should installation instructions be updated to reference only cu128, cu130, or cu131?
  2. Would clarifying "the latest CUDA release" to a specific version (or a statement like "the latest available version") improve the documentation?


gemini-code-assist Bot (Contributor) commented
Summary of Changes

Hello @bkryu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request improves the project's documentation by clearly outlining the specific CUDA versions that FlashInfer supports. It also establishes a transparent policy regarding CUDA compatibility, stating the project's intention to align with PyTorch's supported versions and the latest CUDA release. These updates provide users with essential information for setting up their development environments correctly.

Highlights

  • CUDA Version Documentation: Explicitly documented the supported CUDA versions (12.6, 12.8, 13.0, 13.1) in both the README.md and the installation.rst documentation.
  • CUDA Support Policy: Added a note clarifying that FlashInfer aims to follow PyTorch's supported CUDA versions, in addition to supporting the latest CUDA release.
  • Documentation Updates: The 'GPU Support' section in README.md was updated to 'GPU and CUDA Support' to reflect the added detail.

@bkryu bkryu self-assigned this Dec 10, 2025
gemini-code-assist Bot left a comment

Code Review

This pull request updates the documentation in README.md and docs/installation.rst to include the list of supported CUDA versions. The changes are a good addition. However, I've identified that these updates create inconsistencies with the installation examples provided in the same documents, which still refer to an outdated set of CUDA versions. I've left comments suggesting updates to these examples for consistency. Additionally, while the documentation now lists CUDA 13.1 as supported, I noticed there isn't a corresponding Dockerfile.cu131 in the repository, unlike for other versions. It would be helpful to clarify if this is intended or if the Dockerfile will be added in a separate change.

Comment thread README.md

FlashInfer currently provides support for NVIDIA SM architectures 75 and higher and beta support for 103, 110, 120, and 121.

**Supported CUDA Versions:** 12.6, 12.8, 13.0, 13.1
Severity: medium

While adding the supported CUDA versions is a great improvement, it creates an inconsistency with the installation examples later in this file. The examples at lines 64-65 and 116-117 still refer to an outdated list of CUDA versions (cu128, cu129, or cu130). To prevent user confusion, please update these examples to align with the new list of supported versions (12.6, 12.8, 13.0, 13.1).

Comment thread docs/installation.rst

- Python: 3.10, 3.11, 3.12, 3.13, 3.14

- CUDA: 12.6, 12.8, 13.0, 13.1
Severity: medium

This new list of supported CUDA versions is inconsistent with the installation examples provided in this document at lines 46-47 and 117-118. The examples still refer to cu128, cu129, or cu130. Please update them to match the newly documented supported versions to ensure clarity for users.

bkryu (Collaborator, Author) commented Dec 11, 2025

I would suggest skipping the FlashInfer bot's internal comprehensive unit tests, because this PR only changes an .md and an .rst file.

@sricketts sricketts added this to the 2025 Dec milestone Dec 11, 2025
@yzh119 yzh119 enabled auto-merge (squash) December 11, 2025 02:13
@yzh119 yzh119 disabled auto-merge December 11, 2025 06:07
@yzh119 yzh119 merged commit dc0ade7 into flashinfer-ai:main Dec 11, 2025
3 checks passed
@bkryu bkryu deleted the cuda_version branch December 11, 2025 17:55
BingooYang pushed a commit to BingooYang/flashinfer that referenced this pull request Mar 13, 2026
3 participants