docs: Document CUDA version support in README and installation page #2197
yzh119 merged 1 commit into flashinfer-ai:main
Conversation
Walkthrough
Documentation updates to README.md and docs/installation.rst clarify FlashInfer's CUDA version support. Changes include renaming the GPU Support section header, explicitly listing supported CUDA versions (12.6, 12.8, 13.0, 13.1), and adding notes about alignment with PyTorch's CUDA support policy.
Summary of Changes
This pull request improves the project's documentation by clearly outlining the specific CUDA versions that FlashInfer supports. It also establishes a transparent policy regarding CUDA compatibility, stating the project's intention to align with PyTorch's supported versions and the latest CUDA release. These updates provide users with essential information for setting up their development environments correctly.
Code Review
This pull request updates the documentation in README.md and docs/installation.rst to include the list of supported CUDA versions. The changes are a good addition. However, I've identified that these updates create inconsistencies with the installation examples provided in the same documents, which still refer to an outdated set of CUDA versions. I've left comments suggesting updates to these examples for consistency. Additionally, while the documentation now lists CUDA 13.1 as supported, I noticed there isn't a corresponding Dockerfile.cu131 in the repository, unlike for other versions. It would be helpful to clarify if this is intended or if the Dockerfile will be added in a separate change.
> FlashInfer currently provides support for NVIDIA SM architectures 75 and higher and beta support for 103, 110, 120, and 121.
> **Supported CUDA Versions:** 12.6, 12.8, 13.0, 13.1
While adding the supported CUDA versions is a great improvement, it creates an inconsistency with the installation examples later in this file. The examples at lines 64-65 and 116-117 still refer to an outdated list of CUDA versions (cu128, cu129, or cu130). To prevent user confusion, please update these examples to align with the new list of supported versions (12.6, 12.8, 13.0, 13.1).
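To make the relationship between a documented CUDA version and its wheel tag (e.g. `cu128`) concrete, here is a minimal sketch; the `wheel_tag` helper is hypothetical and not part of FlashInfer's tooling, and the supported list simply mirrors the versions named in this PR:

```python
# Hypothetical helper mapping a documented CUDA version string to the
# short wheel tag used in install examples (e.g. "12.8" -> "cu128").
SUPPORTED_CUDA = ("12.6", "12.8", "13.0", "13.1")

def wheel_tag(cuda_version: str) -> str:
    """Return the wheel tag for a CUDA version in the documented set."""
    if cuda_version not in SUPPORTED_CUDA:
        raise ValueError(
            f"CUDA {cuda_version} is not in the documented set {SUPPORTED_CUDA}"
        )
    major, minor = cuda_version.split(".")
    return f"cu{major}{minor}"

print(wheel_tag("12.8"))  # cu128
print(wheel_tag("13.1"))  # cu131
```

Keeping the examples generated from one such list (rather than hand-edited in two places) would avoid the drift this review comment points out.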
> - Python: 3.10, 3.11, 3.12, 3.13, 3.14
> - CUDA: 12.6, 12.8, 13.0, 13.1
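As a hedged illustration only, the Python/CUDA support matrix above could be checked programmatically like this; the `check_support` helper is illustrative and not part of FlashInfer:

```python
# Illustrative check against the support matrix documented in this PR.
SUPPORTED_PYTHON = ("3.10", "3.11", "3.12", "3.13", "3.14")
SUPPORTED_CUDA = ("12.6", "12.8", "13.0", "13.1")

def check_support(python_version: str, cuda_version: str) -> bool:
    """Return True when both versions fall inside the documented matrix."""
    return python_version in SUPPORTED_PYTHON and cuda_version in SUPPORTED_CUDA

print(check_support("3.12", "13.1"))  # True
print(check_support("3.9", "12.6"))   # False: Python 3.9 is not listed
```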
I would suggest skipping FlashInfer bot's internal comprehensive unit test because this PR only changes documentation.
📌 Description
Add the supported CUDA versions (12.6, 12.8, 13.0, 13.1) to the public docs, and state that our goal is to follow PyTorch's supported versions plus the latest CUDA release.
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- [x] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests
- [x] Tests have been added or updated as needed.
- [x] All tests are passing (`unittest`, etc.).

Reviewer Notes
Summary by CodeRabbit

- Documentation
  - Updated CUDA support documentation with explicit supported versions: 12.6, 12.8, 13.0, and 13.1.
  - Documented CUDA version prerequisites for installation.
  - Added clarification that supported CUDA versions align with PyTorch's officially supported versions and the latest CUDA release.
  - Enhanced GPU support section header for better clarity.