Deprecate sm70 for CUDA 12.8 binary #147607
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/147607
Note: Links to docs will display an error until the docs builds have been completed. ✅ No Failures as of commit 6970267 with merge base aade4fb. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Do not remove the support. CUDA 12.8 works just fine for these capabilities; last night I compiled llama.cpp and ollama for sm_52.

If we can fix the linking of large binaries for libtorch with
It's not about a linking issue; the goal is to drop deprecated architectures.
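As background for this thread: whether a prebuilt wheel runs on a given GPU comes down to whether the GPU's compute capability is covered by the architectures the binary was compiled for, either directly as SASS or via PTX forward compatibility. A minimal sketch of that check; the function name and the architecture list are illustrative, not the exact set shipped in any official wheel:

```python
def wheel_supports(device_cc, built_archs):
    """Return True if a binary built for `built_archs` can run on a device
    with compute capability `device_cc` (both as (major, minor) tuples).

    A device runs SASS compiled for its exact capability; PTX embedded for
    an equal-or-lower architecture can also be JIT-compiled forward. This
    sketch models only the simple "lowest built arch <= device arch" rule,
    assuming PTX is embedded for the lowest architecture in the list.
    """
    lowest = min(built_archs)
    return device_cc in built_archs or device_cc >= lowest

# Illustrative list for a hypothetical CUDA 12.8 wheel that dropped sm_70:
archs = [(7, 5), (8, 0), (8, 6), (9, 0)]
print(wheel_supports((7, 0), archs))  # Volta -> False
print(wheel_supports((8, 6), archs))  # Ampere -> True
```

With sm_70 absent from the build list and no PTX for anything older than sm_75, a Volta card would fail the check, which is exactly the user-visible effect of this PR.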
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict.
Successfully rebased b800cb3 to 6970267.
@pytorchmergebot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
This statement seems misleading. The official NVIDIA release notes for 12.8.1 say "Architecture support for Maxwell, Pascal, and Volta is considered feature-complete and will be frozen in an upcoming release." "Frozen" suggests to me that CUDA will continue to work on these architectures, but no more fixes or features will come in the future. The release notes for CUDA 12.9 clarify that sm_50-sm_70 support will remain for 12.x and will be dropped in the next major release. (None of this means the official PyTorch build for CUDA >=12.8 has to continue to support sm_50-sm_70; I'm just clarifying that CUDA itself will continue to provide that support.)
Follow-up to https://github.com/pytorch/pytorch/pull/146265/files, dropping sm_70 as well, since "Architecture support for Maxwell, Pascal, and Volta is considered feature-complete and will be frozen in an upcoming release."
#145570
cc @ptrblck @atalman @nWEIdia
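For users who still need Volta after the prebuilt wheels drop it, a source build can keep sm_70 kernels via the `TORCH_CUDA_ARCH_LIST` build-time variable. A hedged sketch; `TORCH_CUDA_ARCH_LIST` is PyTorch's real build variable, but the exact architecture list below is an illustrative choice, not any official wheel configuration:

```shell
# Build PyTorch from source with Volta (sm_70) retained.
# The list below is illustrative; trim it to the GPUs you actually target
# to shorten compile times and shrink the binary.
export TORCH_CUDA_ARCH_LIST="7.0;7.5;8.0;8.6;9.0"
python setup.py develop
```

This is precisely the escape hatch the deprecation relies on: dropping sm_70 from the shipped binaries does not prevent source builds from targeting it while CUDA 12.x still supports the architecture.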