[BE]: Update cusparselt to 0.7.1 (#155232)
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/155232
Note: Links to docs will display an error until the docs builds have been completed.
❌ 6 New Failures, 2 Unrelated Failures as of commit 6e68752 with merge base da1f898.
NEW FAILURES - The following jobs have failed:
UNSTABLE - The following jobs are marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from d78ec5a to 4686a22
Force-pushed from cd8626d to e238d23
@nWEIdia Any thoughts here?
This may have implications (and potentially complications) for the binary size increase in the upcoming v2.8.
We dynamically link cuSPARSELt, and NVIDIA distributes it as a separate wheel, so it shouldn't cause any significant binary size increase in our wheel.
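Because cuSPARSELt ships as its own wheel rather than being bundled into torch, you can check what that wheel provides without importing torch at all. A minimal sketch, assuming the current PyPI distribution name `nvidia-cusparselt-cu12` and the usual `libcusparseLt*.so` naming:

```python
from importlib import metadata

def cusparselt_libs(dist_name: str = "nvidia-cusparselt-cu12"):
    """Return the libcusparseLt shared objects shipped by the wheel,
    or an empty list when the wheel is not installed."""
    try:
        # metadata.files() lists every file recorded for the distribution
        files = metadata.files(dist_name) or []
    except metadata.PackageNotFoundError:
        return []
    return [str(f) for f in files if f.name.startswith("libcusparseLt")]

print(cusparselt_libs() or "cuSPARSELt wheel not installed")
```

Since the wheel is resolved at load time through the RPATH rather than baked into libtorch, upgrading it is mostly a version bump plus a path check.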
Ah yes, just recalled that separation part, thanks! In this case it LGTM.
LGTM if CI is green. Will also upgrade cuSPARSELt for the CUDA 12.9 builds in my PRs.
@pytorchbot merge
Just need @atalman to upload the binaries for this one.
Merge failed. Reason: Approvers from one of the following sets are needed:
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased from e238d23 to eb24121
'$ORIGIN/../../nvidia/curand/lib'
'$ORIGIN/../../nvidia/cusolver/lib'
'$ORIGIN/../../nvidia/cusparse/lib'
'$ORIGIN/../../nvidia/cusparselt/lib'
NVIDIA moved the cusparselt library under /nvidia/ between 0.6.3 and 0.7, which is why the RPATH entry changes.
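For context on the entry above: the dynamic loader expands `$ORIGIN` to the directory containing the shared object itself, then normalizes the relative components, which is how the RPATH lands in the wheel-provided `nvidia/cusparselt/lib` directory. A small sketch of that resolution (the site-packages layout shown is illustrative, not the exact install path):

```python
import os.path

def resolve_origin_rpath(so_path: str, rpath_entry: str) -> str:
    # $ORIGIN expands to the directory holding the shared object;
    # normpath then collapses the ../.. components, mirroring ld.so.
    origin = os.path.dirname(so_path)
    return os.path.normpath(rpath_entry.replace("$ORIGIN", origin))

# Illustrative layout of an installed torch wheel:
so = "/site-packages/torch/lib/libtorch_cuda.so"
print(resolve_origin_rpath(so, "$ORIGIN/../../nvidia/cusparselt/lib"))
# -> /site-packages/nvidia/cusparselt/lib
```

So the 0.6.3-era entry and the 0.7 entry differ only in where the wheel unpacks the library, not in how resolution works.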
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 6 jobs have failed; the first few are: linux-binary-manywheel / manywheel-py3_9-rocm6_4-build / build, linux-binary-manywheel / manywheel-py3_12-rocm6_4-build / build, linux-binary-manywheel / manywheel-py3_13-rocm6_4-build / build, linux-binary-manywheel / manywheel-py3_10-rocm6_4-build / build, linux-binary-manywheel / manywheel-py3_13t-rocm6_4-build / build. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -i
Merge started. Your change will be merged while ignoring the following 8 checks: docker-builds / docker-build (linux.12xlarge, pytorch-linux-jammy-py3-clang12-executorch), .github/workflows/pull.yml / linux-jammy-py3-clang12-executorch / build, linux-binary-manywheel / manywheel-py3_9-rocm6_4-build / build, linux-binary-manywheel / manywheel-py3_12-rocm6_4-build / build, linux-binary-manywheel / manywheel-py3_13-rocm6_4-build / build, linux-binary-manywheel / manywheel-py3_10-rocm6_4-build / build, linux-binary-manywheel / manywheel-py3_13t-rocm6_4-build / build, linux-binary-manywheel / manywheel-py3_11-rocm6_4-build / build. Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
This update is needed to support sparse operations on Blackwell and adds new features to the library. It also reduces library size compared to 0.7.