Libtorch CUDA 12.8 Test with --host-linker-script=use-lcs #146084
tinglvv wants to merge 13 commits into pytorch:main
Conversation
🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/146084
Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit 1515b5f with merge base bfaf76b.

NEW FAILURES - The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Alternatively, we may need to use gen-lcs if PyTorch uses its own linker scripts outside of NVCC. Hmm...
Hi @Skylion007, the build failure was due to the use of the manylinux container in the pre-cxx11 build; Andrey just removed it yesterday (#146200). Let's see the result of the cxx11 build for libtorch.
Oh, we probably need to rebase too then, right?
```sh
TORCH_CUDA_ARCH_LIST="5.0;6.0;7.0;7.5;8.0;8.6"
case ${CUDA_VERSION} in
  12.8)
    TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST};9.0;10.0;12.0+PTX" #Ripping out 5.0 and 6.0 due to ld error
```
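For context, a minimal sketch of how the arch list is assembled under this change. `CUDA_VERSION` is hard-coded here purely for illustration; in the actual CI scripts it comes from the build environment:

```shell
#!/bin/sh
# Sketch only: reproduce the arch-list assembly from the diff above.
# CUDA_VERSION is normally set by the build environment, not hard-coded.
TORCH_CUDA_ARCH_LIST="5.0;6.0;7.0;7.5;8.0;8.6"
CUDA_VERSION=12.8
case ${CUDA_VERSION} in
  12.8)
    # Append the newer architectures (Hopper, Blackwell) plus PTX for forward compat.
    TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST};9.0;10.0;12.0+PTX"
    ;;
esac
echo "${TORCH_CUDA_ARCH_LIST}"
```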
Suggested change:

```diff
-TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST};9.0;10.0;12.0+PTX" #Ripping out 5.0 and 6.0 due to ld error
+TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST};9.0;10.0;12.0+PTX"
```
```diff
 export USE_STATIC_CUDNN=0
 # Try parallelizing nvcc as well
-export TORCH_NVCC_FLAGS="-Xfatbin -compress-all --threads 2"
+export TORCH_NVCC_FLAGS="-Xfatbin -compress-all --threads 2 --host-linker-script=use-lcs"
```
We should probably modify it to just append --threads 2 to TORCH_NVCC_FLAGS if possible :)
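One way to sketch the append approach (assuming `TORCH_NVCC_FLAGS` has already been exported earlier in the script; the first line below just stands in for that earlier assignment):

```shell
#!/bin/sh
# Sketch: extend TORCH_NVCC_FLAGS instead of redefining the whole assignment,
# so any flags set earlier in the script are preserved.
export TORCH_NVCC_FLAGS="-Xfatbin -compress-all --threads 2"
export TORCH_NVCC_FLAGS="${TORCH_NVCC_FLAGS} --host-linker-script=use-lcs"
echo "${TORCH_NVCC_FLAGS}"
```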
@tinglvv Seems like linker scripts have fixed all the other wheels aside from the preCXX11 and CXX-ABI ones!
The rebase was a bit hard because the file was removed. Starting a new PR: #145570
Adding libtorch build to nightlies
Follow-up for #145792
Testing @Skylion007 's suggestion in #145792 (comment)
cc @atalman @malfet @ptrblck @nWEIdia