[Chore][CI] Upgrade CI base image to CUDA 13.0 #2981
Merged
sammshen merged 1 commit into LMCache:dev · Apr 8, 2026
Conversation
maobaolong
approved these changes
Apr 8, 2026
Collaborator
maobaolong
left a comment
I'm aware of this issue too. LGTM for this fix, thanks!
Shaoting-Feng
approved these changes
Apr 8, 2026
Collaborator
Can we force merge this PR? The current CI is blocking this CI fix lol.
vLLM nightly now requires PyTorch 2.11.0 (CUDA 13.0). Update the CI base image from cuda-dl-base:25.04-cuda12.9 to nvidia/cuda:13.0.2. Install libcudart12 for backward compat since vLLM's compiled _C extension still links against libcudart.so.12.

Signed-off-by: Samuel Shen <slshen@uchciago.edu>
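The image change described in the commit message might look roughly like this (a sketch, not the repository's actual Dockerfile; the image tag and the libcudart12 package name come from the commit message, everything else is assumed):

```dockerfile
# CUDA 13 devel toolchain as the new CI base
FROM nvidia/cuda:13.0.2-devel-ubuntu24.04

# vLLM's precompiled _C extension still links against libcudart.so.12,
# so install the CUDA 12 runtime library alongside the CUDA 13 toolkit
# and refresh the linker cache so it is found at load time.
RUN apt-get update \
 && apt-get install -y --no-install-recommends libcudart12 \
 && rm -rf /var/lib/apt/lists/* \
 && ldconfig
```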
Contributor
Author
fixed, CI back online
deng451e
approved these changes
Apr 8, 2026
Oasis-Git
pushed a commit
to Oasis-Git/LMCache
that referenced
this pull request
Apr 13, 2026
vLLM nightly now requires PyTorch 2.11.0 which is built against CUDA 13.0. Update the CI base image to match.

Signed-off-by: Samuel Shen <slshen@uchciago.edu>
Co-authored-by: Samuel Shen <slshen@uchciago.edu>
ApostaC
added a commit
to ApostaC/LMCache
that referenced
this pull request
Apr 16, 2026
The base image was bumped to nvidia/cuda:13.0.2-devel-ubuntu24.04 in LMCache#2981, but setup-env.sh installs vLLM from the generic nightly index (wheels.vllm.ai/nightly/vllm/), which resolves non-deterministically to either a cu128 or a cu130 torch wheel. When the resolver picks cu128, torch.utils.cpp_extension._check_cuda_version aborts the LMCache editable install with:

RuntimeError: The detected CUDA version (13.0) mismatches the version that was used to compile PyTorch (12.8).

LMCache#3055 tried to paper over this at runtime by apt-installing cuda-compiler-12-8 on mismatch and pointing CUDA_HOME at /usr/local/cuda-12.8, but cuda-compiler-*-* only ships nvcc -- not the CUDA math-library dev headers (cusparse.h, cublas.h, etc.). The build then fails deeper with a cryptic:

fatal error: cusparse.h: No such file or directory

Every k3 build on dev since 10fd636 (2026-04-16 09:55 UTC) has failed this way because vLLM nightly happens to be publishing cu128 torch today. Extending the apt install list is a band-aid; the real fix is to stop resolving torch non-deterministically. Pin the install to vLLM's per-CUDA-major cu130 sub-index per the official vllm.ai install instructions. torch.version.cuda is now deterministically "13.0" and matches system nvcc 13, so _check_cuda_version passes with the system toolchain -- no CUDA_HOME override, no apt install, no HTML scraping.

Also drop the HTML index scraper (no longer needed with the proper --extra-index-url flags) and replace the runtime alignment block with a small Python sanity check that fails with a clear message if the pin ever drifts again, so future breakage surfaces here instead of inside ninja.

- Unblocks every k3 pipeline on dev.
- Removes ~40 lines of CUDA version-alignment logic.
- Keeps the pandas auto-heal loop from LMCache#3055 (that problem is independent of this one).

Signed-off-by: Yihua Cheng <yihua98@uchicago.edu>
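The "small Python sanity check" mentioned in the commit message could be as simple as the following (a sketch; the helper names are hypothetical, and it assumes torch is importable in the CI environment):

```python
import sys


def cuda_major_matches(cuda_version, expected_major="13"):
    """True if a torch.version.cuda string (e.g. "13.0") has the expected major.

    cuda_version is None for CPU-only torch builds, which also counts as drift.
    """
    return cuda_version is not None and cuda_version.split(".")[0] == expected_major


def check_pin():
    """Fail the build early, with a clear message, if the index pin drifts."""
    import torch  # resolved from the pinned cu130 sub-index
    if not cuda_major_matches(torch.version.cuda):
        sys.exit(
            f"torch wheel was built for CUDA {torch.version.cuda!r}, expected 13.x; "
            "the vLLM nightly index pin has drifted"
        )
```

Failing here, before the editable install, surfaces the mismatch with a readable message instead of a ninja error deep inside the build.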
sammshen
pushed a commit
that referenced
this pull request
Apr 16, 2026
…age (#3061)

The base image was bumped to nvidia/cuda:13.0.2-devel-ubuntu24.04 in #2981, but setup-env.sh installs vLLM from the generic nightly index (wheels.vllm.ai/nightly/vllm/), which resolves non-deterministically to either a cu128 or a cu130 torch wheel. When the resolver picks cu128, torch.utils.cpp_extension._check_cuda_version aborts the LMCache editable install with:

RuntimeError: The detected CUDA version (13.0) mismatches the version that was used to compile PyTorch (12.8).

#3055 tried to paper over this at runtime by apt-installing cuda-compiler-12-8 on mismatch and pointing CUDA_HOME at /usr/local/cuda-12.8, but cuda-compiler-*-* only ships nvcc -- not the CUDA math-library dev headers (cusparse.h, cublas.h, etc.). The build then fails deeper with a cryptic:

fatal error: cusparse.h: No such file or directory

Every k3 build on dev since 10fd636 (2026-04-16 09:55 UTC) has failed this way because vLLM nightly happens to be publishing cu128 torch today. Extending the apt install list is a band-aid; the real fix is to stop resolving torch non-deterministically. Pin the install to vLLM's per-CUDA-major cu130 sub-index per the official vllm.ai install instructions. torch.version.cuda is now deterministically "13.0" and matches system nvcc 13, so _check_cuda_version passes with the system toolchain -- no CUDA_HOME override, no apt install, no HTML scraping.

Also drop the HTML index scraper (no longer needed with the proper --extra-index-url flags) and replace the runtime alignment block with a small Python sanity check that fails with a clear message if the pin ever drifts again, so future breakage surfaces here instead of inside ninja.

- Unblocks every k3 pipeline on dev.
- Removes ~40 lines of CUDA version-alignment logic.
- Keeps the pandas auto-heal loop from #3055 (that problem is independent of this one).

Signed-off-by: Yihua Cheng <yihua98@uchicago.edu>
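The deterministic pin described above could look roughly like this in a setup script (a sketch: the helper is hypothetical, and the sub-index URL pattern is an assumption based on the commit's mention of a "per-CUDA-major cu130 sub-index" on wheels.vllm.ai):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper: turn a CUDA version into a vLLM wheel-tag suffix.
# "13.0" -> "cu130", "12.8" -> "cu128"
cuda_wheel_tag() {
  printf 'cu%s\n' "${1//./}"
}

# Pinning to the per-CUDA-major sub-index (URL pattern assumed) means pip can
# only resolve torch wheels built for the system CUDA, instead of whichever
# build the generic nightly index happens to publish that day:
#   pip install --pre vllm \
#     --extra-index-url "https://wheels.vllm.ai/nightly/$(cuda_wheel_tag 13.0)"
```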
vLLM nightly now requires PyTorch 2.11.0 which is built against CUDA 13.0. Update the CI base image to match.
Note
Medium Risk
Medium risk because it changes the CUDA base image and runtime libraries used by CI pods, which can break GPU-dependent builds/tests if the new CUDA 13 environment differs from previous images.
Overview
Updates the K3s Buildkite harness CI base image to CUDA 13 by switching the Docker base from NVIDIA's CUDA DL image to nvidia/cuda:13.0.2-devel-ubuntu24.04. Adjusts image setup to install libcudart12 and run a generic ldconfig (removing the prior CUDA 12.9 compat path), aligning the CI environment with newer CUDA/PyTorch requirements.

Reviewed by Cursor Bugbot for commit dcf764c.