[Hotfix][CI] Pin vLLM nightly to cu130 index to match CUDA 13 base image#3061
Merged
sammshen merged 1 commit into LMCache:dev on Apr 16, 2026
Conversation
The base image was bumped to nvidia/cuda:13.0.2-devel-ubuntu24.04 in LMCache#2981, but setup-env.sh installs vLLM from the generic nightly index (wheels.vllm.ai/nightly/vllm/), which resolves non-deterministically to either a cu128 or a cu130 torch wheel. When the resolver picks cu128, torch.utils.cpp_extension._check_cuda_version aborts the LMCache editable install with:

RuntimeError: The detected CUDA version (13.0) mismatches the version that was used to compile PyTorch (12.8).

LMCache#3055 tried to paper over this at runtime by apt-installing cuda-compiler-12-8 on mismatch and pointing CUDA_HOME at /usr/local/cuda-12.8, but cuda-compiler-*-* only ships nvcc, not the CUDA math-library dev headers (cusparse.h, cublas.h, etc.). The build then fails deeper with a cryptic:

fatal error: cusparse.h: No such file or directory

Every k3 build on dev since 10fd636 (2026-04-16 09:55 UTC) has failed this way because vLLM nightly happens to be publishing cu128 torch today. Extending the apt install list is a band-aid; the real fix is to stop resolving torch non-deterministically.

Pin the install to vLLM's per-CUDA-major cu130 sub-index, per the official vllm.ai install instructions. torch.version.cuda is now deterministically "13.0" and matches system nvcc 13, so _check_cuda_version passes with the system toolchain: no CUDA_HOME override, no apt install, no HTML scraping. Also drop the HTML index scraper (no longer needed with the proper --extra-index-url flags) and replace the runtime alignment block with a small Python sanity check that fails with a clear message if the pin ever drifts again, so future breakage surfaces here instead of inside ninja.

- Unblocks every k3 pipeline on dev.
- Removes ~40 lines of CUDA version-alignment logic.
- Keeps the pandas auto-heal loop from LMCache#3055 (that problem is independent of this one).

Signed-off-by: Yihua Cheng <yihua98@uchicago.edu>
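The fail-fast sanity check described above could look roughly like the following. The merged check is Python; this is a functionally equivalent sketch in shell (the language of setup-env.sh), not the exact merged code, and all function names here are hypothetical:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the fail-fast CUDA-pin check; names are hypothetical.
set -euo pipefail

# Extract the CUDA major version from an `nvcc --version` banner passed as $1.
nvcc_major_from() {
  sed -n 's/.*release \([0-9][0-9]*\)\..*/\1/p' <<<"$1"
}

# Extract the major version from a torch.version.cuda string such as "13.0".
torch_major_from() {
  printf '%s\n' "${1%%.*}"
}

# In the real script the inputs would come from the live toolchain, e.g.:
#   nvcc_banner="$(nvcc --version)"
#   torch_cuda="$(python3 -c 'import torch; print(torch.version.cuda)')"
check_cuda_pin() {
  local torch_cuda="$1" nvcc_banner="$2"
  if [ "$(torch_major_from "$torch_cuda")" != "$(nvcc_major_from "$nvcc_banner")" ]; then
    echo "FATAL: torch was built for CUDA $torch_cuda but system nvcc is" \
         "$(nvcc_major_from "$nvcc_banner").x; the cu130 pin has drifted." >&2
    return 1
  fi
}
```

For example, `check_cuda_pin "13.0" "$(nvcc --version)"` would pass on the CUDA 13 base image and abort setup early on a cu128 wheel, instead of failing later inside ninja with a cryptic cusparse.h error.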
Contributor
Code Review
This pull request simplifies the environment setup in .buildkite/k3_harness/setup-env.sh by pinning vLLM and PyTorch installations to the cu130 index. This change replaces complex HTML scraping and dynamic cuda-compiler installation with a direct installation and a Python-based sanity check to ensure the PyTorch CUDA version matches the system nvcc. I have no feedback to provide.
deng451e
approved these changes
Apr 16, 2026
What this PR does / why we need it:
Every k3 build on dev since 10fd636 (2026-04-16 09:55 UTC, merge of #3055) has failed at the LMCache editable install with:

RuntimeError: The detected CUDA version (13.0) mismatches the version that was used to compile PyTorch (12.8).

Root cause is a two-hop CUDA-version mismatch that the alignment path added in #3055 can't paper over:

- The base image was bumped to nvidia/cuda:13.0.2-devel-ubuntu24.04 in [Chore][CI] Upgrade CI base image to CUDA 13.0 #2981 (system nvcc = 13).
- setup-env.sh installs vLLM from the generic nightly index (wheels.vllm.ai/nightly/vllm/). That index serves both cu128 and cu130 torch wheels on different days; today it's publishing cu128.
- When torch.version.cuda = "12.8" meets system nvcc 13, torch.utils.cpp_extension._check_cuda_version aborts with "The detected CUDA version (13.0) mismatches the version that was used to compile PyTorch (12.8)."
- #3055's fallback then apt-installs cuda-compiler-12-8 and sets CUDA_HOME=/usr/local/cuda-12.8. But cuda-compiler-*-* only ships nvcc; it does not ship libcusparse-dev, libcublas-dev, etc., which torch/include/ATen/cuda/CUDAContextLight.h pulls in via #include <cusparse.h>. So the build still fails, just later.

Extending the apt install list is a band-aid; the real fix is to stop resolving torch non-deterministically. This PR pins the install to vLLM's per-CUDA-major cu130 sub-index per the official vllm.ai install instructions:
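The pinned install could look roughly like this, assuming uv and the per-CUDA sub-index layout implied by the PR text; the exact URLs and flags in the merged setup-env.sh may differ:

```shell
# Hypothetical sketch: pin vLLM nightly and torch to the cu130 sub-indexes so
# the resolver can no longer drift between cu128 and cu130 wheels. The index
# URLs below are assumptions, not copied from the merged script.
uv pip install vllm --pre \
  --extra-index-url https://wheels.vllm.ai/nightly/cu130 \
  --extra-index-url https://download.pytorch.org/whl/cu130 \
  --index-strategy unsafe-best-match
```

With the sub-index pinned, every resolved torch wheel carries the +cu130 local version tag, which is what makes the later major-version sanity check a reliable tripwire.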
After the pin, torch.version.cuda is deterministically "13.0" and matches the base image's system nvcc 13. _check_cuda_version passes with the system toolchain: no CUDA_HOME override, no runtime apt install, no HTML scraping.

What gets removed:
- The HTML scraping of wheels.vllm.ai/nightly/vllm/ to locate a wheel URL (no longer needed now that --extra-index-url + --index-strategy unsafe-best-match resolves properly).
- The ":wrench: Aligning nvcc with torch's reported CUDA version" block from [Hotfix][CI] Unblock CI: pandas auto-heal + CUDA 12 build toolchain #3055 (~40 lines: torch.version.cuda detection, nvcc --version parsing, apt cuda-compiler-*-* install, CUDA_HOME switcheroo).

What gets kept:
- The pandas auto-heal loop around import pandas.
- The wait_for_server log-dump improvement from [Hotfix][CI] Unblock CI: pandas auto-heal + CUDA 12 build toolchain #3055 (untouched by this PR).
- The [runai,tensorizer,flashinfer] extras.

What gets added:
- A small Python sanity check that compares torch.version.cuda against nvcc --version and fails fast with a clear message if the cu130 pin ever drifts. The previous failure surfaced deep inside ninja as cusparse.h: No such file or directory, which took a while to track back to the real cause.

Special notes for your reviewers:
- The cu130 tag could later be parameterized (e.g. CI_CUDA_TAG=${CI_CUDA_TAG:-cu130}). Not doing that now to keep the PR focused.
- Verified with uv pip install --dry-run against the cu130 indexes; the resolver picks vllm-*.dev*+cu130 and torch-*+cu130 as expected.

If applicable:
Refs #3055, #2981.
Note
Low Risk
CI-only changes that primarily adjust dependency resolution and add a sanity check; main risk is install breakage if the cu130 nightly index is unavailable or changes.

Overview
Fixes flaky k3 CI environment setup by pinning the uv pip install of vLLM nightlies to the CUDA 13.0 (cu130) indexes (vLLM + PyTorch), avoiding non-deterministic resolver picks that could pull a cu128 torch wheel.

Removes the prior HTML wheel scraping and the runtime apt-based nvcc alignment / CUDA_HOME override logic, and replaces it with a small fail-fast check that verifies torch.version.cuda matches the system nvcc major version before installing LMCache from source.

Reviewed by Cursor Bugbot for commit 86abd35.