
[Hotfix][CI] Pin vLLM nightly to cu130 index to match CUDA 13 base image #3061

Merged
sammshen merged 1 commit into LMCache:dev from ApostaC:fix/ci-pin-vllm-nightly-cu130
Apr 16, 2026

Conversation

@ApostaC ApostaC commented Apr 16, 2026

What this PR does / why we need it:

Every k3 build on dev since 10fd636 (2026-04-16 09:55 UTC, merge of #3055) has failed at the LMCache editable install with:

/opt/venv/lib/python3.12/site-packages/torch/include/ATen/cuda/CUDAContextLight.h:10:10:
fatal error: cusparse.h: No such file or directory

Root cause is a two-hop CUDA-version mismatch that the alignment path added in #3055 can't paper over:

  1. The k3 base image was bumped to nvidia/cuda:13.0.2-devel-ubuntu24.04 in [Chore][CI] Upgrade CI base image to CUDA 13.0 #2981 (system nvcc = 13).
  2. setup-env.sh installs vLLM from the generic nightly index (wheels.vllm.ai/nightly/vllm/). That index serves both cu128 and cu130 torch wheels on different days — today it's publishing cu128.
  3. When torch.version.cuda="12.8" meets system nvcc 13, torch.utils.cpp_extension._check_cuda_version aborts with The detected CUDA version (13.0) mismatches the version that was used to compile PyTorch (12.8).
  4. The [Hotfix][CI] Unblock CI: pandas auto-heal + CUDA 12 build toolchain #3055 fallback apt-installs cuda-compiler-12-8 and sets CUDA_HOME=/usr/local/cuda-12.8. But cuda-compiler-*-* only ships nvcc — it does not ship libcusparse-dev, libcublas-dev, etc., which torch/include/ATen/cuda/CUDAContextLight.h pulls in via #include <cusparse.h>. So the build still fails, just later.
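The version gate in step 3 can be illustrated with a minimal sketch. This is not torch's actual implementation of `torch.utils.cpp_extension._check_cuda_version`, just the shape of the major-version comparison that aborts the build when a cu128 wheel meets a CUDA 13 toolchain:

```python
# Illustrative sketch (NOT torch's real code) of the fail-fast comparison
# that torch.utils.cpp_extension._check_cuda_version performs between the
# CUDA version torch was compiled with and the system nvcc version.

def check_cuda_version(torch_cuda: str, nvcc_cuda: str) -> None:
    """Raise if the CUDA major versions differ, mimicking torch's behavior."""
    if int(torch_cuda.split(".")[0]) != int(nvcc_cuda.split(".")[0]):
        raise RuntimeError(
            f"The detected CUDA version ({nvcc_cuda}) mismatches the "
            f"version that was used to compile PyTorch ({torch_cuda})."
        )

check_cuda_version("13.0", "13.0")  # cu130 wheel + CUDA 13 toolchain: passes
try:
    check_cuda_version("12.8", "13.0")  # cu128 wheel + CUDA 13 toolchain
except RuntimeError as e:
    print(e)
```

Note that torch only gates on the compiler version here; it cannot know whether the math-library dev headers (step 4) are present, which is why the #3055 workaround failed later instead of at this check.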

Extending the apt install list is a band-aid; the real fix is to stop resolving torch non-deterministically. This PR pins the install to vLLM's per-CUDA-major cu130 sub-index per the official vllm.ai install instructions:

uv pip install -U "vllm[...]" --pre \
    --extra-index-url https://wheels.vllm.ai/nightly/cu130 \
    --extra-index-url https://download.pytorch.org/whl/cu130 \
    --index-strategy unsafe-best-match

torch.version.cuda is now deterministically "13.0" and matches the base image's system nvcc 13. _check_cuda_version passes with the system toolchain — no CUDA_HOME override, no runtime apt install, no HTML scraping.

What gets removed:

  • HTML-scraping of wheels.vllm.ai/nightly/vllm/ to locate a wheel URL (no longer needed now that --extra-index-url + --index-strategy unsafe-best-match resolves properly).
  • The entire :wrench: Aligning nvcc with torch's reported CUDA version block from [Hotfix][CI] Unblock CI: pandas auto-heal + CUDA 12 build toolchain #3055 (~40 lines: torch.version.cuda detection, nvcc --version parsing, apt cuda-compiler-*-* install, CUDA_HOME switcheroo).

What gets kept:

  • The pandas auto-heal loop from #3055 (that problem is independent of this one).

What gets added:

  • A small Python sanity check that compares torch.version.cuda against nvcc --version and fails fast with a clear message if the cu130 pin ever drifts. The previous failure surfaced deep inside ninja as cusparse.h: No such file or directory, which took a while to track back to the real cause.
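One possible shape of such a fail-fast check (the helper names below are illustrative; the script actually committed in this PR may differ):

```python
# Illustrative sketch of a CUDA-pin sanity check: parse the nvcc release
# version and compare its major against torch.version.cuda, failing with a
# clear message instead of letting ninja die later on a missing header.
import re


def nvcc_release(nvcc_output: str) -> str:
    """Extract 'X.Y' from `nvcc --version` output (e.g. 'release 13.0, V13.0.x')."""
    m = re.search(r"release (\d+)\.(\d+)", nvcc_output)
    if m is None:
        raise RuntimeError("could not parse `nvcc --version` output")
    return f"{m.group(1)}.{m.group(2)}"


def assert_majors_match(torch_cuda: str, nvcc_cuda: str) -> None:
    """Fail fast if the cu130 pin ever drifts back to a mismatched wheel."""
    if torch_cuda.split(".")[0] != nvcc_cuda.split(".")[0]:
        raise SystemExit(
            f"CUDA pin drifted: torch.version.cuda={torch_cuda} but system "
            f"nvcc is {nvcc_cuda}; check the cu130 index pin in setup-env.sh"
        )


# In CI this would run against the real environment, e.g.:
#   import subprocess, torch
#   out = subprocess.check_output(["nvcc", "--version"], text=True)
#   assert_majors_match(torch.version.cuda, nvcc_release(out))
```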

Special notes for your reviewers:

  • If vLLM's cu130 nightly index is briefly empty on some day (e.g. their nightly CI hiccuped), the install will fail loudly here rather than silently drifting into a broken state elsewhere — which I think is the better failure mode.
  • If/when we need to support a non-Blackwell, pre-CUDA-13 target again, the pin can be parameterized by an env var (e.g. CI_CUDA_TAG=${CI_CUDA_TAG:-cu130}). Not doing that now to keep the PR focused.
  • Tested the install command structure locally by dry-running uv pip install --dry-run against the cu130 indexes; the resolver picks vllm-*.dev*+cu130 and torch-*+cu130 as expected.
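For reference, the env-var parameterization floated above could look like the following sketch (hypothetical, deliberately not part of this PR; the `vllm` extras from the real command are elided here):

```shell
# Hypothetical future parameterization: let CI pick the CUDA tag,
# defaulting to cu130, instead of hard-coding it in the install line.
CI_CUDA_TAG="${CI_CUDA_TAG:-cu130}"
VLLM_INDEX="https://wheels.vllm.ai/nightly/${CI_CUDA_TAG}"
TORCH_INDEX="https://download.pytorch.org/whl/${CI_CUDA_TAG}"

# Echoed here for illustration; in CI this would execute for real:
echo uv pip install -U "vllm" --pre \
    --extra-index-url "${VLLM_INDEX}" \
    --extra-index-url "${TORCH_INDEX}" \
    --index-strategy unsafe-best-match
```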

If applicable:

  • this PR contains user-facing changes - docs added
  • this PR contains unit tests

Refs #3055, #2981.


Note

Low Risk
CI-only changes that primarily adjust dependency resolution and add a sanity check; main risk is install breakage if the cu130 nightly index is unavailable or changes.

Overview
Fixes flaky k3 CI environment setup by pinning uv pip install of vLLM nightlies to the CUDA 13.0 (cu130) indexes (vLLM + PyTorch), avoiding non-deterministic resolver picks that could pull a cu128 torch wheel.

Removes the prior HTML wheel scraping and the runtime apt-based nvcc alignment/CUDA_HOME override logic, and replaces it with a small fail-fast check that verifies torch.version.cuda matches the system nvcc major version before installing LMCache from source.

Reviewed by Cursor Bugbot for commit 86abd35.

The base image was bumped to nvidia/cuda:13.0.2-devel-ubuntu24.04 in
LMCache#2981, but setup-env.sh installs vLLM from the generic nightly index
(wheels.vllm.ai/nightly/vllm/), which resolves non-deterministically to
either a cu128 or a cu130 torch wheel. When the resolver picks cu128,
torch.utils.cpp_extension._check_cuda_version aborts the LMCache
editable install with:

    RuntimeError: The detected CUDA version (13.0) mismatches the
    version that was used to compile PyTorch (12.8).

LMCache#3055 tried to paper over this at runtime by apt-installing
cuda-compiler-12-8 on mismatch and pointing CUDA_HOME at
/usr/local/cuda-12.8, but cuda-compiler-*-* only ships nvcc -- not the
CUDA math-library dev headers (cusparse.h, cublas.h, etc.). The build
then fails deeper with a cryptic:

    fatal error: cusparse.h: No such file or directory

Every k3 build on dev since 10fd636 (2026-04-16 09:55 UTC) has failed
this way because vLLM nightly happens to be publishing cu128 torch
today. Extending the apt install list is a band-aid; the real fix is
to stop resolving torch non-deterministically.

Pin the install to vLLM's per-CUDA-major cu130 sub-index per the
official vllm.ai install instructions. torch.version.cuda is now
deterministically "13.0" and matches system nvcc 13, so
_check_cuda_version passes with the system toolchain -- no CUDA_HOME
override, no apt install, no HTML scraping.

Also drop the HTML index scraper (no longer needed with the proper
--extra-index-url flags) and replace the runtime alignment block with
a small Python sanity check that fails with a clear message if the
pin ever drifts again, so future breakage surfaces here instead of
inside ninja.

- Unblocks every k3 pipeline on dev.
- Removes ~40 lines of CUDA version-alignment logic.
- Keeps the pandas auto-heal loop from LMCache#3055 (that problem is
  independent of this one).

Signed-off-by: Yihua Cheng <yihua98@uchicago.edu>
@ApostaC ApostaC added the full Run comprehensive tests on this PR label Apr 16, 2026

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request simplifies the environment setup in .buildkite/k3_harness/setup-env.sh by pinning vLLM and PyTorch installations to the cu130 index. This change replaces complex HTML scraping and dynamic cuda-compiler installation with a direct installation and a Python-based sanity check to ensure the PyTorch CUDA version matches the system nvcc. I have no feedback to provide.


@sammshen sammshen left a comment


LGTM!

@sammshen sammshen enabled auto-merge (squash) April 16, 2026 17:31
@sammshen sammshen merged commit 43b62fe into LMCache:dev Apr 16, 2026
33 of 34 checks passed