
Revert "[Improvement] Persist CUDA compat libraries paths to prevent reset on apt-get (#30784)"#31

Merged
wangshangsam merged 1 commit into mlperf-inf-mm-q3vl-v6.0 from wangshangsam/mlperf-inf-mm-q3vl-v6.0/revert-2a60ac9
Jan 25, 2026

Conversation


@wangshangsam wangshangsam commented Jan 24, 2026

This reverts commit 2a60ac9.

Fixing vllm-project#32373 in the mlperf-inf-mm-q3vl-v6.0 branch.
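For context, the reverted change's title suggests it registered the CUDA compat library directory with the dynamic linker so that ldconfig runs triggered by apt-get would not drop it from the search path. A minimal sketch of that general pattern is below; the drop-in file name and compat path are illustrative assumptions, not taken from the actual diff, and a temp directory stands in for /etc/ld.so.conf.d so the sketch runs without root:

```shell
# Sketch only: persist a CUDA compat path via an ld.so.conf.d drop-in,
# rather than relying on LD_LIBRARY_PATH, which apt-get/ldconfig hooks
# do not consult. Paths are hypothetical examples.
CONF_DIR="$(mktemp -d)"               # stands in for /etc/ld.so.conf.d
COMPAT_DIR="/usr/local/cuda/compat"   # typical CUDA compat location (assumed)
echo "${COMPAT_DIR}" > "${CONF_DIR}/00-cuda-compat.conf"
# In a real image, `ldconfig` would then be re-run to pick up the entry.
cat "${CONF_DIR}/00-cuda-compat.conf"
```

The advantage of a conf-file drop-in over an environment variable is that it survives any later `ldconfig` invocation, including the ones package hooks run during `apt-get install`.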

Purpose

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting a before/after results comparison, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@wangshangsam wangshangsam self-assigned this Jan 24, 2026
@wangshangsam wangshangsam requested a review from zhandaz January 24, 2026 23:40
@wangshangsam wangshangsam added the bug Something isn't working label Jan 24, 2026

@zhandaz zhandaz left a comment


LGTM

@wangshangsam wangshangsam merged commit 9a4fc64 into mlperf-inf-mm-q3vl-v6.0 Jan 25, 2026
4 checks passed
zhandaz pushed a commit that referenced this pull request Jan 25, 2026
wangshangsam added a commit that referenced this pull request Jan 25, 2026
* [Docker][Dev] Fix libnccl-dev version for the CUDA 13.0.1 devel image

[Docker][Dev] Fix libnccl-dev version conflict for the CUDA 13.0.1 devel image

Further update

* feat: Support FA4 for mm-encoder-attn-backend for qwen models

* feat: Kernel warmup for vit fa4

* fix: Fix some minor conflicts due to the introduction of flash_attn.cute

* Revert "[Docker][Dev] Fix libnccl-dev version for the CUDA 13.0.1 devel image"

This reverts commit ab76b28.

* chore: Update requirements and revert README.md

* chore: Install git for flash_attn cute installation

* lint: Fix linting

* Revert "[Improvement] Persist CUDA compat libraries paths to prevent reset on `apt-get` (vllm-project#30784)" (#31)

This reverts commit 2a60ac9.

---------

Co-authored-by: Shang Wang <shangw@nvidia.com>
zhandaz pushed a commit that referenced this pull request Feb 4, 2026
zhandaz added a commit that referenced this pull request Feb 4, 2026

Labels

bug Something isn't working


2 participants