
[UX] Put CUDA attention backend selection log into one line #29337

Merged
vllm-bot merged 1 commit into vllm-project:main from neuralmagic:one-cuda-attn-log
Nov 25, 2025
Conversation

@mgoin
Member

@mgoin mgoin commented Nov 24, 2025

Purpose

Before:

(EngineCore_DP0 pid=3645964) INFO 11-24 20:03:41 [cuda.py:410] Valid backends: ['FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION']
(EngineCore_DP0 pid=3645964) INFO 11-24 20:03:41 [cuda.py:419] Using FLASH_ATTN backend.

After:

(EngineCore_DP0 pid=3642945) INFO 11-24 20:03:06 [cuda.py:416] Using FLASH_ATTN attention backend out of potential backends: ['FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION']
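
Roughly, the change collapses the two separate logger.info calls into one. A minimal sketch of the idea (not the exact cuda.py code; the variable names follow the diff shown further down, and the backend list here is just example data):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("vllm.example")

# Example data only; in vllm these come from the backend-selection logic in cuda.py.
valid_backends_priorities = [("FLASH_ATTN", 0), ("FLASHINFER", 1),
                             ("TRITON_ATTN", 2), ("FLEX_ATTENTION", 3)]
selected_index = 0

# Before: two separate log lines.
# logger.info("Valid backends: %s", [name for name, _ in valid_backends_priorities])
# logger.info("Using %s backend.", valid_backends_priorities[selected_index][0])

# After: one combined log line.
selected_backend = valid_backends_priorities[selected_index][0]
logger.info(
    "Using %s attention backend out of potential backends: %s",
    selected_backend,
    [name for name, _ in valid_backends_priorities],
)
```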

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: mgoin <mgoin64@gmail.com>
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the logging for CUDA attention backend selection by consolidating two log messages into a single, more informative line. This is a good user experience improvement as it makes the logs more concise and easier to parse. The implementation is correct and looks good.

      selected_backend = valid_backends_priorities[selected_index][0]
      logger.info(
-         "Using %s backend.",
+         "Using %s attention backend out of potential backends: %s",
Collaborator


Can we only do this for the first rank?

Member Author


Not if we keep the list. scope="local" only works with the _once logging methods, and if I try info_once I get TypeError: unhashable type: 'list'.
We could extend scope support to the non-_once methods, but currently we aren't overriding those at all in vllm/logger.py.
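
For context on that error: "log once" helpers typically deduplicate repeated messages by caching on their arguments, so every argument has to be hashable, which would explain the failure with a list. A toy stand-in (this is not the actual vllm/logger.py implementation, just an illustration of the failure mode and the tuple workaround):

```python
import functools

# Toy "log once" helper: deduplication via functools.lru_cache means every
# argument must be hashable, which is what the TypeError points at.
@functools.lru_cache(maxsize=None)
def info_once(msg: str, *args) -> None:
    print(msg % args)

backends = ["FLASH_ATTN", "FLASHINFER", "TRITON_ATTN", "FLEX_ATTENTION"]

# info_once("Potential backends: %s", backends)       # TypeError: unhashable type: 'list'
info_once("Potential backends: %s", tuple(backends))  # tuples are hashable, so this works
```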

@mgoin mgoin added the ready (ONLY add when PR is ready to merge/full CI is needed) label on Nov 24, 2025
@vllm-bot vllm-bot merged commit dbc3d99 into vllm-project:main Nov 25, 2025
48 checks passed
@github-project-automation github-project-automation bot moved this to Done in NVIDIA Nov 25, 2025
devpatelio pushed a commit to SumanthRH/vllm that referenced this pull request Nov 29, 2025
kitaekatt pushed a commit to kitaekatt/vllm that referenced this pull request Dec 1, 2025
dsuhinin pushed a commit to dsuhinin/vllm that referenced this pull request Jan 21, 2026
[UX] Put CUDA attention backend selection log into one line (vllm-project#29337)

Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: dsuhinin <suhinin.dmitriy@gmail.com>

Labels

nvidia, ready (ONLY add when PR is ready to merge/full CI is needed)

Projects

Status: Done


3 participants