[FIX_FOR_VLLM_LATEST] Fix crash after the sampled_token_ids type change#575

Merged
iboiko-habana merged 1 commit into vllm-project:main from pawel-olejniczak:polejnix/fix_crash_after_changing_list_int_to_list_nd_array
Nov 17, 2025

Conversation

@pawel-olejniczak (Contributor)

sampled_token_ids was changed from list[list[int]] to list[np.ndarray]:
vllm-project/vllm#26368

@pawel-olejniczak force-pushed the polejnix/fix_crash_after_changing_list_int_to_list_nd_array branch from bb404c8 to eaf744a on November 17, 2025 at 11:31
@pawel-olejniczak changed the title from "[FIX_FOR_VLLM_LATEST] Fix crash after the sampled_token_ids type change" to "[FIX_FOR_VLLM_LATEST] [WIP] Fix crash after the sampled_token_ids type change" on November 17, 2025
@pawel-olejniczak force-pushed the polejnix/fix_crash_after_changing_list_int_to_list_nd_array branch from eaf744a to 8bcb291 on November 17, 2025 at 12:54
@iboiko-habana (Collaborator) left a comment


Please change "not sampled_ids" to "sampled_ids is None" in vllm_gaudi/v1/worker/hpu_model_runner.py, line 3362, as was done for the GPU path:

for req_idx, sampled_ids in enumerate(postprocessed_sampled_token_ids[:num_reqs]):
    if not sampled_ids:
        continue
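For context, a minimal sketch (variable names hypothetical, not taken from the PR) of why the original truthiness check crashes once sampled_token_ids holds NumPy arrays instead of plain lists, and why the reviewer's `is None` check is the safe replacement:

```python
import numpy as np

# Old layout: list[list[int]] -- "not sampled_ids" safely skips empty lists.
old_batch = [[101, 102], []]
kept = [ids for ids in old_batch if ids]  # keeps only the non-empty list

# New layout: list[np.ndarray] -- truth-testing an array with more than one
# element raises ValueError ("The truth value of an array ... is ambiguous").
new_batch = [np.array([101, 102]), np.array([103])]
crashed = False
try:
    for sampled_ids in new_batch:
        if not sampled_ids:  # raises on the two-element array
            continue
except ValueError:
    crashed = True

# The suggested fix checks identity, which works for both arrays and None:
for sampled_ids in new_batch + [None]:
    if sampled_ids is None:
        continue
```

Note that `not sampled_ids` happens to work on single-element arrays, which is why the bug only surfaces for requests that sample more than one token at a time.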

…list[np.ndarray]

Signed-off-by: Paweł Olejniczak <polejniczakx@habana.ai>
@pawel-olejniczak force-pushed the polejnix/fix_crash_after_changing_list_int_to_list_nd_array branch from 8bcb291 to a2dbdf5 on November 17, 2025 at 13:33
@pawel-olejniczak (Contributor, Author)

Please change "not sampled_ids" to "sampled_ids is None" in vllm_gaudi/v1/worker/hpu_model_runner.py, line 3362, as was done for the GPU path:

for req_idx, sampled_ids in enumerate(postprocessed_sampled_token_ids[:num_reqs]):
    if not sampled_ids:
        continue

@iboiko-habana Done

@pawel-olejniczak changed the title from "[FIX_FOR_VLLM_LATEST] [WIP] Fix crash after the sampled_token_ids type change" to "[FIX_FOR_VLLM_LATEST] Fix crash after the sampled_token_ids type change" on November 17, 2025
@github-actions
✅ CI Passed

All checks passed successfully against the following vllm commit:
1b82fb0ad3cea2e1a31da4fa20dd736a8a181089

@iboiko-habana iboiko-habana merged commit 9de420b into vllm-project:main Nov 17, 2025
38 checks passed
afierka-intel pushed a commit to afierka-intel/vllm-gaudi that referenced this pull request Nov 18, 2025
…ge (vllm-project#575)

sampled_token_ids was changed from list[list[int]] to list[np.ndarray]:
vllm-project/vllm#26368

Signed-off-by: Paweł Olejniczak <polejniczakx@habana.ai>
HolyFalafel pushed a commit to HolyFalafel/vllm-gaudi that referenced this pull request Feb 1, 2026
…ge (vllm-project#575)

sampled_token_ids was changed from list[list[int]] to list[np.ndarray]:
vllm-project/vllm#26368

Signed-off-by: Paweł Olejniczak <polejniczakx@habana.ai>