Avoid bytecode hook and simplify TorchCompileWrapperWithCustomDispatcher #25110
vllm-bot merged 1 commit into vllm-project:main
Conversation
Code Review
This pull request refactors the torch.compile wrapper to simplify its implementation. It removes the complex bytecode hooking mechanism in favor of the guard_filter_fn option available in newer PyTorch versions to drop guards. The new TorchCompileGuardsStripWrapper is much cleaner. The changes also include updating the support_torch_compile decorator and related tests to use the new wrapper. The overall change improves code clarity and maintainability. I've found one critical issue in the test suite that needs to be addressed.
This pull request has merge conflicts that must be resolved before it can be merged.

@bigPYJ1151 is planning to fix the CPU torch issue in vLLM, so once that's done we can upgrade to torch==2.8 everywhere and merge this PR.
ProExpertProg
left a comment
Looks good, can you reformat?
This pull request has merge conflicts that must be resolved before it can be merged.
…patcher with TorchCompileGuardsStripWrapper Signed-off-by: Laith Sakka <lsakka@meta.com>
rebase again

@ProExpertProg can you help me land it before another rebase is needed :)
FYI the docs build in this PR was failing, so now it's failing on … We're working on a fix.

Sorry about that. I looked at the error message and it looked unrelated; I should have checked nightly.

Fixed by #28772

It was an issue with mkdocstrings, which I've reported to them. For future reference, the docs are built for every commit on …
compiled_model.compiled = False
TorchCompileWithNoGuardsWrapper.__init__(compiled_model)
This change broke TPU on v0.11.2: calling __init__ here results in an attempt to construct a VllmConfig when none is available, producing the following traceback:
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] WorkerProc hit an exception.
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] Traceback (most recent call last):
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] File "/home/runner/actions-runner/_work/nm-cicd/nm-cicd/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 810, in worker_busy_loop
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] output = func(*args, **kwargs)
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] ^^^^^^^^^^^^^^^^^^^^^
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] File "/home/runner/actions-runner/_work/nm-cicd/nm-cicd/.venv/lib/python3.12/site-packages/vllm/v1/worker/tpu_worker.py", line 214, in determine_available_memory
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] self.model_runner.reset_dynamo_cache()
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] File "/home/runner/actions-runner/_work/nm-cicd/nm-cicd/.venv/lib/python3.12/site-packages/vllm/v1/worker/tpu_model_runner.py", line 1907, in reset_dynamo_cache
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] TorchCompileWithNoGuardsWrapper.__init__(compiled_model)
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] File "/home/runner/actions-runner/_work/nm-cicd/nm-cicd/.venv/lib/python3.12/site-packages/vllm/compilation/wrapper.py", line 91, in __init__
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] vllm_config = get_current_vllm_config()
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] ^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] File "/home/runner/actions-runner/_work/nm-cicd/nm-cicd/.venv/lib/python3.12/site-packages/vllm/config/vllm.py", line 1136, in get_current_vllm_config
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] return VllmConfig()
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] ^^^^^^^^^^^^
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] File "/home/runner/actions-runner/_work/nm-cicd/nm-cicd/.venv/lib/python3.12/site-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] File "/home/runner/actions-runner/_work/nm-cicd/nm-cicd/.venv/lib/python3.12/site-packages/vllm/config/vllm.py", line 609, in __post_init__
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] current_platform.check_and_update_config(self)
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] File "/home/runner/actions-runner/_work/nm-cicd/nm-cicd/.venv/lib/python3.12/site-packages/vllm/platforms/tpu.py", line 165, in check_and_update_config
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] cache_config.block_size = PallasAttentionBackend.get_page_size(vllm_config) # type: ignore[assignment]
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] File "/home/runner/actions-runner/_work/nm-cicd/nm-cicd/.venv/lib/python3.12/site-packages/vllm/v1/attention/backends/pallas.py", line 162, in get_page_size
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] if vllm_config.model_config.max_model_len > 8192:
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP2 pid=3714832) ERROR 12-04 10:29:38 [multiproc_executor.py:815] AttributeError: 'NoneType' object has no attribute 'max_model_len'
cc @mgoin
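The failure mode above can be modeled without vLLM at all. The sketch below is my own simplified toy (the names mirror vLLM's pattern but this is not vLLM code): a module-level "current config" that a context manager installs, plus a getter that falls back to constructing a fresh default when no context is active. In vLLM that fallback construction (`return VllmConfig()`) is exactly what blows up on TPU, so any `__init__` that reads the ambient config must run inside the context.

```python
from contextlib import contextmanager

# Toy model of an ambient-config mechanism; names are illustrative only.
_current_config = None

@contextmanager
def set_current_config(cfg):
    """Install cfg as the ambient config for the duration of the block."""
    global _current_config
    prev, _current_config = _current_config, cfg
    try:
        yield
    finally:
        _current_config = prev

def get_current_config():
    """Return the ambient config, falling back to a default when none is set.
    The fallback construction is the dangerous path: in vLLM it re-runs
    platform checks that require a fully populated config."""
    if _current_config is None:
        return {"default": True}  # stands in for VllmConfig()
    return _current_config

# Outside the context the fallback is used:
outside = get_current_config()
# Inside the context the real config is returned:
with set_current_config({"model": "some-model"}):
    inside = get_current_config()
```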
Two main changes:

Add an option to not use the bytecode hook: torch 2.8 has a way to drop guards without the hack. We will switch over completely in a different PR, after internal validation of perf once this lands, since @zou3519 has concerns about the perf (though I highly doubt there will be issues).
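As a sketch of the guard-dropping approach (my reading of PyTorch 2.8's guard_filter_fn option; treat the exact wiring as an assumption): the filter receives the guard entries Dynamo wants to install and returns a parallel list of booleans, so dropping every guard is just a constant-False filter.

```python
def drop_all_guards(guard_entries):
    # Keep no guards: the compiled artifact is then reused unconditionally,
    # which is safe only because vLLM controls the input shapes itself.
    return [False for _ in guard_entries]

# With PyTorch >= 2.8 this would be wired up roughly as (assumption):
#   torch.compile(fn, options={"guard_filter_fn": drop_all_guards})
```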
TorchCompileWrapperWithCustomDispatcher has complexities that are not used in the code base. The current usage is to bypass guard evaluation, so I am introducing a much simpler TorchCompileWithNoGuardsWrapper. In a follow-up PR I will add a debug-mode option to keep DS guards and fail if they get violated.
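A torch-free sketch of what such a no-guards wrapper boils down to (illustrative only; the class and parameter names are my assumptions, not the PR's actual code): compile lazily on the first call, then always dispatch to the cached compiled callable, skipping guard re-evaluation entirely.

```python
class NoGuardsWrapperSketch:
    """Compile once, then bypass guard evaluation by always calling the
    cached compiled callable directly."""

    def __init__(self, fn, compile_fn):
        # compile_fn stands in for something like
        # functools.partial(torch.compile, options={"guard_filter_fn": ...})
        # in a real wrapper.
        self.fn = fn
        self.compile_fn = compile_fn
        self.compiled = False
        self.compiled_callable = None

    def __call__(self, *args, **kwargs):
        if not self.compiled:
            self.compiled_callable = self.compile_fn(self.fn)
            self.compiled = True
        return self.compiled_callable(*args, **kwargs)

# Usage with an identity "compiler" just to show the dispatch:
calls = []
def fake_compile(fn):
    calls.append("compiled")
    return fn

w = NoGuardsWrapperSketch(lambda x: x * 2, fake_compile)
results = [w(3), w(4)]  # compiles on the first call only, then reuses
```

Resetting such a wrapper (as the TPU code above attempts) only needs `self.compiled = False`; it should not require re-running the full `__init__`.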
Performance: how does this affect run-time? I ran the following benchmarks:

vllm bench latency \
    --model Qwen/Qwen2-1.5B-Instruct \
    --input-len 128 \
    --output-len 256 \
    --num-iters 50 \
    --dtype float16
after:
before:
vllm bench throughput --model Qwen/Qwen2-1.5B-Instruct --input-len 512 --output-len 128 --num-prompts 1000
after
before