[WIP][xpu][fix] test_aot_inductor: Make caching_allocator_disabled device-agnostic#179659

Draft
etaf wants to merge 6 commits into gh/etaf/218/base from gh/etaf/218/head

Conversation

Collaborator

@etaf etaf commented Apr 8, 2026

Stack from ghstack (oldest at bottom):

8 AOTInductor tests fail on XPU because `caching_allocator_disabled()`
(introduced by #177418) from `torch.cuda.memory` calls `torch._C._cuda_cudaCachingAllocator_is_enabled()`,
which doesn't exist in XPU-only builds.

Replace the direct import of `torch.cuda.caching_allocator_disabled` with a
device-aware wrapper that delegates to the CUDA implementation on CUDA builds
and acts as a no-op on other GPU backends (XPU, etc.).
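A minimal sketch of the device-aware wrapper described above. The helper name mirrors the CUDA one, but its exact placement in the test file and the availability guards are assumptions, not the actual patch:

```python
# Hedged sketch: delegate to the CUDA caching-allocator query when it
# exists, and act as a no-op (allocator treated as enabled) on non-CUDA
# GPU backends such as XPU, where the C binding is absent.
try:
    import torch
    _HAS_CUDA = torch.cuda.is_available()
except ImportError:  # allow running the sketch without torch installed
    torch = None
    _HAS_CUDA = False


def caching_allocator_disabled() -> bool:
    """Return True only when the CUDA caching allocator is disabled.

    On CUDA builds this delegates to torch.cuda.memory; on XPU-only
    builds torch._C._cuda_cudaCachingAllocator_is_enabled does not
    exist, so we report the allocator as enabled (i.e. not disabled).
    """
    if _HAS_CUDA and hasattr(torch.cuda.memory, "caching_allocator_disabled"):
        return bool(torch.cuda.memory.caching_allocator_disabled())
    return False
```

Tests that previously imported the helper from `torch.cuda` would call this wrapper instead, so the module imports cleanly on every backend.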

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben @jataylo

[ghstack-poisoned]
etaf added a commit that referenced this pull request Apr 8, 2026
…agnostic

8 AOTInductor tests fail on XPU because `caching_allocator_disabled()`
(introduced by #177418) from `torch.cuda.memory` calls `torch._C._cuda_cudaCachingAllocator_is_enabled()`,
which doesn't exist in XPU-only builds.

Replace the direct import of `torch.cuda.caching_allocator_disabled` with a
device-aware wrapper that delegates to the CUDA implementation on CUDA builds
and acts as a no-op on other GPU backends (XPU, etc.).


ghstack-source-id: 81004c9
Pull-Request: #179659

pytorch-bot bot commented Apr 8, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/179659

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 10 New Failures

As of commit 53e6b77 with merge base 2279133:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@etaf etaf requested review from EikanWang and gujinghui as code owners April 9, 2026 01:54
etaf added a commit that referenced this pull request Apr 9, 2026
…agnostic

ghstack-source-id: caf3d25
Pull-Request: #179659
@etaf etaf marked this pull request as draft April 9, 2026 01:56
@etaf etaf changed the title [xpu][fix] test_aot_inductor: Make caching_allocator_disabled device-agnostic [WIP][xpu][fix] test_aot_inductor: Make caching_allocator_disabled device-agnostic Apr 9, 2026
@etaf etaf added the ciflow/binaries_wheel Trigger binary build and upload jobs for wheel on the PR label Apr 9, 2026
etaf added a commit that referenced this pull request Apr 9, 2026
…agnostic

ghstack-source-id: 340ee3f
Pull-Request: #179659
etaf added a commit that referenced this pull request Apr 9, 2026
…agnostic

ghstack-source-id: f567e0a
Pull-Request: #179659
etaf added a commit that referenced this pull request Apr 12, 2026
…agnostic

ghstack-source-id: a0c1904
Pull-Request: #179659
@etaf etaf removed the ciflow/binaries_wheel Trigger binary build and upload jobs for wheel on the PR label Apr 12, 2026
etaf added a commit that referenced this pull request Apr 12, 2026
…agnostic

ghstack-source-id: a0c1904
Pull-Request: #179659
etaf added a commit that referenced this pull request Apr 12, 2026
…agnostic

ghstack-source-id: 5a4f221
Pull-Request: #179659