Add unified memory APIs for torch.accelerator #152932

guangyey wants to merge 60 commits into gh/guangyey/145/base
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/152932

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 3 Unrelated Failures

As of commit 63f2a36 with merge base 178515d:

NEW FAILURE - The following job has failed:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Starting merge as part of PR stack under #155200
@pytorchbot merge -i
Merge started. Your change will be merged while ignoring the following 7 checks:

- Check Labels / Check labels
- Check mergeability of ghstack PR / ghstack-mergeability-check
- pull / linux-jammy-py3_9-clang9-xla / test (xla, 1, 1, linux.12xlarge, unstable)
- xpu / linux-jammy-xpu-2025.1-py3.9 / test (default, 2, 6, linux.idc.xpu)
- xpu / linux-jammy-xpu-2025.1-py3.9 / test (default, 5, 6, linux.idc.xpu)
- rocm / linux-jammy-rocm-py3.10 / test (default, 2, 6, linux.rocm.gpu.2)
- rocm / linux-jammy-rocm-py3.10 / test (default, 1, 6, linux.rocm.gpu.2)

Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Starting merge as part of PR stack under #155200

Pull Request resolved: #155200
Approved by: https://github.com/albanD
ghstack dependencies: #138222, #152932
This reverts commit 15f1173. Reverted #152932 on behalf of https://github.com/jithunnair-amd because it broke ROCm periodic runs on MI300, e.g. https://github.com/pytorch/pytorch/actions/runs/16764977800/job/47470050573 (see comment on #138222)
Starting merge as part of PR stack under #155200

Pull Request resolved: #155200
Approved by: https://github.com/albanD
ghstack dependencies: #138222, #152932
# Motivation

The following APIs will be put under torch.accelerator:

- empty_cache
- max_memory_allocated
- max_memory_reserved
- memory_allocated
- memory_reserved
- memory_stats
- reset_accumulated_memory_stats
- reset_peak_memory_stats

Pull Request resolved: pytorch#152932
Approved by: https://github.com/albanD
ghstack dependencies: pytorch#138222
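For illustration, here is a minimal sketch of how these APIs could be used in a device-agnostic way. It assumes their signatures mirror the existing torch.cuda.* counterparts, and it uses torch.accelerator.is_available() and torch.accelerator.current_accelerator() from the earlier PRs in this ghstack series; none of this code is taken from the PR itself.

```python
import torch

# Minimal sketch (not from this PR): exercise the new unified memory APIs.
# Signatures are assumed to mirror the torch.cuda.* equivalents.
if torch.accelerator.is_available():
    device = torch.accelerator.current_accelerator()
    x = torch.randn(1024, 1024, device=device)

    # Query current and peak usage on the active accelerator backend.
    print("allocated:", torch.accelerator.memory_allocated())
    print("reserved: ", torch.accelerator.memory_reserved())
    print("peak:     ", torch.accelerator.max_memory_allocated())

    del x
    # Return cached, unused blocks to the device allocator.
    torch.accelerator.empty_cache()
```

The same script would then run unchanged on CUDA, XPU, or ROCm builds, which is the point of hoisting these calls out of the backend-specific torch.cuda namespace.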
Stack from ghstack (oldest at bottom):
Motivation

The following APIs will be put under torch.accelerator:

- empty_cache
- max_memory_allocated
- max_memory_reserved
- memory_allocated
- memory_reserved
- memory_stats
- reset_accumulated_memory_stats
- reset_peak_memory_stats
cc @albanD @EikanWang
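As a further hedged sketch, the stats-oriented APIs could back a small benchmarking helper. peak_memory_of is a hypothetical name, torch.accelerator.synchronize() is assumed available from an earlier PR in this series, and the "allocated_bytes.all.peak" key is assumed to follow the torch.cuda.memory_stats() schema:

```python
import torch

def peak_memory_of(fn) -> int:
    """Hypothetical helper: peak bytes allocated while running fn()."""
    # Clear recorded peaks so only fn()'s allocations are measured.
    torch.accelerator.reset_peak_memory_stats()
    fn()
    # Assumed available from an earlier PR in this ghstack series;
    # ensures pending kernels are counted before reading the stats.
    torch.accelerator.synchronize()
    # The key is assumed to match the torch.cuda.memory_stats() schema.
    return torch.accelerator.memory_stats()["allocated_bytes.all.peak"]

device = torch.accelerator.current_accelerator()
peak = peak_memory_of(lambda: torch.randn(4096, 4096, device=device))
print(f"peak allocation: {peak} bytes")
```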