[1/N] Port 3 distributed/_tools test cases to Intel GPU #159543
libohao1201 wants to merge 8 commits into pytorch:main from
Conversation
We could enable Intel GPU with the following methods while keeping the original code style as much as possible: 1. use "torch.accelerator.current_accelerator()" to determine the accelerator backend; 2. enable XPU for some test paths; 3. skip test cases that Intel GPU does not support.
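The first method above can be sketched as a small helper. This is a minimal illustration, not code from the PR; the `device_type` name is hypothetical, and the `getattr` guard is an assumption to keep it runnable on PyTorch builds that predate `torch.accelerator`:

```python
import torch

def device_type() -> str:
    """Return the active accelerator backend name ("cuda", "xpu", ...) or "cpu"."""
    # torch.accelerator only exists in recent PyTorch, so fall back to CPU
    # when the module is absent or no accelerator is available.
    accelerator = getattr(torch, "accelerator", None)
    if accelerator is not None and accelerator.is_available():
        # current_accelerator() returns a torch.device such as device("xpu")
        return accelerator.current_accelerator().type
    return "cpu"
```

Branching on this string instead of hard-coding `"cuda"` is what lets the same test body run on both NVIDIA and Intel GPUs.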
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/159543
Note: Links to docs will display an error until the docs builds have been completed. ✅ You can merge normally! (1 unrelated failure.) As of commit 4995377 with merge base fc80f68. FLAKY: the following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
  @skipIfTorchDynamo("https://github.com/pytorch/pytorch/issues/115653")
- @unittest.skipIf(not TEST_CUDA, "CUDA not available")
+ @unittest.skipIf(not torch.accelerator.is_available(), "Accelerator not available")
@unittest.skipIf(not TEST_CUDA and not TEST_XPU, "Neither CUDA nor XPU is available")
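The suggested combined guard can be sketched in a self-contained test. The `TEST_CUDA`/`TEST_XPU` flags mirror the helpers referenced in the diff, but their definitions here are assumptions, and `ExampleDistributedTest` is a hypothetical stand-in for the real test class:

```python
import unittest
import torch

# Assumed definitions of the flags the PR's test files use.
TEST_CUDA = torch.cuda.is_available()
TEST_XPU = hasattr(torch, "xpu") and torch.xpu.is_available()

class ExampleDistributedTest(unittest.TestCase):
    # Skip unless at least one of the two GPU backends is present.
    @unittest.skipIf(not TEST_CUDA and not TEST_XPU,
                     "Neither CUDA nor XPU is available")
    def test_needs_gpu(self):
        self.assertTrue(TEST_CUDA or TEST_XPU)
```

Compared with the accelerator-based check in the diff, this form is explicit about which backends are allowed, at the cost of not covering future accelerator types automatically.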
  ):
      debug = False
-     dev = torch.device(torch.cuda.current_device())
+     dev = torch.device(torch.accelerator.current_device_index())
torch.accelerator does not apply to CPU.
But I think the CPU case has already been skipped by @skip_if_lt_x_gpu(2).
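The guarded pattern discussed in this thread can be sketched as follows. This is an illustration, not the PR's code: the function name is hypothetical, and the `getattr` fallback is an assumption so the sketch also runs where `torch.accelerator` is missing:

```python
import torch

def current_accelerator_device() -> torch.device:
    """Backend-generic form of torch.device(torch.cuda.current_device())."""
    # torch.accelerator does not cover the CPU-only case, so guard the call;
    # in the real test @skip_if_lt_x_gpu(2) skips CPU runs anyway.
    acc = getattr(torch, "accelerator", None)
    if acc is not None and acc.is_available():
        # A bare integer index resolves against the current accelerator backend.
        return torch.device(acc.current_device_index())
    return torch.device("cpu")
```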
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here
Successfully rebased 70e2b22 to 63cf20f (Compare)
@libohao1201 please help fix the lint error.
…01/pytorch into libo/distributed_ut_p1
To add the ciflow label: this helps ensure we don't trigger CI on this PR until it is actually authorized to do so. Please ping one of the reviewers if you do not have access to approve and run workflows.
@pytorchbot merge
Merge failed. Reason: 1 mandatory check(s) failed. Dig deeper by viewing the failures on hud.
@libohao1201 you need to sign the EasyCLA before landing this PR.
Done.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
For [#114850](#114850), we will port distributed tests to Intel GPU. We could enable Intel GPU with the following methods while keeping the original code style as much as possible: 1. use "torch.accelerator.current_accelerator()" to determine the accelerator backend; 2. enable XPU for some test paths; 3. skip test cases that Intel GPU does not support.
Pull Request resolved: #159543
Approved by: https://github.com/guangyey, https://github.com/d4l3k
Co-authored-by: Yu, Guangye <106960996+guangyey@users.noreply.github.com>
For #114850, we will port distributed tests to Intel GPU.
We could enable Intel GPU with the following methods while keeping the original code style as much as possible:
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta