[Inductor UT][Fix XPU CI] Fix case failures introduced by community.#159759
etaf wants to merge 7 commits into gh/etaf/150/base.
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/159759.
Note: links to docs will display an error until the docs builds have completed.
❗ 1 active SEV. If your PR is affected, please view it below.
✅ You can merge normally (1 unrelated failure). As of commit 1e598db with merge base c03a734, the following job is marked UNSTABLE, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
… by community." cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben [ghstack-poisoned]
@etaf, regarding the two failures: we need to update torch-xpu-ops. It would be helpful to stack the torch-xpu-ops upgrade PR on top, so the CI signal of this PR turns green.
The above two Windows build failures will be fixed in the PyTorch XPU Windows build script by #159763; they are not related to torch-xpu-ops.
Reviewed diff excerpt:

```
self.assertTrue(sample["a_tensor"].is_pinned())
self.assertTrue(sample["another_dict"]["a_number"].is_pinned())
...
@skipIfXpu
```
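The `@skipIfXpu` decorator in the diff gates these pinned-memory assertions off on XPU. As a rough, illustrative sketch (not PyTorch's actual implementation), a backend-skip decorator like this can be built on plain `unittest`; the names and the explicit `current_backend` parameter here are assumptions for the example:

```python
import unittest


def skip_if_backend(current_backend: str, skipped_backend: str):
    # Illustrative stand-in for decorators like @skipIfXpu: the real one
    # inspects the active device; here the backend is passed in explicitly.
    return unittest.skipIf(
        current_backend == skipped_backend,
        f"test not supported on {skipped_backend}",
    )


class PinMemoryTest(unittest.TestCase):
    @skip_if_backend(current_backend="xpu", skipped_backend="xpu")
    def test_pin_memory(self):
        # On a CUDA build this would assert sample["a_tensor"].is_pinned().
        self.assertTrue(True)


# Run the suite programmatically so the skip is observable.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PinMemoryTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("skipped:", len(result.skipped))
```

On a simulated XPU backend the test is skipped rather than failed, which is the behavior the PR relies on.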
jansel left a comment: Test failures look related?
…community." Fixes #159631 cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben [ghstack-poisoned]
You should rebase this PR past the fix; we shouldn't break trunk.
Oh, I see what you mean; thanks for the reminder.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…ch.device("cuda")`. (#159926)
```
For example, detect the following situation:
>>>Lint for test/dynamo/test_modes.py:
Error (TEST_DEVICE_BIAS) [device-bias]
`@requires_gpu` function should not hardcode `with torch.device('cuda')`,
suggest to use torch.device(GPU_TYPE)
687 | flex_attention as flex_attention_eager,
688 | )
689 |
>>> 690 | with torch.device("cuda"):
691 | flex_attention = torch.compile(flex_attention_eager, dynamic=False)
692 |
693 | with self.assertRaisesRegex(
```
Pull Request resolved: #159926
Approved by: https://github.com/EikanWang, https://github.com/jansel
ghstack dependencies: #159759
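The TEST_DEVICE_BIAS check quoted above flags hardcoded CUDA devices inside tests that should be device-agnostic. As a minimal sketch of how such a check might work (a regex-based approximation, not the actual linter implementation; the function and pattern names are made up for illustration):

```python
import re

# Assumed pattern: flag torch.device("cuda") or torch.device("cuda:N")
# written with a string literal, but not torch.device(GPU_TYPE).
DEVICE_BIAS = re.compile(r"""torch\.device\(\s*['"]cuda(:\d+)?['"]\s*\)""")


def find_device_bias(source: str):
    """Return (line_number, line_text) pairs that hardcode the cuda device."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if DEVICE_BIAS.search(line):
            hits.append((lineno, line.strip()))
    return hits


snippet = '''\
with torch.device("cuda"):
    flex_attention = torch.compile(flex_attention_eager, dynamic=False)
with torch.device(GPU_TYPE):
    pass
'''
print(find_device_bias(snippet))
```

The `torch.device(GPU_TYPE)` form passes because the device is a variable, which is exactly the rewrite the lint message suggests.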
…ytorch#159759)
Fixes pytorch#159631
Pull Request resolved: pytorch#159759
Approved by: https://github.com/EikanWang, https://github.com/jansel
Stack from ghstack (oldest at bottom):
with torch.device("cuda"). #159926
Fixes #159631
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben @Lucaskabela