
[Inductor UT][Fix XPU CI] Fix case failures introduced by community.#159759

Closed
etaf wants to merge 7 commits into gh/etaf/150/base from gh/etaf/150/head

Conversation

@pytorch-bot

pytorch-bot bot commented Aug 4, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/159759

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ You can merge normally! (1 Unrelated Failure)

As of commit 1e598db with merge base c03a734:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

  • pull / linux-jammy-py3_9-clang9-xla / test (xla, 1, 1, linux.12xlarge, unstable) (gh) (#158876)
    /var/lib/jenkins/workspace/xla/torch_xla/csrc/runtime/BUILD:476:14: Compiling torch_xla/csrc/runtime/xla_util_test.cpp failed: (Exit 1): gcc failed: error executing CppCompile command (from target //torch_xla/csrc/runtime:xla_util_test) /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 229 arguments skipped)

This comment was automatically generated by Dr. CI and updates every 15 minutes.

etaf added a commit that referenced this pull request Aug 4, 2025
@etaf etaf added the ciflow/xpu (Run XPU CI tasks) label Aug 4, 2025
… by community."

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben

[ghstack-poisoned]
etaf added a commit that referenced this pull request Aug 4, 2025
@etaf etaf changed the title from "[Fix XPU][Inductor UT] Fix XPU CI case failures introduced by community." to "[Inductor UT] Fix XPU CI case failures introduced by community." Aug 4, 2025
@etaf etaf changed the title from "[Inductor UT] Fix XPU CI case failures introduced by community." to "[Inductor UT][XPU] Fix XPU CI case failures introduced by community." Aug 4, 2025
@etaf etaf requested a review from jansel August 4, 2025 16:05
@EikanWang
Collaborator

@etaf, regarding the two failures, we need to update torch-xpu-ops. So, it would be helpful to stack the torch-xpu-ops upgrade PR to make the CI signal of this PR green.

@etaf etaf added the ciflow/trunk (Trigger trunk jobs on your pull request) label Aug 5, 2025
@etaf
Collaborator Author

etaf commented Aug 5, 2025

@etaf, regarding the two failures, we need to update torch-xpu-ops. So, it would be helpful to stack the torch-xpu-ops upgrade PR to make the CI signal of this PR green.

The above two Windows build failures will be fixed in the PyTorch XPU Windows build script by #159763; they are not related to torch-xpu-ops.

@etaf etaf changed the title from "[Inductor UT][XPU] Fix XPU CI case failures introduced by community." to "[Inductor UT][Fix XPU CI] Fix case failures introduced by community." Aug 5, 2025
@etaf etaf requested a review from albanD August 5, 2025 02:32
self.assertTrue(sample["a_tensor"].is_pinned())
self.assertTrue(sample["another_dict"]["a_number"].is_pinned())

@skipIfXpu
Collaborator

This fix is already landed in #159811
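The assertions above check that pinning reaches tensors nested inside containers (`sample["another_dict"]["a_number"]`). As a minimal stdlib sketch of that recursive traversal, without importing torch and using a stand-in `Pinned` wrapper in place of real pinned tensors (hypothetical names, not PyTorch's implementation):

```python
# Hypothetical sketch: recursively apply a "pin" operation through nested
# containers, mirroring how pin_memory must reach every leaf tensor.
# Pinned is a stand-in marker class, not torch's real pinned tensor.

class Pinned:
    def __init__(self, value):
        self.value = value

    def is_pinned(self):
        # A real pinned tensor would report True here as well.
        return True


def pin_recursive(obj):
    """Return a copy of obj with every leaf wrapped as Pinned."""
    if isinstance(obj, dict):
        return {k: pin_recursive(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(pin_recursive(v) for v in obj)
    return Pinned(obj)  # leaf: wrap it


sample = {"a_tensor": 1.0, "another_dict": {"a_number": 2}}
pinned = pin_recursive(sample)
assert pinned["a_tensor"].is_pinned()
assert pinned["another_dict"]["a_number"].is_pinned()
```

The traversal only needs to special-case the container types; everything else is treated as a leaf, which is why a nested dict two levels deep works without extra code.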

Contributor

@jansel jansel left a comment

Test failures look related?

@etaf
Collaborator Author

etaf commented Aug 5, 2025

Test failures look related?

Hi @jansel: the above two Windows build failures are not related to this PR and will be fixed in the PyTorch XPU Windows build script by #159763.

…community."


Fixes #159631

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben

[ghstack-poisoned]
etaf added a commit that referenced this pull request Aug 5, 2025
@etaf etaf requested a review from jansel August 5, 2025 08:06
@albanD albanD removed their request for review August 5, 2025 18:28
etaf added a commit that referenced this pull request Aug 6, 2025
@jansel
Contributor

jansel commented Aug 6, 2025

You should rebase this PR past the fix; we shouldn't break trunk.

@etaf
Collaborator Author

etaf commented Aug 6, 2025

You should rebase this PR past the fix; we shouldn't break trunk.

Oh, I see what you mean. Thanks for the reminder.

@etaf
Collaborator Author

etaf commented Aug 6, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request Aug 8, 2025
…ch.device("cuda")`. (#159926)

```
For example, detect the following situation:
>>>Lint for test/dynamo/test_modes.py:
  Error (TEST_DEVICE_BIAS) [device-bias]
    `@requires_gpu` function should not hardcode `with torch.device('cuda')`,
    suggest to use torch.device(GPU_TYPE)

        687  |            flex_attention as flex_attention_eager,
        688  |        )
        689  |
    >>> 690  |        with torch.device("cuda"):
        691  |            flex_attention = torch.compile(flex_attention_eager, dynamic=False)
        692  |
        693  |            with self.assertRaisesRegex(
```

Pull Request resolved: #159926
Approved by: https://github.com/EikanWang, https://github.com/jansel
ghstack dependencies: #159759
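The lint message above flags tests that hardcode `torch.device("cuda")` instead of the device-agnostic `torch.device(GPU_TYPE)`. A minimal sketch of such a device-bias check, using only the Python standard library, can be written as a per-line regex scan; the lint name (TEST_DEVICE_BIAS) comes from the message, while the regex and function here are illustrative, not the actual linter implementation:

```python
import re

# Hypothetical device-bias check: flag lines that hardcode
# torch.device("cuda") or torch.device('cuda:0') in test code.
HARDCODED_CUDA = re.compile(r"""torch\.device\(\s*['"]cuda(:\d+)?['"]\s*\)""")


def find_device_bias(source: str) -> list[int]:
    """Return 1-based line numbers that hardcode a CUDA device string."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if HARDCODED_CUDA.search(line)
    ]


snippet = (
    'with torch.device("cuda"):\n'
    "    flex_attention = torch.compile(flex_attention_eager, dynamic=False)\n"
    "with torch.device(GPU_TYPE):\n"
    "    pass\n"
)
print(find_device_bias(snippet))  # -> [1]: only the hardcoded line is flagged
```

Because `GPU_TYPE` is a variable rather than a quoted string, it never matches the regex, which is exactly the rewrite the lint suggests.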
hinriksnaer pushed a commit to hinriksnaer/pytorch that referenced this pull request Aug 8, 2025
@github-actions github-actions bot deleted the gh/etaf/150/head branch September 6, 2025 02:07
markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025
