
Decorator skipIfXpu disables tests when used on class #150779


🐛 Describe the bug

skipIfXpu is used on classes, for example in test_autoheuristic.py:

@skipIfXpu(msg="AutoHeuristic doesn't currently work on the XPU stack")
class AutoHeuristicTest(TestCase):

If you try to run the tests:

(pytorch) $ python test_autoheuristic.py 

----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

Test discovery also finds no tests:

(pytorch) $ python test_autoheuristic.py --discover-tests
<unittest.suite.TestSuite tests=[]>
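
One plausible explanation (a hypothetical sketch, not PyTorch's actual skipIfXpu implementation) is that the decorator wraps its target in a plain function even when the skip condition does not apply. When the target is a class, the module then no longer exposes a TestCase subclass, so unittest's loader collects nothing instead of collecting the tests and marking them skipped:

import unittest

# Hypothetical decorator illustrating the suspected failure mode: it wraps the
# decorated object in a function regardless of whether it is a class or a method.
def broken_skip_if(condition, msg=""):
    def decorator(obj):
        if condition:
            return unittest.skip(msg)(obj)

        # Bug: returning a wrapper function instead of the original class means
        # the module no longer contains a TestCase subclass, so the loader
        # discovers zero tests.
        def wrapper(*args, **kwargs):
            return obj(*args, **kwargs)
        return wrapper
    return decorator

@broken_skip_if(False, msg="example")
class ExampleTest(unittest.TestCase):
    def test_something(self):
        self.assertTrue(True)

if __name__ == "__main__":
    import sys
    suite = unittest.TestLoader().loadTestsFromModule(sys.modules[__name__])
    print(suite)  # prints <unittest.suite.TestSuite tests=[]>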

Applying skipIfXpu to an individual test method works as expected, however:

(pytorch) $ python test_aot_inductor.py -k test_fp8
../home/gabeferns/pt-envs/pytorch/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /home/gabeferns/pt-envs/pytorch/aten/src/ATen/Context.cpp:148.)
  torch._C._set_onednn_allow_tf32(_allow_tf32)
W0407 12:19:41.756000 1780910 torch/_export/__init__.py:67] +============================+
W0407 12:19:41.756000 1780910 torch/_export/__init__.py:68] |     !!!   WARNING   !!!    |
W0407 12:19:41.756000 1780910 torch/_export/__init__.py:69] +============================+
W0407 12:19:41.757000 1780910 torch/_export/__init__.py:70] torch._export.aot_compile()/torch._export.aot_load() is being deprecated, please switch to directly calling torch._inductor.aoti_compile_and_package(torch.export.export())/torch._inductor.aoti_load_package() instead.
stats [('calls_captured', 2), ('unique_graphs', 1)]
inductor [('async_compile_cache_miss', 1), ('extern_calls', 1), ('async_compile_cache_hit', 1)]
graph_break []
aten_mm_info [('aten._scaled_mm.default_s0_32_16', 1)]
./home/gabeferns/pt-envs/pytorch/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /home/gabeferns/pt-envs/pytorch/aten/src/ATen/Context.cpp:148.)
  torch._C._set_onednn_allow_tf32(_allow_tf32)
stats [('calls_captured', 2), ('unique_graphs', 1)]
inductor [('extern_calls', 1)]
graph_break []
aten_mm_info [('aten._scaled_mm.default_s0_32_16', 1)]
.
----------------------------------------------------------------------
Ran 4 tests in 12.083s

OK
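
For reference, here is a hedged sketch of one way a skip decorator can support both class and method targets without hiding tests from discovery (the name skip_if_condition is made up; this is not necessarily how PyTorch will fix skipIfXpu): special-case class targets so the decorated object is still a TestCase subclass.

import unittest

def skip_if_condition(condition, msg=""):
    """Hypothetical helper: skip tests when `condition` holds, whether applied
    to a TestCase subclass or to a single test method."""
    def decorator(obj):
        if isinstance(obj, type):
            # unittest.skip marks the class and returns it unchanged, so the
            # loader still sees a TestCase subclass and reports skipped tests.
            return unittest.skip(msg)(obj) if condition else obj
        # For functions/methods, unittest.skipIf already preserves discovery.
        return unittest.skipIf(condition, msg)(obj)
    return decorator

With this shape, discovery still finds the tests and simply reports them as skipped when the condition is true, instead of silently dropping the whole class.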

Versions

h100 devserver

cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @ZainRizvi @gujinghui @EikanWang @fengyuan14 @guangyey

Labels: high priority, module: ci (related to continuous integration), module: regression (it used to work, and now it doesn't), module: testing (issues related to the torch.testing module, not tests), module: tests (issues related to tests, not the torch.testing module), triaged
