Refactor test_jit_fuser legacy tests #34983
ZolotukhinM wants to merge 4 commits into gh/ZolotukhinM/196/base
Conversation
[ghstack-poisoned]
Differential Revision: [D20520367](https://our.internmc.facebook.com/intern/diff/D20520367) [ghstack-poisoned]
Don't use argv/globals to specify which graph executor to use: instead, set the desired mode in the test fixture. Differential Revision: [D20520367](https://our.internmc.facebook.com/intern/diff/D20520367) [ghstack-poisoned]
Previously we relied on a global setting to specify which executor to use. However, it didn't work as we expected, which can be demonstrated by the difference between the following two test runs:

```
# Run in the standard order
> pytest test/test_jit_fuser.py test/test_jit_fuser_legacy.py
test/test_jit_fuser.py ..................ss..sss.s.....s.............
test/test_jit_fuser_legacy.py ..................ss..sss.sF....s.............

# Run in the reversed order
> pytest test/test_jit_fuser_legacy.py test/test_jit_fuser.py
test/test_jit_fuser_legacy.py ............F.....s....ss.......s...FF........
test/test_jit_fuser.py ............F.....s....ss..F....s...FF........
```

This PR actually makes the tests run with the desired executor and disables the tests that happen to be failing. With this change:

```
# Run in the standard order
> pytest test/test_jit_fuser.py test/test_jit_fuser_legacy.py
test/test_jit_fuser.py ..................ss..sss.s.....s.............
test/test_jit_fuser_legacy.py ..................s...ssssss....s...ss........

# Run in the reversed order
> pytest test/test_jit_fuser_legacy.py test/test_jit_fuser.py
test/test_jit_fuser_legacy.py ..................s...ssssss....s...ss........
test/test_jit_fuser.py ..................ss..sss.s.....s.............
```

Differential Revision: [D20520367](https://our.internmc.facebook.com/intern/diff/D20520367) [ghstack-poisoned]
ghstack-source-id: b6d8f5e
Pull Request resolved: #34983
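The fixture-based approach from the commit message can be sketched as follows. This is a minimal illustration, not the PR's actual code: `set_executor_mode`/`get_executor_mode` are hypothetical stand-ins for PyTorch's internal executor flags, and the class names are made up. The point is that each test class pins the mode it needs in `setUp` and restores the previous mode in `tearDown`, so results no longer depend on which test file ran first.

```python
import unittest

# Hypothetical stand-ins for the real executor-mode flags.
_executor_mode = "profiling"

def set_executor_mode(mode):
    global _executor_mode
    _executor_mode = mode

def get_executor_mode():
    return _executor_mode

class JitFuserTestCase(unittest.TestCase):
    """Base fixture: pins the executor mode for the duration of each test."""
    executor_mode = "profiling"

    def setUp(self):
        # Save whatever mode a previously-run test file left behind...
        self._saved_mode = get_executor_mode()
        # ...and force the mode this test class actually wants.
        set_executor_mode(self.executor_mode)

    def tearDown(self):
        # Restore the saved mode, so test files stay order-independent.
        set_executor_mode(self._saved_mode)

class LegacyFuserTests(JitFuserTestCase):
    """The legacy variant only overrides the desired mode."""
    executor_mode = "legacy"

    def test_mode_is_pinned(self):
        self.assertEqual(get_executor_mode(), "legacy")
```

Because the mode is set immediately before each test and restored immediately after, running the legacy and non-legacy files in either order exercises each test under the executor it was written for.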
```python
@unittest.skipIf(not RUN_CUDA_HALF, "no half support")
@unittest.skipIf(graph_executor_mode() != ProfilingMode.LEGACY, "no half support with profiling on")
def test_cuda_half(self):
    if graph_executor_mode() == ProfilingMode.PROFILING:
```
This might actually be a bit confusing, since folks (including me) are used to looking for skipIf decorators. What's the reason for switching?
The reason is that skipIf decorators are evaluated early and thus don't do what we expect them to do in this case. The checks `graph_executor_mode() == ...` might be executed well before the test is run, and thus result in incorrect behaviour.
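The pitfall described above can be reproduced in isolation. In this sketch, a hypothetical mutable `MODE` global stands in for the executor-mode setting: the `skipIf` condition is evaluated when the class body is defined, so a mode change made afterwards (e.g. by a fixture) is invisible to the decorator, while a `skipTest` call inside the test body sees the current value.

```python
import unittest

MODE = "profiling"  # hypothetical mutable setting, standing in for the executor mode

class FuserTests(unittest.TestCase):
    # The skipIf condition is evaluated right here, at class-definition time,
    # while MODE is still "profiling" -- the skip decision is frozen early.
    @unittest.skipIf(MODE == "profiling", "no half support with profiling on")
    def test_decorator_skip(self):
        pass

    def test_runtime_skip(self):
        # Evaluated each time the test runs: sees the *current* MODE.
        if MODE == "profiling":
            self.skipTest("no half support with profiling on")

# A fixture (or another test file) flips the mode after the class was defined.
MODE = "legacy"

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(FuserTests).run(result)
# test_decorator_skip is still skipped (stale, decorator-time decision);
# test_runtime_skip runs normally because it checks MODE at run time.
```

This is exactly the order-dependence seen in the pytest runs above: whichever test file imports first freezes the decorator-time decisions for everyone.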
```python
if graph_executor_mode() == ProfilingMode.PROFILING:
    self.skipTest("no half support with profiling on")
if graph_executor_mode() == ProfilingMode.LEGACY:
    self.skipTest("broken with legacy executor too")
```
eerr.. shouldn't this test work with the legacy executor?
It should, but apparently it was never tested and was broken :)
```python
@unittest.skipIf(not RUN_CUDA, "fuser requires CUDA")
def test_rand_broadcast_cuda(self):
    if graph_executor_mode() != ProfilingMode.PROFILING:
        self.skipTest("Passes only with profiling executor")
```
why did it stop passing with the legacy executor?
I have no idea, but we never actually tested it, for the reasons I mentioned previously.
```python
@unittest.skipIf(not RUN_CUDA_MULTI_GPU, "needs non-zero device")
@enable_cpu_fuser
def test_fusion_reuse_multi_gpu(self):
    if graph_executor_mode() == ProfilingMode.LEGACY:
```
Most tests should work with the legacy executor; I'm not sure I understand why there are failures now.
💊 CircleCI build failures summary and remediations

As of commit 1587c86 (more details on the Dr. CI page):

🕵️ 6 new failures recognized by patterns. The following build failures do not appear to be due to upstream breakages.
Stack from ghstack:
Previously we relied on a global setting to specify which executor to
use. However, it didn't work as we expected, which can be demonstrated
by the difference between the following two test runs:

```
# Run in the standard order
> pytest test/test_jit_fuser.py test/test_jit_fuser_legacy.py
test/test_jit_fuser.py ..................ss..sss.s.....s.............
test/test_jit_fuser_legacy.py ..................ss..sss.sF....s.............

# Run in the reversed order
> pytest test/test_jit_fuser_legacy.py test/test_jit_fuser.py
test/test_jit_fuser_legacy.py ............F.....s....ss.......s...FF........
test/test_jit_fuser.py ............F.....s....ss..F....s...FF........
```

This PR actually makes the tests run with the desired executor and
disables the tests that happen to be failing.
With this change:

```
# Run in the standard order
> pytest test/test_jit_fuser.py test/test_jit_fuser_legacy.py
test/test_jit_fuser.py ..................ss..sss.s.....s.............
test/test_jit_fuser_legacy.py ..................s...ssssss....s...ss........

# Run in the reversed order
> pytest test/test_jit_fuser_legacy.py test/test_jit_fuser.py
test/test_jit_fuser_legacy.py ..................s...ssssss....s...ss........
test/test_jit_fuser.py ..................ss..sss.s.....s.............
```

Differential Revision: D20520367