replace onlyOnCPUAndCUDA with onlyNativeDeviceTypes#65201
kshitij12345 wants to merge 34 commits into pytorch:master
Conversation
💊 CI failures summary and remediations: As of commit d0ee5b1 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚 (This comment was automatically generated by Dr. CI.)
Codecov Report
@@ Coverage Diff @@
## master #65201 +/- ##
==========================================
+ Coverage 66.37% 66.40% +0.02%
==========================================
Files 739 735 -4
Lines 94299 94051 -248
==========================================
- Hits 62595 62451 -144
+ Misses 31704 31600 -104
# Verifies that the inplace dunders (like idiv) actually are in place
@onlyOnCPUAndCUDA
@expectedFailureMeta  # UserWarning not triggered

Note the expected failure.
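The expected-failure annotations above follow a common device-specific pattern. As an illustrative, simplified sketch (not PyTorch's actual implementation), an `expectedFailureMeta`-style decorator can swallow a failure on the meta device and turn an unexpected pass into an error, while leaving other devices untouched; the `device_type` attribute here is an assumed stand-in for the real device-generic test harness:

```python
# Hedged sketch of an expectedFailureMeta-style decorator. The `device_type`
# attribute on the test instance is an assumption about the harness, not
# PyTorch's real API.
def expectedFailureMeta(fn):
    def wrapper(self, *args, **kwargs):
        if getattr(self, "device_type", None) == "meta":
            try:
                fn(self, *args, **kwargs)
            except Exception:
                return  # expected failure on meta: treated as success
            raise AssertionError("test unexpectedly passed on meta")
        # other device types run the test normally
        return fn(self, *args, **kwargs)
    return wrapper
```

This mirrors the semantics of `unittest.expectedFailure`, but scoped to a single device type so the same test body can still run (and must pass) on cpu and cuda.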
@onlyOnCPUAndCUDA
@expectedFailureMeta  # Runtime Error not raised for meta

Note the expected failure.

self.assertEqual(sample, torch.zeros([1, 1, 1, 2], device=device, dtype=dtype))
@onlyOnCPUAndCUDA
@expectedFailureMeta  # RuntimeError: Unrecognized tensor type ID: Meta

Note the expected failure.

F.fractional_max_pool2d(x, (2, 2), output_size=output_size, _random_samples=samples)
@onlyOnCPUAndCUDA
@expectedFailureMeta  # RuntimeError: Unrecognized tensor type ID: Meta

Note the expected failure.
# Validates that each OpInfo specifies its forward and backward dtypes
# correctly for CPU and CUDA devices
@skipMeta

Note the skip: some of these tests pass on meta and some fail, hence a skip rather than an expected failure.

torch.bernoulli(torch.rand_like(x), out=x)
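A skip is the right tool when behavior on the target device is mixed, since an expected failure would error out on the parametrizations that pass. A hedged sketch of a `skipMeta`-style decorator (assumed harness attributes, not PyTorch's real code):

```python
import unittest

# Sketch of a skipMeta-style decorator: unconditionally skip on the meta
# device instead of asserting failure, because some parametrizations pass
# and some fail. `device_type` is an assumed harness attribute.
def skipMeta(fn):
    def wrapper(self, *args, **kwargs):
        if getattr(self, "device_type", None) == "meta":
            raise unittest.SkipTest("test skipped on the meta device")
        return fn(self, *args, **kwargs)
    return wrapper
```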
@onlyOnCPUAndCUDA
@expectedFailureMeta  # RuntimeError not raised

Note expectedFailureMeta.

ind.put_(ind.clone(), ind)
@onlyOnCPUAndCUDA
@expectedFailureMeta  # UserWarning not triggered

Note expectedFailureMeta.

ind.index_put_((ind.clone(),), ind)
@onlyOnCPUAndCUDA
@expectedFailureMeta  # UserWarning not triggered

Note expectedFailureMeta.

torch.masked_select(mask.clone(), mask, out=mask)
@onlyOnCPUAndCUDA
@expectedFailureMeta  # RuntimeError not raised

Note expectedFailureMeta.
- # Only runs the test on the CPU and CUDA (the native device types)
- def onlyOnCPUAndCUDA(fn):
+ # Only runs the test on the native device type (currently CPU, CUDA, Meta)

Thanks for the review pointers ;)
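For readers unfamiliar with the decorator being renamed, here is an illustrative sketch of what an `onlyNativeDeviceTypes`-style decorator does, under the assumption stated in this PR that the native device types are now cpu, cuda and meta. The `device_type` attribute and the skip mechanism are simplified stand-ins for PyTorch's device-generic test framework, not its actual implementation:

```python
import unittest

# Assumed set of native device types after this PR (cpu, cuda and meta).
NATIVE_DEVICE_TYPES = ("cpu", "cuda", "meta")

# Sketch: skip the test unless the instantiated device type is native.
def onlyNativeDeviceTypes(fn):
    def wrapper(self, *args, **kwargs):
        if getattr(self, "device_type", None) not in NATIVE_DEVICE_TYPES:
            raise unittest.SkipTest(
                f"only runs on native device types, got {self.device_type!r}"
            )
        return fn(self, *args, **kwargs)
    return wrapper
```

Under this sketch, a test instantiated for an out-of-tree device type like xla is skipped, while cpu, cuda and meta variants all run.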
mruberry left a comment:
Thanks for being so thorough and adding the review comments, @kshitij12345!
This looks good to me (and it makes this decorator consistent with onlyCPU and onlyCUDA by removing the "On"). I'll make an internal FB announcement, too.
cc @ezyang -- is this what you were expecting?
fyi @zou3519 this will break functorch, which is importing the onlyOnCPUAndCUDA decorator
The ASAN timeout is unrelated but this will need a rebase.
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Err, I don't know what I was expecting! Seems... reasonable, I guess?

Is there a replacement we could use?

@zou3519 Yes we can find a replacement -- but would you prefer to run meta, cpu, and cuda variants of your tests or maintain just running the cpu and cuda variants?

Sorry for the delayed reply. I excised all the usages of onlyOnCPUAndCUDA in functorch (it turns out functorch wasn't actually using this decorator).

Thanks for the update, @zou3519. @kshitij12345, this just needs a rebase and then ping me and I'll merge it!
CI Flow Status: ⚛️ CI Flow Ruleset - Version:
You can add a comment to the PR and tag @pytorchbot with the following commands:

# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.
@mruberry this should be ready once the rocm build finishes. (Also maybe we should run it on the ci-all workflow?)

@mruberry gentle ping :)
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Reference #53849.
Replace onlyOnCPUAndCUDA with onlyNativeDeviceTypes, which includes cpu, cuda and meta.
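The migration itself is a mechanical one-line rename per use site. The decorator names come from this PR; the test name and body in the snippet below are hypothetical illustrations:

```python
# Mechanical migration sketch: rename the decorator in a test snippet.
# The decorator names are from the PR; `test_masked_select_out` is a
# hypothetical example, not a specific test from the diff.
old_decorator = "@onlyOnCPUAndCUDA"
new_decorator = "@onlyNativeDeviceTypes"

snippet = """\
    @onlyOnCPUAndCUDA
    @expectedFailureMeta  # RuntimeError not raised
    def test_masked_select_out(self, device):
        ...
"""
migrated = snippet.replace(old_decorator, new_decorator)
print(migrated)
```

Tests that cannot yet run on meta are annotated alongside the rename with @expectedFailureMeta or @skipMeta, as the review comments above document case by case.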