OpInfo: nn.functional.conv_transpose2d #62882
krshrimali wants to merge 18 commits into pytorch:master
Conversation
🔗 Helpful links
💊 CI failures summary and remediations

As of commit b480838 (more details on the Dr. CI page):

🕵️ 1 new failure recognized by patterns. The following CI failures do not appear to be due to upstream breakages:

| Job | Step | Action |
|---|---|---|
| | Fail if there were any warnings | 🔁 rerun |

1 job timed out: pytorch_linux_xenial_py3_clang7_asan_test1

ci.pytorch.org: 1 failed

This comment was automatically generated by Dr. CI (expand for details).
zou3519 left a comment:

This looks good! I had some suggestions for more test cases.
```python
dtypesIfCPU=floating_types(),
dtypesIfCUDA=floating_types_and(torch.float16, torch.bfloat16),
```
dtype tests seem to be failing
Apologies. This is because conv_transpose2d only supports torch.bfloat16 on CUDA versions > 11.0. Should be fixed in the latest commit.
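For context, the gating described in the reply can be sketched as a small version check. This is a hypothetical standalone helper (`cuda_supported_dtypes` is not a real PyTorch function), shown only to illustrate the dtype selection, assuming bfloat16 support arrives with CUDA 11:

```python
def cuda_supported_dtypes(cuda_version):
    """Hypothetical helper: include bfloat16 only when the CUDA
    toolkit version (e.g. "11.3") is at least 11.0."""
    dtypes = ["float32", "float64", "float16"]
    if cuda_version is not None:
        major = int(cuda_version.split(".")[0])
        if major >= 11:
            dtypes.append("bfloat16")
    return dtypes
```

In the real OpInfo this corresponds to choosing `dtypesIfCUDA` based on the detected CUDA version rather than unconditionally including `torch.bfloat16`.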
```python
# Ordered as shapes for: input, weight, bias, stride, padding, output_padding, groups
cases = (((1, 3, 4, 4), (3, 3, 3, 3), (3), (2, 2), 2, (1, 1), 1),
         ((2, 2, 4, 4), (2, 2, 4, 5), (4), (3, 3), 1, (2, 2), 2),
         ((1, 1, 4, 5), (1, 1, 4, 3), (1), 2, 1, 1, 1),
```
`(1)` doesn't actually make a tuple; you want `(1,)`.
Thanks for the suggestion, @zou3519. In the recent commit I used a single number instead, since it represents a shape and is passed to make_arg, which accepts ints as well as tuples.
Update: having an int there raises mypy errors (the combined element type becomes builtins.object, which is not iterable). A tuple, (1,) instead of 1, is the better solution. Fixed in the recent commit. Thanks!
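For context, the pitfall discussed above is that parentheses alone don't create a tuple in Python; the trailing comma does. A minimal sketch of both the syntax and why keeping every shape a tuple helps:

```python
# (1) is just a parenthesized int; the trailing comma makes the tuple.
assert (1) == 1 and isinstance((1), int)
assert isinstance((1,), tuple)

# With every shape kept as a tuple, the cases list has a uniform element
# type, so a type checker can see that each bias shape is iterable.
cases = (((1, 3, 4, 4), (3, 3, 3, 3), (1,), (2, 2), 2, (1, 1), 1),)
for input_shape, weight, bias, stride, padding, output_padding, groups in cases:
    assert isinstance(bias, tuple)  # would be a bare int with (1)
```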
```python
make_arg = partial(make_tensor, device=device, dtype=dtype, requires_grad=requires_grad)

# Ordered as shapes for: input, weight, bias, stride, padding, output_padding, groups
cases = (((1, 3, 4, 4), (3, 3, 3, 3), (3), (2, 2), 2, (1, 1), 1),
```
Let's add some more test cases:
- could we modify some of the stride tuples to have different numbers?
- padding can be a tuple
- We're not testing the dilation argument (https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose2d.html)
- could we modify output_padding tuples to have different numbers?
Thanks for the suggestions, @zou3519 - I've made the revisions. Please let me know if they sound good to you. :)
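Applied to the case list above, the review suggestions might look like the sketch below. The shapes and values are illustrative, not the exact cases the commit landed, and the tuples are extended with a trailing dilation entry; `out_size` is a hypothetical helper that just restates the documented ConvTranspose2d output-size formula:

```python
# Ordered as: input, weight, bias, stride, padding, output_padding, groups, dilation
cases = (
    # asymmetric stride, tuple padding
    ((1, 3, 4, 4), (3, 3, 3, 3), (3,), (3, 2), (1, 2), (1, 1), 1, (1, 1)),
    # asymmetric output_padding, non-default dilation, groups > 1
    ((2, 2, 4, 4), (2, 2, 4, 5), (4,), (3, 3), 1, (2, 1), 2, (4, 4)),
)

def out_size(in_, stride, padding, kernel, output_padding, dilation):
    # ConvTranspose2d output size along one spatial dim (from the docs):
    # out = (in - 1) * stride - 2 * padding + dilation * (kernel - 1)
    #       + output_padding + 1
    return ((in_ - 1) * stride - 2 * padding
            + dilation * (kernel - 1) + output_padding + 1)

# Sanity-check that each sketched case yields a positive output size.
assert out_size(4, 3, 1, 3, 1, 1) > 0
```

Note that output_padding must stay smaller than either the stride or the dilation along each dim, which the sketched values respect.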
…thub.com/krshrimali/pytorch into opinfo/high_priority/nn/conv_transpose2d
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
This likely broke the Windows test (it's flaky and complaining about a small mismatch): https://github.com/pytorch/pytorch/runs/3327143486. Can you please adjust the tolerance?
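For context, the comparison that is flaking is an allclose-style check, and "adjust the tolerance" means widening the rtol/atol bounds for the affected dtype. A minimal sketch of the criterion (not PyTorch's actual implementation; the numbers are illustrative):

```python
def close(actual, expected, rtol, atol):
    # allclose-style criterion: |actual - expected| <= atol + rtol * |expected|
    return abs(actual - expected) <= atol + rtol * abs(expected)

# A small mismatch that fails at a tight tolerance...
assert not close(1.00002, 1.0, rtol=1.3e-6, atol=1e-5)
# ...passes once atol is widened slightly for the flaky test.
assert close(1.00002, 1.0, rtol=1.3e-6, atol=3e-5)
```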
Summary: See pytorch/functorch#78 and #54261. cc: mruberry zou3519 Chillee
Pull Request resolved: #62882
Reviewed By: bdhirsh
Differential Revision: D30280804
Pulled By: zou3519
fbshipit-source-id: e40cdf43e98c1f11e45df6b8bc13110b4d29c45f

Summary: Addresses comment: #62882 (comment). cc: mruberry ngimel
Pull Request resolved: #63389
Reviewed By: mruberry
Differential Revision: D30377481
Pulled By: ngimel
fbshipit-source-id: 0fa21acc3503c259c9b27463e8555247c43d9e2e

Summary: Reference: #54261. Reference: pytorch/functorch#78. Mostly inspired from #62882.
Pull Request resolved: #63517
Reviewed By: heitorschueroff
Differential Revision: D30993855
Pulled By: zou3519
fbshipit-source-id: 7402f99addb4ef8f19c2ce1a09ed9006e737cc7e
See pytorch/functorch#78 and #54261.
cc: @mruberry @zou3519 @Chillee