OpInfo for nn.functional.avg_pool2d #62455
Conversation
💊 CI failures summary and remediations

As of commit a2a23d5 (more details on the Dr. CI page):

1 failure not recognized by patterns:
- ci.pytorch.org: 1 failed

This comment was automatically generated by Dr. CI.
@zou3519 and @heitorschueroff would you review this?
cases = (((1, 3, 9, 9), (3, 3), (1, 1), (1,), True, False, 2),
         ((1, 1, 4, 4), (2, 2), (1, 1), (0,), False, True, -2),
         ((1, 2, 6, 6), (4, 4), (2, 2), (2,), True, True, 3))
Let's add some more cases.
- Could we have one that is just a single kernel size, ((1, 3, 9, 9),)?
- avg_pool2d advertises that stride, padding, and kernel_size can each be a single number, so we should test something like ((1, 3, 9, 9), 3, 1, 1).
- Could we have a case where the two strides are not the same, and another where the two paddings are not the same?
- We aren't testing divisor_override=None.
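Taken together, the suggestions above might extend the cases tuple roughly as follows. This is only a sketch: the three added entries (and their exact shapes) are assumptions for illustration, not the PR's final code.

```python
# Each entry: (input_shape, kernel_size, stride, padding,
#              ceil_mode, count_include_pad, divisor_override)
cases = (
    ((1, 3, 9, 9), (3, 3), (1, 1), (1,), True, False, 2),
    ((1, 1, 4, 4), (2, 2), (1, 1), (0,), False, True, -2),
    ((1, 2, 6, 6), (4, 4), (2, 2), (2,), True, True, 3),
    # single numbers for kernel_size, stride, and padding (hypothetical)
    ((1, 3, 9, 9), 3, 1, 1, False, False, None),
    # asymmetric stride and asymmetric padding; divisor_override=None (hypothetical)
    ((2, 3, 8, 8), (3, 3), (2, 1), (1, 0), False, True, None),
    # shape plus a bare kernel size only, relying on defaults (hypothetical)
    ((1, 3, 9, 9), 3),
)
```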
Thanks for taking a look @zou3519! I've addressed the suggestions in the latest commit. :)
def sample_inputs_avgpool2d(op_info, device, dtype, requires_grad, **kwargs):
    make_arg = partial(make_tensor, device=device, dtype=dtype, requires_grad=requires_grad)

    cases = (((1, 3, 9, 9), (3, 3), (1, 1), (1,), True, False, 2),
Can you add a comment describing what each element represents, please?
Done, thanks for the suggestion @heitorschueroff!
             ((1, 1, 4, 4), (2, 2), (1, 1), (0,), False, True, -2),
             ((1, 2, 6, 6), (4, 4), (2, 2), (2,), True, True, 3))
    return [SampleInput(make_arg(input_shape), args=(kernel_size, stride, padding, ceil_mode,
Nit: can you do this with a regular for loop instead of a list comprehension, for readability?
I really like list comprehensions. I think whether or not to do a list comprehension or do a for-loop should not be something we suggest in reviews because it's a matter of personal taste and there is no official PyTorch Style Guide.
Done, thanks for the suggestion, @heitorschueroff!
I'm sorry, I only saw your comment just now, @zou3519! Since I also had to add a single case with no args except the kernel size, I kept a generator approach for now. Hope that's fine.
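The generator approach mentioned here can be sketched as follows. This is a hypothetical simplification: it only shows how variable-length cases (some with just a shape and a kernel size, some with all args) can be unpacked, and stands in for the real helper, which builds SampleInput objects with make_tensor.

```python
def sample_inputs_avgpool2d_sketch(cases):
    """Sketch of a generator-based sample-input builder (illustrative only)."""
    def generator():
        for case in cases:
            # A case is (input_shape, *extra_args); extra_args may be empty
            # or shorter than the full argument list, exercising defaults.
            input_shape, *args = case
            yield (input_shape, tuple(args))
    return list(generator())

cases = (((1, 3, 9, 9), 3),   # shape plus a single-number kernel size
         ((1, 2, 6, 6),))     # shape only: no extra args
samples = sample_inputs_avgpool2d_sketch(cases)
```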
> I really like list comprehensions. I think whether or not to do a list comprehension or do a for-loop should not be something we suggest in reviews because it's a matter of personal taste and there is no official PyTorch Style Guide.
List comprehensions are great for short one-liners, but I also agree that it is a matter of preferred style and since we don't enforce a style guide it is ok to ignore this suggestion.
Either style works, so this is fine. :) I wouldn't say that one is nicer than the other, because it depends on who is reading the code.
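For illustration, the two styles under discussion are equivalent. A toy example (not the PR's code), transforming a list of hypothetical input shapes both ways:

```python
shapes = [(1, 3, 9, 9), (1, 1, 4, 4), (1, 2, 6, 6)]

# List-comprehension form: builds the result in one expression.
doubled_comp = [tuple(2 * d for d in s) for s in shapes]

# Equivalent for-loop form: same result, spelled out step by step.
doubled_loop = []
for s in shapes:
    doubled_loop.append(tuple(2 * d for d in s))
```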
Thanks for the valuable inputs, @zou3519 and @heitorschueroff! I appreciate it. :)
heitorschueroff left a comment:
@krshrimali This is looking great, thank you for the contribution! I think there is room to explore a little more with the test cases; for instance, it looks like all test cases have size 1 for the batch dimension. However, as a first PR adding an OpInfo this is already good. Let's wait for the tests to run. Thanks!
Thanks, @heitorschueroff! I had two more cases in mind, which I added in the recent commit.
cc: @zou3519 as well. Hope this looks complete now. Thank you!
Gentle ping: @zou3519 and @heitorschueroff. The failures seem unrelated to this PR; please let me know if there is anything else required here. Thanks for your reviews and help.
@heitorschueroff has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
LGTM! We appreciate your contribution.
@heitorschueroff merged this pull request in 5dbcd56.
Reverting; this causes flakiness (adaptive_avg_pool backward is nondeterministic).
Please see pytorch/functorch#78
cc: @mruberry @zou3519