[V2] Remove python_default_init from ATen and use Optional #15234
wanchaol wants to merge 7 commits into pytorch:master
Conversation
facebook-github-bot
left a comment
@wanchaol has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
aten/src/ATen/native/ReduceOps.cpp
nit: I slightly prefer "pOpt" to "p_" (and a bunch of other places in this PR), because it makes it obvious which I'm using (it's easy to miss the "_").
aten/src/ATen/native/SpectralOps.cpp
same here, prefer "Opt" suffix.
test/common_nn.py
nit: don't make only whitespace changes to files.
tools/autograd/derivatives.yaml
this function is a little weird with the positional argument in the middle -- if you look at the documentation it's documented as "torch.norm(input, p='fro', dim=None, keepdim=False, out=None)" -- which implies that dim and keepdim should be optional as well. What is the plan here?
Yeah, that's true for dim, but not for keepdim=False, I suppose, since it does not accept None. I think what was done before is that python_arg_parser parses these two signatures:
norm(Scalar p=2)
norm(Scalar? p, int64_t dim, bool keepdim=False)
for different calls, and these two have slightly different behaviors. I do think we could merge these two, and I will try to do it in a follow-up PR.
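A rough sketch of what merging the two overloads into one optional-based signature might look like. This is hypothetical, not ATen's implementation: `resolve_norm_args` and `NormArgs` are made-up names, and plain `std::optional` stands in for `c10::optional`.

```cpp
#include <optional>

// Hypothetical sketch: one signature covering both
//   norm(Scalar p=2)                               and
//   norm(Scalar? p, int64_t dim, bool keepdim=False)
// by making p and dim optional.
struct NormArgs {
  double p;
  std::optional<long> dim;
  bool keepdim;
};

NormArgs resolve_norm_args(std::optional<double> p = std::nullopt,
                           std::optional<long> dim = std::nullopt,
                           bool keepdim = false) {
  // p falls back to 2 whether the caller omits it or passes nullopt
  // explicitly; dim stays nullopt to mean "reduce over all dims".
  return NormArgs{p.value_or(2.0), dim, keepdim};
}
```

With this shape the single entry point covers the `norm(t)`, `norm(t, p)`, and `norm(t, p, dim, keepdim)` call patterns that previously needed two parsed signatures.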
this looks pretty sketchy -- how does this work if p wasn't specified?
This is a backward function; since p will also be specified in the forward call (either a value passed by the caller or the default of 2), the backward function should always get p_, I suppose. To make sure we get it, I will add a value_or here to specify the default value as well.
the question I had was whether the above logic was right. I don't think it is because the user can explicitly pass a nullopt in C++.
Yes, that was sketchy actually. I updated it with the value_or semantics so that the case where p is not specified is handled correctly. Please review again when you have time :)
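The fix being discussed can be sketched like this (a minimal stand-in using `std::optional` for `c10::optional`; `norm_backward_p` is a hypothetical helper name, not ATen's actual backward):

```cpp
#include <optional>

// Even though the Python binding always fills in p (default 2), a C++
// caller can pass nullopt explicitly, so the backward must not assume
// p holds a value. value_or supplies the same default the forward used.
double norm_backward_p(std::optional<double> p) {
  return p.value_or(2.0);  // matches the forward default p=2
}
```

Checking `has_value()` and dereferencing would have crashed (or been UB) on an explicit `nullopt` from C++, which is exactly the sketchy case raised above.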
gchanan
left a comment
looks good. Just a note: in the future it's nicer to do merges instead of force pushes, because GitHub doesn't really track review comments across force pushes.
sounds good, thanks for the suggestion!
facebook-github-bot
left a comment
@wanchaol has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Optional clean-up. This PR removes python_default_init from the yaml files and the code-gen, and utilizes the optional type to do the work. It also fixes the bug from #13149 so that as_strided backward is adopted correctly. Fixes #9941. Pull Request resolved: pytorch/pytorch#15234 Differential Revision: D13502044 Pulled By: wanchaol fbshipit-source-id: 774b61fc4414482cf11d56e22bd0275aefb352a4
Summary: This PR does three things: ~~Allow `int64_t?` in the function schema, which provides an elegant way of implementing nullable int arguments, as discussed in #15208 (review). Originally implemented in #15235. Example:

```yaml
- func: myop(Tensor self, int64_t? dim=None) -> Tensor
  variants: function
```

cc: zou3519~~ Edit: implemented in #15234

Previously tried in #12064. There was a problem that C++ does not have kwarg support, which makes it confusing whether `unique(t, 1)` actually means `unique(t, dim=1)` or `unique(t, sorted=1)`. Now I think I have a better idea of how to implement this: there are two ATen operators, `unique` and `unique_dim`. `unique` has the same signature as in Python and is exported to both Python and C++. `unique_dim` has the signature `unique_dim(tensor, dim, sorted=False, return_inverse=False)` and is only exported to C++, where it can be used more naturally by a C++ user.

Differential Revision: D13540278 Pulled By: wanchaol fbshipit-source-id: 3768c76a90b0881f565a1f890459ebccbdfe6ecd
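The `unique` / `unique_dim` split described above can be sketched as follows. The names follow the PR's design, but the bodies are simplified stand-ins operating on `std::vector<int>` instead of tensors, and `std::optional` stands in for `c10::optional`; the `sorted` and `return_inverse` flags are not exercised here.

```cpp
#include <algorithm>
#include <optional>
#include <vector>

// C++-only variant: dim is a required positional argument, so a C++
// caller writes unique_dim(t, 1) with no kwarg ambiguity.
std::vector<int> unique_dim(std::vector<int> t, long /*dim*/,
                            bool /*sorted*/ = false) {
  // For this 1-D stand-in, per-dim unique degenerates to plain unique.
  std::sort(t.begin(), t.end());
  t.erase(std::unique(t.begin(), t.end()), t.end());
  return t;
}

// Python-facing signature: dim is optional and dispatches to unique_dim
// only when the caller provides it.
std::vector<int> unique(std::vector<int> t, bool sorted = false,
                        std::optional<long> dim = std::nullopt) {
  if (dim.has_value()) {
    return unique_dim(std::move(t), *dim, sorted);
  }
  std::sort(t.begin(), t.end());
  t.erase(std::unique(t.begin(), t.end()), t.end());
  return t;
}
```

The design choice is that the ambiguity only exists in C++ (no keyword arguments), so giving C++ users a separate operator with `dim` in a fixed position sidesteps it without changing the Python API.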