
[V2] Remove python_default_init from ATen and use Optional #15234

Closed
wanchaol wants to merge 7 commits into pytorch:master from wanchaol:default_init_2

Conversation

@wanchaol
Collaborator

@wanchaol wanchaol commented Dec 14, 2018

Optional clean-up. This PR removes python_default_init from the YAML files and the code-gen, and uses the optional type to do the work.

This also fixes the bug in #13149 so that as_strided backward is adopted correctly.

Fixes #9941

@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Dec 14, 2018
@wanchaol wanchaol changed the title [V2] Remove python_default_init from ATen [V2] Remove python_default_init from ATen and use Optional Dec 15, 2018
@wanchaol wanchaol force-pushed the default_init_2 branch 2 times, most recently from cb50124 to 5508f04 Compare December 18, 2018 01:11
@wanchaol wanchaol requested review from gchanan, ssnl and zdevito December 18, 2018 01:12
Contributor

@facebook-github-bot facebook-github-bot left a comment


@wanchaol has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Contributor

nit: I slightly prefer "pOpt" to "p_" (and a bunch of other places in this PR), because it makes it obvious which I'm using (it's easy to miss the "_").

Contributor

same here, prefer "Opt" suffix.

Contributor

nit: don't make only whitespace changes to files.

Contributor

this function is a little weird with the positional argument in the middle -- if you look at the documentation it's documented as "torch.norm(input, p='fro', dim=None, keepdim=False, out=None)" -- which implies that dim and keepdim should be optional as well. What is the plan here?

Collaborator Author

Yeah, that's true for dim, but not for keepdim=False I suppose, since it does not accept None. I think what was done before is that python_arg_parser parses these two signatures:

norm(Scalar p=2)
norm(Scalar? p, int64_t dim, bool keepdim=False)

for different calls, and these two have slightly different behaviors. I do think that we could merge these two, and I will try to do it in a follow-up PR.

Contributor

this looks pretty sketchy -- how does this work if p wasn't specified?

Collaborator Author

This is a backward function; since p will also be specified in the forward call (either a value passed by the caller or the default 2), the backward function should always get p_, I suppose. To make sure we get it, I will add a value_or here to specify the default value as well.

Contributor

The question I had was whether the above logic was right. I don't think it is, because the user can explicitly pass a nullopt in C++.

Collaborator Author

Yes, that was sketchy actually. I updated it with the value_or semantics so that the case where p is not specified is handled correctly. Please review again when you have time :)

Contributor

@gchanan gchanan left a comment


Looks good. Just a note for the future: it's nicer if you do merges instead of force pushes, because GitHub doesn't really track review comments across force pushes.

@wanchaol
Collaborator Author

sounds good, thanks for the suggestion!

Contributor

@facebook-github-bot facebook-github-bot left a comment


@wanchaol has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Dec 20, 2018
Summary:
Optional clean-up. This PR removes python_default_init from the YAML files and the code-gen, and uses the optional type to do the work.

This also fixes the bug in #13149 so that as_strided backward is adopted correctly.

Fixes #9941
Pull Request resolved: pytorch/pytorch#15234

Differential Revision: D13502044

Pulled By: wanchaol

fbshipit-source-id: 774b61fc4414482cf11d56e22bd0275aefb352a4
zasdfgbnm added a commit to zasdfgbnm/pytorch that referenced this pull request Dec 21, 2018
@zasdfgbnm zasdfgbnm mentioned this pull request Jan 7, 2019
facebook-github-bot pushed a commit that referenced this pull request Jan 21, 2019
Summary:
This PR does three things:

~~Allow `int64_t?` in function schema, which provides an elegant way of implementing nullable int arguments, as discussed in #15208 (review)~~

~~Originally implemented in #15235~~

~~Example:~~

```yaml
- func: myop(Tensor self, int64_t? dim=None) -> Tensor
  variants: function
```

~~cc: zou3519~~

Edit: implemented in #15234

Previously tried in #12064. The problem was that C++ does not have kwarg support, which makes it confusing to know whether `unique(t, 1)` actually means `unique(t, dim=1)` or `unique(t, sorted=1)`.

Now I think I have a better idea of how to implement this: there are two ATen operators, `unique` and `unique_dim`. `unique` has the same signature as in Python and is exported to both Python and C++. `unique_dim` has the signature `unique_dim(tensor, dim, sorted=False, return_inverse=False)` and is only exported to C++, where it can be used more naturally by a C++ caller.

Differential Revision: D13540278

Pulled By: wanchaol

fbshipit-source-id: 3768c76a90b0881f565a1f890459ebccbdfe6ecd
zdevito pushed a commit to zdevito/ATen that referenced this pull request Jan 21, 2019
@ezyang ezyang added the merged label Jun 25, 2019

Labels

oncall: jit Add this issue/PR to JIT oncall triage queue


Development

Successfully merging this pull request may close these issues.

python_default_init behavior incorrect when auto generating aten functions

4 participants