Remove (almost all) TensorOptions from native_functions.yaml #17385
cpuhrsch wants to merge 24 commits into pytorch:master
Conversation
Force-pushed from cc63fd9 to 325fd73
facebook-github-bot left a comment:
@cpuhrsch has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Note: It's debatable whether we want to add support for
Force-pushed from b1680cc to 26e6924
This is the changeset of the generated code.
These defaults are wrong:
- dtype: the default is the default tensor type's dtype.
- layout: strided is correct for some of these, but wrong for others (e.g. sparse_coo_tensor below has a default of strided, which is clearly wrong).
- device: the default device is the device of the default tensor type.
Is there a reason we can't make the defaults for all of these None if they are determined at runtime? (It's fine to have a real default for layout, but it should be correct.)
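The reviewer's suggestion — None as a sentinel that is resolved at call time rather than a hard-coded static default — can be sketched as follows. This is a minimal illustration, not the actual codegen; the names `_DEFAULT_DTYPE` and `empty` are stand-ins for the global default tensor type and a factory function.

```python
# Hypothetical sketch: a None default defers to the current global default
# tensor type at call time, instead of baking in a possibly-wrong static
# default such as layout=strided for every factory function.
_DEFAULT_DTYPE = "float32"  # stands in for the default tensor type's dtype

def empty(size, dtype=None, layout=None, device=None):
    if dtype is None:
        dtype = _DEFAULT_DTYPE  # resolved at runtime, not fixed in the signature
    if layout is None:
        layout = "strided"      # a real default is fine here when it is correct
    if device is None:
        device = "cpu"          # device of the default tensor type
    return size, dtype, layout, device
```

With this shape, changing the global default tensor type changes what every factory call without explicit arguments produces, which is the runtime behavior the static defaults in the generated signatures fail to express.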
These defaults are determined based on what the JIT generates. They are indeed not logical, but that doesn't mean they're wrong in the current implementation.
Marking this for further edits: these shouldn't have defaults.
These types and defaults look un-Pythonic. You don't have to change this now, but what's the long-term plan? For instance, ScalarType -> Dtype(?): in Python this is "torch.dtype", but "dtype dtype" looks a bit strange. Maybe we should just write out the full types, e.g.:
func: bartlett_window(int window_length, *, torch.dtype dtype=None, torch.layout layout=torch.strided, torch.device device=None) -> Tensor
Although then I guess you'd have to write out torch.Tensor, which seems annoying.
Thoughts? CC @zdevito.
Keep in mind, this also needs to be used in the JIT.
This looks wrong -- doesn't it mean you have to provide values for dtype, layout, and device?
Even more so, how can a keyword argument "options" not require a default value?
If you do provide a default value here, it would logically make sense to provide one for the generated C++ code as well. However, you'll then get conflicts between empty_like(Tensor self) and empty_like(Tensor self, TensorOptions options={}), because it's ambiguous which symbol a bare call should bind to.
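The overload conflict described here is between generated C++ symbols; a rough Python model of the binding rule makes the ambiguity visible. All names below (`make_overload_table`, `resolve`) are illustrative, not part of the actual codegen.

```python
# Hypothetical model of overload binding: once the "options" parameter has a
# default, a one-argument call matches BOTH generated signatures.
def make_overload_table():
    # Two generated signatures sharing one name:
    return [
        ("empty_like", ("self",)),            # empty_like(Tensor self)
        ("empty_like", ("self", "options")),  # empty_like(Tensor self, TensorOptions options={})
    ]

def resolve(signatures, name, nargs_provided, has_default_for_options=True):
    """Return every signature a call with nargs_provided arguments could bind to."""
    matches = []
    for sig_name, params in signatures:
        if sig_name != name:
            continue
        # With a default, "options" may be omitted, lowering the minimum arity.
        min_args = len(params) - (1 if has_default_for_options and "options" in params else 0)
        if min_args <= nargs_provided <= len(params):
            matches.append(params)
    return matches
```

With the default in place, `resolve(make_overload_table(), "empty_like", 1)` returns both signatures, i.e. the call is ambiguous; without the default, only the one-parameter signature matches.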
Ah, OK, the issue is that this isn't combined with the definition above. Do you know why that is?
Force-pushed from 2d6021f to 8bfda98
Force-pushed from 7c7af73 to c850bc0
Still needs work / can this be removed?
Yeah, this seems like an improvement -- previously we had to mess with the dtype to get this to work right.
Yes, but this appears to be restricted to tracing.
# device is specified as an IntArrayRef of { at::Device::Type, device_id }
{'name': 'device', 'type': ['Device', 'Device?'], 'is_nullable': False, 'annotation': None},

def copy_arguments(arguments):
    new_arguments = []
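The diff above is truncated after the first two lines of copy_arguments. A hedged guess at how such a helper typically continues: deep-copy each argument dict so later codegen passes can mutate the copies without touching the originals. Everything past `new_arguments = []` below is an assumption, not the actual PR code.

```python
import copy

def copy_arguments(arguments):
    # Assumed completion: return independent copies of the argument dicts,
    # so downstream passes can rewrite them safely.
    new_arguments = []
    for argument in arguments:
        new_arguments.append(copy.deepcopy(argument))
    return new_arguments
```

A quick check of the independence property: mutating a copied entry leaves the original argument list untouched.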
Summary: Stacked on top of pytorch/pytorch#17386. Brings us to 1014/1106 of writing.
Pull Request resolved: pytorch/pytorch#17385
Differential Revision: D14248008
Pulled By: cpuhrsch
fbshipit-source-id: 033e00de91e3edf7ae01ca03ebe436c0446b3b5c
* upstream/master: (87 commits)
- Make Variable::set_data non-const; cosmetic fixes.
- remove warning for upsample code (pytorch#17921)
- Optimize TileOp (pytorch#17290)
- Optimize channel_stats_op (pytorch#16243)
- enable shape inference for elementwise operators (pytorch#17885)
- Remove remaining test jit expects redux (pytorch#17924)
- Handle Scalars Better (pytorch#17875)
- Fixed a formatting issue in doc comments (pytorch#17505)
- Add nbytes, itemsize, element_size to at::Tensor. (pytorch#17810)
- Fix lint in test_distributions.py
- Fix lint in test_jit.py
- Fix lint errors in test_autograd
- Added a few extra python bindings to help with walking the IR graph from Python (pytorch#17822)
- kthvalue consistency with sort in the presence of NaN (pytorch#17824)
- Fix minor grammatical mistakes in torch/nn/modules/loss.py (pytorch#17892)
- Remove (almost all) TensorOptions from native_functions.yaml (pytorch#17385)
- Restore full Windows tests (pytorch#17102)
- Prevent VS2017 from emitting ambiguous symbol errors (second time)
- Fix windows test hang (pytorch#17778)
- torch.btrifact for tensors with greater than 3 dimensions (pytorch#14964)
- ...
Stacked on top of #17386
Brings us to 1014/1106 of writing.