Wrap unbiased flag in var, std, varall, stdall #2043
lantiga wants to merge 1 commit into pytorch:master from
Conversation
torch/lib/TH/generic/THTensorMath.c (Outdated)
I too don't have a strong preference.
apaszke left a comment:
Looks good now. Just one question regarding the tests.
test/test_torch.py (Outdated)
If we keep the 'negate' flag, then we have to patch ATen as well; otherwise it will be ignored and the options will be flipped. There is already an `if_true:` and `if_false:` in cwrap; can we use or modify those?
@zdevito The issue with `if_true` is that it turns the generated variable (
Can cwrap be modified to not transform `if_true`/`if_false`, the few places where it is used modified to quote the strings, and then use `if_true: false, if_false: true` instead of negate? That would be easier to support in the ATen wrapper than adding another flag that modifies arguments.
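Whichever mechanism is used, the transformation itself is small: the Python-level boolean must be flipped before it reaches the C-level flag, because TH encodes the estimator choice with the opposite polarity. A minimal sketch in plain Python (hypothetical helper name, not actual cwrap plugin code):

```python
def th_flag_from_unbiased(unbiased):
    """Map the Python-level `unbiased` kwarg to TH's int flag.

    TH/THC use flag=0 for the unbiased (n - 1) estimator and flag=1
    for the biased (n) estimator, i.e. the logical negation of the
    Python-facing `unbiased` keyword argument.
    """
    return 0 if unbiased else 1
```

This is exactly the negation a `NegatedArguments`-style plugin would emit into the generated wrapper code.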
I'll look into it. |
@zdevito I extended `BoolOption` to avoid changing the type to (
The build now fails because ATen interprets the `0` and `1` emitted by cwrap as strings in `BoolOption`s.
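The failure mode here is the usual literal-versus-string trap: once codegen emits the flag as the string `"0"` instead of the number `0`, naive boolean handling gives the wrong answer. A hedged Python illustration of the pitfall (ATen's actual codegen is C++; this only shows the principle):

```python
# If a code generator emits the flag as the *string* "0" rather than
# the integer 0, truthiness checks silently invert the intent:
flag_as_string = "0"
flag_as_int = 0

print(bool(flag_as_string))       # True: any non-empty string is truthy
print(bool(flag_as_int))          # False
print(bool(int(flag_as_string)))  # False: parse to a number first
```

The fix is to parse or type the emitted value as a boolean/integer before it reaches the option machinery, rather than comparing strings.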
I squashed my commits and rebased onto master. |
The first fix is for @zdevito: please let me know how I should go about this (I can open a PR in ATen, or maybe you want to just go ahead). The second error is related to ambiguous code generation: the signature is ambiguous because we have two keyword arguments of the same type in Python and either of them can be specified. It looks to me like a corner case in need of a general fix.
@lantiga I think the ATen workflow we are going for, for PRs motivated by PyTorch changes, is the same as for TH/THC etc.: make the changes in PyTorch and we will review them here, and then someone will forward them to the ATen repo. We have tried to work around the type-signature ambiguity in ATen; I can't remember exactly what we decided on.
@lantiga I fixed the bug that caused the ambiguous overload in ATen. It should filter them correctly now.
I updated to the latest ATen on pytorch master, so it has Zach's fixes.
This is now merged into master. Thanks Luca!
Great @soumith, thanks! |
Solves issue #1412.

`var`, `std`, `varall` and `stdall` now accept an `unbiased` keyword argument. As pointed out by @fmassa in the original issue, `TH` and `THC` have an `int flag` argument that is `0` for computing `var` without bias and `1` with bias. Since the Python and `autograd` API have an `unbiased` flag with the opposite semantics, I added a new `cwrap` plugin (`NegatedArguments`) that negates the value of the unpacked argument before assignment.
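For reference, the two estimators the flag switches between can be sketched in plain Python (a minimal sketch of the statistics, not PyTorch's implementation):

```python
def variance(xs, unbiased=True):
    """Sample variance of a sequence of numbers.

    With unbiased=True the sum of squared deviations is divided by
    n - 1 (Bessel's correction); with unbiased=False it is divided
    by n, matching TH's biased mode.
    """
    n = len(xs)
    mean = sum(xs) / n
    ss = sum((x - mean) ** 2 for x in xs)
    return ss / (n - 1) if unbiased else ss / n

print(variance([1.0, 2.0, 3.0], unbiased=True))   # 1.0
print(variance([1.0, 2.0, 3.0], unbiased=False))  # 2/3
```

With the new keyword argument, `x.var(unbiased=False)` selects the second estimator, whereas the old TH-level convention would have required passing `flag=1`.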