
Wrap unbiased flag in var, std, varall, stdall#2043

Closed
lantiga wants to merge 1 commit into pytorch:master from lantiga:unbiased_var

Conversation

@lantiga
Contributor

@lantiga lantiga commented Jul 10, 2017

Solves issue #1412

var, std, varall and stdall now accept an unbiased keyword argument.

As pointed out by @fmassa in the original issue, TH and THC have an int flag argument that is 0 for computing var without bias and 1 with bias. Since the Python and autograd API have an unbiased flag with opposite semantics, I added a new cwrap plugin (NegatedArguments) that negates the value of the unpacked argument before assignment.
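To make the flag semantics concrete, here is a minimal Python sketch (not the PR's code) of what unbiased means at the Python level and how it maps to the TH flag:

```python
def variance(xs, unbiased=True):
    """Python-level semantics of the new keyword: unbiased=True applies
    Bessel's correction (divide by n - 1); unbiased=False divides by n.
    The TH/THC flag described above is the negation: flag = int(not unbiased)."""
    n = len(xs)
    mean = sum(xs) / n
    ss = sum((x - mean) ** 2 for x in xs)
    return ss / (n - 1) if unbiased else ss / n

xs = [1.0, 2.0, 3.0, 4.0]
print(variance(xs, unbiased=True))   # 5 / 3 ~ 1.6667
print(variance(xs, unbiased=False))  # 5 / 4 = 1.25
```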

Contributor

@apaszke apaszke left a comment


I wonder if it wouldn't be simpler to use before_call instead of adding a new cwrap plugin, but I'm fine with either way.

cc @zdevito - won't this break ATen somehow (it changes the tensor interface)?

This comment was marked as off-topic.

@lantiga
Contributor Author

lantiga commented Jul 11, 2017

I too don't have a strong preference for NegatedArguments vs before_call. I figured it could be something of general use, but if it's really only used here then it may be overkill. Your call, really.
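For readers unfamiliar with cwrap internals, the trade-off being discussed can be sketched in plain Python; the names below are illustrative, not the actual plugin or hook API:

```python
def with_negated(fn, src, dst):
    """Expose a Python-style boolean keyword `src` on top of a backend
    that takes the negated int `dst` -- roughly what a NegatedArguments-
    style transform (or an equivalent before_call hook) would generate."""
    def wrapper(*args, **kwargs):
        if src in kwargs:
            kwargs[dst] = int(not kwargs.pop(src))
        return fn(*args, **kwargs)
    return wrapper

def th_var(xs, flag=0):
    """Stand-in for the TH kernel: flag == 0 computes the unbiased variance."""
    n = len(xs)
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    return ss / (n - 1) if flag == 0 else ss / n

py_var = with_negated(th_var, "unbiased", "flag")
print(py_var([1.0, 2.0, 3.0, 4.0], unbiased=True))   # unbiased=True -> TH flag 0
print(py_var([1.0, 2.0, 3.0, 4.0], unbiased=False))  # unbiased=False -> TH flag 1
```

A reusable transform like this pays off only if several wrapped functions need the negation; a one-off hook is simpler when a single declaration does.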

Contributor

@apaszke apaszke left a comment


Looks good now. Just one question regarding the tests.

This comment was marked as off-topic.

@zdevito
Contributor

zdevito commented Jul 11, 2017

If we keep the 'negate' flag, then we have to patch ATen as well; otherwise the flag will be ignored and the options will be flipped. There are already if_true: and if_false: options in cwrap; can we use or modify those?

@lantiga
Contributor Author

lantiga commented Jul 13, 2017

@zdevito The issue with if_true is that it turns the generated variable (arg_unbiased) into a const char*.

@zdevito
Contributor

zdevito commented Jul 13, 2017

Can cwrap be modified so that it doesn't transform if_true/if_false, with the few places where they are used updated to quote the strings, and then use if_true: false, if_false: true instead of negate? That would be easier to support in the ATen wrapper than another flag that modifies arguments.

@lantiga
Contributor Author

lantiga commented Jul 13, 2017

I'll look into it.

@lantiga
Contributor Author

lantiga commented Jul 13, 2017

@zdevito I extended BoolOption to avoid changing type to const char* unless if_true is a string and removed negated.
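The BoolOption behavior being discussed can be sketched like this (a simplified model of the code generation, not cwrap's actual implementation):

```python
def emit_bool_arg(name, if_true, if_false):
    """Emit the C expression for a bool option: quote the substituted
    values only when they are strings, so that numeric if_true/if_false
    keep the argument's int type instead of becoming const char*."""
    if isinstance(if_true, str):
        fmt = '({}) ? "{}" : "{}"'
    else:
        fmt = '({}) ? {} : {}'
    return fmt.format(name, if_true, if_false)

print(emit_bool_arg("unbiased", 0, 1))         # (unbiased) ? 0 : 1
print(emit_bool_arg("largest", "max", "min"))  # (largest) ? "max" : "min"
```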

@lantiga
Contributor Author

lantiga commented Jul 13, 2017

Build now fails because ATen interprets the 0 and 1 emitted by cwrap for BoolOption arguments as strings.
@zdevito would you please look into the ATen side of it? Thx!

@lantiga
Contributor Author

lantiga commented Jul 13, 2017

I squashed my commits and rebased onto master.
I can now replicate the ATen issues (tests were passing on my branch).

@lantiga
Contributor Author

lantiga commented Jul 13, 2017

First fix for if_true/if_false for ATen:

--- a/torch/lib/ATen/function_wrapper.py
+++ b/torch/lib/ATen/function_wrapper.py
@@ -1,6 +1,12 @@
 import re
 from code_template import CodeTemplate
 
+import sys
+if sys.version_info[0] == 3:
+    string_type = str
+else:
+    string_type = basestring
+
@@ -276,8 +282,12 @@ def create_derived(backend_type_env, declarations):
         if requires_checked_cast(argument):
             return CHECKED_USE.get(argument['type'], '{}_').format(argument['name'])
         elif argument['type'] == 'bool' and 'if_true' in argument:
-            return '({}) ? "{}" : "{}"'.format(argument['name'],
-                                               argument['if_true'], argument['if_false'])
+            if isinstance(argument['if_true'], string_type):
+                tpl = '({}) ? "{}" : "{}"'
+            else:
+                tpl = '({}) ? {} : {}'
+            return tpl.format(argument['name'],
+                       argument['if_true'], argument['if_false'])

@zdevito please let me know how should I go about this (I can open a PR in ATen or maybe you want to just go ahead).

The second error is related to ambiguous code generation given the signature in cwrap. In pytorch/torch/lib/build/ATen/ATen/Type.h, at lines 392 and 394 we get:

virtual Tensor & var_out(const Tensor & self, int64_t dim, bool unbiased, Tensor & destination) ;
virtual Tensor & var_out(const Tensor & self, int64_t dim, bool keepdim, Tensor & destination) ;

which is ambiguous. This is a case where we have two keyword arguments of the same type in Python and either of them can be specified. It looks to me like a corner case in need of a general fix.
What do you think?
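A small sketch of why the two declarations collide: C++ overload resolution only looks at parameter types, so the parameter names unbiased and keepdim do not distinguish them. Modeling the generated signatures as type tuples:

```python
# Each generated var_out declaration reduced to (name, parameter types);
# the keyword names unbiased/keepdim are not part of the C++ signature.
overloads = [
    ("var_out", ("const Tensor &", "int64_t", "bool", "Tensor &")),  # unbiased
    ("var_out", ("const Tensor &", "int64_t", "bool", "Tensor &")),  # keepdim
]
distinct = set(overloads)
print(len(overloads) - len(distinct))  # 1 colliding declaration
```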

@killeent
Contributor

@lantiga I think the ATen workflow we are going for, for PRs that are motivated by PyTorch changes, is the same as for TH/THC etc.: make the changes in PyTorch and we will review them here, and then someone will forward them to the ATen repo.

We had tried to work around the type signature ambiguity in ATen; I can't remember exactly what we decided upon.

@zdevito
Contributor

zdevito commented Jul 13, 2017

@lantiga I fixed the bug that causes the ambiguous overload in ATen. It should filter them correctly now.

@soumith
Collaborator

soumith commented Jul 14, 2017

I updated pytorch master to the latest ATen, so it has Zach's fixes.

@lantiga
Contributor Author

lantiga commented Jul 14, 2017

Thanks @killeent, @zdevito and @soumith! I've pushed my fix to ATen, tests pass now.

@soumith
Collaborator

soumith commented Jul 14, 2017

this is now merged into master. thanks Luca!

@soumith soumith closed this Jul 14, 2017
@lantiga
Contributor Author

lantiga commented Jul 15, 2017

Great @soumith, thanks!
