Call grad_mode.py context managers as decorators #7737
Conversation
grad_mode.py (diff excerpt):

        return False

    def __call__(self, func):
        def decorate_no_grad(*args, **kwargs):
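For context, here is a self-contained sketch of the pattern this hunk implements (the method names match the diff; the rest of the class body is a reconstruction, not the shipped PyTorch source):

```python
import torch

class no_grad(object):
    """Sketch: a grad-mode context manager that can also decorate a function."""

    def __enter__(self):
        self.prev = torch.is_grad_enabled()
        torch.set_grad_enabled(False)

    def __exit__(self, *args):
        torch.set_grad_enabled(self.prev)
        return False

    def __call__(self, func):
        # Reusing `with self` keeps the decorator path and the
        # with-statement path on exactly the same enter/exit logic.
        def decorate_no_grad(*args, **kwargs):
            with self:
                return func(*args, **kwargs)
        return decorate_no_grad
```

With this shape, `@no_grad()` on a function behaves the same as wrapping each call site in `with no_grad():`.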
test/test_autograd.py (outdated diff excerpt):

        return x * 2

    y = doubler_with(x)
    self.assertTrue(y.requires_grad)
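Reading around the elided lines, the test appears to decorate a small helper and assert that gradient tracking survives inside a `no_grad` block. A runnable approximation (the test class name and everything around `doubler_with` are assumptions):

```python
import unittest
import torch

class TestGradModeDecorators(unittest.TestCase):
    def test_enable_grad_decorator(self):
        x = torch.ones(2, 2, requires_grad=True)

        # The decorator re-enables gradient tracking for the wrapped call.
        @torch.enable_grad()
        def doubler_with(x):
            return x * 2

        # Even under an outer no_grad, the decorated function builds a graph.
        with torch.no_grad():
            y = doubler_with(x)
        self.assertTrue(y.requires_grad)

if __name__ == "__main__":
    unittest.main()
```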
@pytorchbot retest this please

@pytorchbot retest this please

@pytorchbot retest this please

@pytorchbot test this please
test failure looks legit:

ah, indeed. going to have to change up the set_grad_enabled class. nice catch

so, question: in order to maintain the current behavior, e.g. using `torch.set_grad_enabled(False)` imperatively, I need to save the input to `set_grad_enabled` as an attribute, then change `__enter__` to set the grad mode to that attribute's value. however, if I do that, then any time the class is instantiated, including when it wraps a function, the underlying grad mode will be changed. this seems like unwanted behavior, and maybe it's best to not use this one as a decorator? open to other suggestions.
I think it's ok to forbid using `set_grad_enabled` as a decorator.
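To make the conflict concrete, here is a toy model (a module-level flag stands in for the real autograd state; none of these names are actual PyTorch internals):

```python
_grad_enabled = True  # stand-in for the global autograd flag

class set_grad_enabled_sketch(object):
    """Toy set_grad_enabled that keeps the imperative call style working."""

    def __init__(self, mode):
        global _grad_enabled
        self.prev = _grad_enabled
        self.mode = mode
        # Applying the mode here is what makes the bare call
        # `set_grad_enabled_sketch(False)` work imperatively...
        _grad_enabled = mode

    def __enter__(self):
        # ...so __enter__ only re-asserts the saved mode.
        global _grad_enabled
        _grad_enabled = self.mode

    def __exit__(self, *args):
        global _grad_enabled
        _grad_enabled = self.prev
        return False

    def __call__(self, func):
        # The catch: getting here required constructing an instance, so the
        # global flag was already flipped at decoration time. That is the
        # unwanted side effect described in the comment above.
        def wrapper(*args, **kwargs):
            with self:
                return func(*args, **kwargs)
        return wrapper
```

Forbidding the decorator form for `set_grad_enabled`, as suggested above, avoids that decoration-time side effect entirely.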
* origin:
  * [Caffe2] Enabling AMD GPU Backend for Caffe2 (pytorch#7566)
  * Call grad_mode.py context managers as decorators (pytorch#7737)
  * catch CPU tensors in checkSameGPU (fixes pytorch#7689) (pytorch#7767)
  * Mark stack as non-executable in NNPACK (pytorch#7752)
  * small fixes in fusion_compiler (pytorch#7776)
  * Run clang-format on c10d (pytorch#7791)
* call grad_mode.py context managers as decorators
* flake fixes
* switch to using context manager in wrapper
* fix set_grad_enabled test
* removed dumb github UI whitespace
* revert set_grad_enabled to normal, update tests
Extends the context manager classes `torch.no_grad`, `torch.enable_grad`, and `torch.set_grad_enabled` to also function as decorators, so that users can wrap functions that will never require a call to `.backward()` downstream. I've modified the docs to reflect this change, and I've also added tests for the new functionality in each mode's respective test in `test_autograd.py`.

I also didn't find a unit test specifically for `torch.enable_grad`. Assuming that's intended, unless my ctrl+f missed it.
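For reference, the decorator form being added looks like this (tensor shapes and function names are illustrative):

```python
import torch

@torch.no_grad()
def doubler(x):
    # Runs with gradient tracking disabled, as if inside `with torch.no_grad():`.
    return x * 2

@torch.enable_grad()
def tracked_doubler(x):
    # Re-enables tracking, even when called under an outer no_grad block.
    return x * 2

x = torch.ones(3, requires_grad=True)

y = doubler(x)
print(y.requires_grad)  # False: no autograd graph was built

with torch.no_grad():
    z = tracked_doubler(x)
print(z.requires_grad)  # True: enable_grad overrides the outer no_grad
```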