
hardswish: add backwards pass test #36420

Closed

vkuzo wants to merge 3 commits into gh/vkuzo/29/base from gh/vkuzo/29/head

Conversation

Contributor

@vkuzo vkuzo commented Apr 11, 2020

Stack from ghstack:

Summary:

Adds a unit test for hardswish backward pass

Test Plan:

Unit test passes on cpu and cuda

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: D20994100
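
For context (an editor's note, not part of the PR): hardswish(x) = x * relu6(x + 3) / 6, so the backward pass under test should produce 0 for x < -3, (2x + 3) / 6 on [-3, 3], and 1 for x > 3. Below is a minimal sketch, assuming only that standard definition, that checks autograd's hardswish gradient against the piecewise derivative; the reference helper is hypothetical and only for illustration.

```python
# Editor's sketch (not from this PR): compare autograd's hardswish gradient
# against the piecewise-analytic derivative of x * relu6(x + 3) / 6.
import torch
import torch.nn.functional as F

def hardswish_grad_reference(x):
    # 0 for x < -3, (2x + 3) / 6 on [-3, 3], 1 for x > 3
    return torch.where(x < -3, torch.zeros_like(x),
                       torch.where(x > 3, torch.ones_like(x), (2 * x + 3) / 6))

# Points chosen to avoid x = +/-3 exactly, where the kink makes the
# subgradient convention implementation-defined.
x = torch.linspace(-5.0, 5.0, steps=9, requires_grad=True)
F.hardswish(x).sum().backward()
assert torch.allclose(x.grad, hardswish_grad_reference(x.detach()))
```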

vkuzo added a commit that referenced this pull request Apr 11, 2020
ghstack-source-id: bacb78c
Pull Request resolved: #36420
@vkuzo vkuzo requested a review from raghuramank100 April 11, 2020 00:53
@vkuzo vkuzo self-assigned this Apr 11, 2020
@dr-ci
dr-ci Bot commented Apr 11, 2020

💊 Build failures summary and remediations

As of commit 6c033d5 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following build failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_xla_linux_xenial_py3_6_clang7_build (1/2)

Step: "Build" (full log | pattern match details | 🔁 rerun)

Apr 14 16:42:32 Failed to generate ATEN bindings: ['/var/lib/jenkins/workspace/xla/scripts/generate_code.sh']
Apr 14 16:42:32 AtenXlaType function missed override: Tensor normal(const Tensor& mean, const Tensor& std, c10::optional<Generator> generator); // normal(Tensor,Tensor,c10::optional<Generator>)->Tensor 
Apr 14 16:42:32 AtenXlaType function missed override: Tensor& normal_(Tensor& self, double mean, double std, c10::optional<Generator> generator); // normal_(Tensor,double,double,c10::optional<Generator>)->Tensor 
Apr 14 16:42:32 AtenXlaType function missed override: Tensor rrelu_with_noise(const Tensor& self, const Tensor& noise, Scalar lower, Scalar upper, bool training, c10::optional<Generator> generator); // rrelu_with_noise(Tensor,Tensor,Scalar,Scalar,bool,c10::optional<Generator>)->Tensor 
Apr 14 16:42:32 AtenXlaType function missed override: Tensor& uniform_(Tensor& self, double from, double to, c10::optional<Generator> generator); // uniform_(Tensor,double,double,c10::optional<Generator>)->Tensor 
Apr 14 16:42:32 Traceback (most recent call last): 
Apr 14 16:42:32   File "/var/lib/jenkins/workspace/xla/scripts/gen.py", line 1083, in <module> 
Apr 14 16:42:32     generate(args) 
Apr 14 16:42:32   File "/var/lib/jenkins/workspace/xla/scripts/gen.py", line 1053, in generate 
Apr 14 16:42:32     assert check_overrides(overrides, overridden) 
Apr 14 16:42:32 AssertionError 
Apr 14 16:42:32 Failed to generate ATEN bindings: ['/var/lib/jenkins/workspace/xla/scripts/generate_code.sh'] 
Apr 14 16:42:32 Building torch_xla version: 1.6 
Apr 14 16:42:32 =================== sccache compilation log =================== 
Apr 14 16:42:32 + cleanup 
Apr 14 16:42:32 + retcode=1 
Apr 14 16:42:32 + set +x 
Apr 14 16:42:32 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Apr 14 16:42:32 Compile requests              5472 
Apr 14 16:42:32 Compile requests executed     3296 
Apr 14 16:42:32 Cache hits                    3284 
Apr 14 16:42:32 Cache misses                     0 

See CircleCI build pytorch_linux_backward_compatibility_check_test (2/2)

Step: "Test" (full log | pattern match details | 🔁 rerun)

Apr 14 16:56:55 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Apr 14 16:56:55 processing existing schema:  aten::can_cast(int from, int to) -> (bool) 
Apr 14 16:56:55 processing existing schema:  aten::_thnn_fused_lstm_cell(Tensor input_gates, Tensor hidden_gates, Tensor cx, Tensor? input_bias=None, Tensor? hidden_bias=None) -> (Tensor, Tensor, Tensor) 
Apr 14 16:56:55 processing existing schema:  aten::_thnn_fused_lstm_cell_backward(Tensor? grad_hy, Tensor? grad_cy, Tensor cx, Tensor cy, Tensor workspace, bool has_bias) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 
Apr 14 16:56:55 processing existing schema:  aten::_thnn_differentiable_lstm_cell_backward(Tensor? grad_hy, Tensor? grad_cy, Tensor input_gates, Tensor hidden_gates, Tensor? input_bias, Tensor? hidden_bias, Tensor cx, Tensor cy) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 
Apr 14 16:56:55 processing existing schema:  aten::_thnn_fused_gru_cell(Tensor input_gates, Tensor hidden_gates, Tensor hx, Tensor? input_bias=None, Tensor? hidden_bias=None) -> (Tensor, Tensor) 
Apr 14 16:56:55 processing existing schema:  aten::_thnn_fused_gru_cell_backward(Tensor grad_hy, Tensor workspace, bool has_bias) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 
Apr 14 16:56:55 processing existing schema:  aten::rnn_tanh.input(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 
Apr 14 16:56:55 processing existing schema:  aten::rnn_tanh.data(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 
Apr 14 16:56:55 processing existing schema:  aten::rnn_relu.input(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 
Apr 14 16:56:55 processing existing schema:  aten::rnn_relu.data(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 
Apr 14 16:56:55 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
Apr 14 16:56:55  
Apr 14 16:56:55 Broken ops: [ 
Apr 14 16:56:55 	aten::sizes(Tensor self) -> (int) 
Apr 14 16:56:55 	aten::strides(Tensor self) -> (int) 
Apr 14 16:56:55 ] 
Apr 14 16:56:55 + cleanup 
Apr 14 16:56:55 + retcode=1 
Apr 14 16:56:55 + set +x 
Apr 14 16:56:55 =================== sccache compilation log =================== 
Apr 14 16:56:55 =========== If your build fails, please take a look at the log above for possible reasons =========== 

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions on the GitHub issue tracker.

This comment has been revised 10 times.

Comment thread: test/test_nn.py

def test_hardswish_grad(self, device):
    inputs = (torch.randn(4, 16, 16, device=device) - 0.5) * 10
    inputs.requires_grad = True
    self.assertTrue(gradcheck(F.hardswish, (inputs,)))
Contributor

How does this work? Where is the reference output provided here?

Contributor Author

This is a nice util function: it checks that the gradient computed numerically by perturbing the input matches the gradient defined by the backward pass. It's used to test other backward passes in this file.

Its signature,

def gradcheck(func, inputs, eps=1e-6, atol=1e-5, rtol=1e-3, raise_exception=True, check_sparse_nnz=False, nondet_tol=0.0)

has more context.
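
For readers unfamiliar with it, here is a minimal, self-contained sketch of calling gradcheck directly (an editor's example modeled on the test above, not the PR's code; double precision keeps the numerical Jacobian stable):

```python
# Editor's sketch: gradcheck perturbs each input element by eps, builds a
# finite-difference Jacobian, and compares it to the Jacobian computed
# analytically by the backward pass.
import torch
import torch.nn.functional as F
from torch.autograd import gradcheck

inputs = (torch.randn(4, 16, 16, dtype=torch.double) - 0.5) * 10
inputs.requires_grad = True
assert gradcheck(F.hardswish, (inputs,))
```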

vkuzo added a commit that referenced this pull request Apr 14, 2020
ghstack-source-id: 3228c04
Pull Request resolved: #36420
vkuzo added a commit that referenced this pull request Apr 14, 2020
ghstack-source-id: 841bae9
Pull Request resolved: #36420
@facebook-github-bot
Contributor

This pull request has been merged in 3c8921b.

@facebook-github-bot facebook-github-bot deleted the gh/vkuzo/29/head branch April 19, 2020 14:17
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Pull Request resolved: pytorch#36420

Adds a unit test for hardswish backward pass

Test Plan:
Unit test passes on cpu and cuda

Imported from OSS

Differential Revision: D20994100

fbshipit-source-id: 579df709cc2d92fce3b9a0eeb6faeb9fe8d2f641