
Move resize / zero logic for _thnn_conv_depthwise2d from codegen to native code.#37957

Closed
gchanan wants to merge 3 commits into gh/gchanan/269/base from gh/gchanan/269/head

Conversation

@gchanan
Contributor

@gchanan gchanan commented May 6, 2020

Stack from ghstack:

Differential Revision: D21433212

@dr-ci

dr-ci Bot commented May 6, 2020

💊 CI failures summary and remediations

As of commit d376a6a (more details on the Dr. CI page):



🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

May 07 16:30:09 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
May 07 16:30:09 processing existing schema:  aten::_has_compatible_shallow_copy_type(Tensor self, Tensor from) -> (bool) 
May 07 16:30:09 processing existing schema:  aten::_unsafe_view(Tensor self, int[] size) -> (Tensor) 
May 07 16:30:09 processing existing schema:  aten::unsqueeze(Tensor(a) self, int dim) -> (Tensor(a)) 
May 07 16:30:09 processing existing schema:  aten::unsqueeze_(Tensor(a!) self, int dim) -> (Tensor(a!)) 
May 07 16:30:09 processing existing schema:  aten::vander(Tensor x, int? N=None, bool increasing=False) -> (Tensor) 
May 07 16:30:09 processing existing schema:  aten::var(Tensor self, bool unbiased=True) -> (Tensor) 
May 07 16:30:09 processing existing schema:  aten::var.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 
May 07 16:30:09 processing existing schema:  aten::var.out(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 
May 07 16:30:09 processing existing schema:  aten::var.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 
May 07 16:30:09 processing existing schema:  aten::var.names_out(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 
May 07 16:30:09 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
May 07 16:30:09  
May 07 16:30:09 Broken ops: [ 
May 07 16:30:09 	aten::quantized_instance_norm(Tensor input, Tensor? weight, Tensor? bias, float eps, float output_scale, int output_zero_point) -> (Tensor) 
May 07 16:30:09 	quantized::group_norm(Tensor input, int num_groups, Tensor weight, Tensor bias, float eps, float output_scale, int output_zero_point) -> (Tensor) 
May 07 16:30:09 	quantized::instance_norm(Tensor input, Tensor weight, Tensor bias, float eps, float output_scale, int output_zero_point) -> (Tensor) 
May 07 16:30:09 	quantized::conv_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 
May 07 16:30:09 	aten::quantized_group_norm(Tensor input, int num_groups, Tensor? weight, Tensor? bias, float eps, float output_scale, int output_zero_point) -> (Tensor) 
May 07 16:30:09 ] 
May 07 16:30:09 + cleanup 
May 07 16:30:09 + retcode=1 

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch (expand for instructions)

Since your merge base is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


Extra GitHub checks: 1 failed


This comment was automatically generated by Dr. CI. Follow this link to opt-out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker.

See how this bot performed.

This comment has been revised 15 times.

Comment thread: aten/src/ATen/nn_parse.py

  # grad_weight and grad_bias need to be resized and zeroed
- if arg['name'] == 'grad_weight' and base['name'] != '_thnn_conv2d':
+ if arg['name'] == 'grad_weight' and base['name'] != '_thnn_conv2d' and base['name'] != '_thnn_conv_depthwise2d':

reupping earlier note - personally I'd rather have a list of names that's getting checked here, but not a blocker if you either like it better this way or just want to blast ahead

Contributor Author

yep, good suggestion. I knew ahead of time (from checking the generated code) that there were only two cases and I was going to remove them in this stack so it didn't seem worth it.
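The reviewer's suggestion above could be sketched roughly as follows. This is a hypothetical illustration only (the list and function names are invented; the actual check lives inline in nn_parse.py's codegen logic, operating on the same `arg`/`base` dicts shown in the diff):

```python
# Hypothetical sketch: collect the exempted base names in one list
# instead of chaining `!=` comparisons in the condition.
RESIZE_ZERO_EXEMPT_BASES = ['_thnn_conv2d', '_thnn_conv_depthwise2d']

def needs_resize_and_zero(arg, base):
    # grad_weight is resized and zeroed unless its base op is exempt.
    return arg['name'] == 'grad_weight' and base['name'] not in RESIZE_ZERO_EXEMPT_BASES
```

Adding a third exempt op would then be a one-element change to the list rather than another `and ... != ...` clause appended to the condition.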

@facebook-github-bot

@gchanan merged this pull request in dc25190.

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
…ative code. (pytorch#37957)

Summary: Pull Request resolved: pytorch#37957

Test Plan: Imported from OSS

Differential Revision: D21433212

Pulled By: gchanan

fbshipit-source-id: fb431d5cf06afe2bb87fa2d73e15046f9a8d044d