
Move resize logic for bmm from codegen to native code. #37955

Closed

gchanan wants to merge 1 commit into gh/gchanan/267/base from gh/gchanan/267/head

Conversation

@gchanan (Contributor) commented May 6, 2020

Stack from ghstack:

Differential Revision: D21433213

dr-ci Bot commented May 6, 2020

💊 Build failures summary and remediations

As of commit ad5df71 (more details on the Dr. CI page):



🕵️ 1 new failure recognized by patterns

The following build failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Run tests"

May 06 20:30:39 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
May 06 20:30:39 processing existing schema:  aten::var.out(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 
May 06 20:30:39 processing existing schema:  aten::var.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor) 
May 06 20:30:39 processing existing schema:  aten::var.names_out(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!)) 
May 06 20:30:39 processing existing schema:  aten::var_mean(Tensor self, bool unbiased=True) -> (Tensor, Tensor) 
May 06 20:30:39 processing existing schema:  aten::var_mean.dim(Tensor self, int[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 
May 06 20:30:39 processing existing schema:  aten::var_mean.names_dim(Tensor self, str[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) 
May 06 20:30:39 processing existing schema:  aten::view_as(Tensor self, Tensor other) -> (Tensor) 
May 06 20:30:39 processing existing schema:  aten::where.self(Tensor condition, Tensor self, Tensor other) -> (Tensor) 
May 06 20:30:39 processing existing schema:  aten::where(Tensor condition) -> (Tensor[]) 
May 06 20:30:39 processing existing schema:  aten::_s_where(Tensor condition, Tensor self, Tensor other) -> (Tensor) 
May 06 20:30:39 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
May 06 20:30:39  
May 06 20:30:39 Broken ops: [ 
May 06 20:30:39 	quantized::conv_unpack(__torch__.torch.classes.quantized.Conv2dPackedParamsBase packed_weights) -> (Tensor unpacked_weights, Tensor? B_origin) 
May 06 20:30:39 ] 
May 06 20:30:39 =================== sccache compilation log =================== 
May 06 20:30:39 + cleanup 
May 06 20:30:39 + retcode=1 
May 06 20:30:39 + set +x 
May 06 20:30:39 =========== If your build fails, please take a look at the log above for possible reasons =========== 
May 06 20:30:39 Compile requests                 0 

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

Since your merge base is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker.

See how this bot performed.

This comment has been revised 2 times.

A reviewer commented on the removed CUDA dispatch code:

    Tensor bmm_cuda(const Tensor& self, const Tensor& mat2) {
      return legacy::cuda::_th_bmm(self, mat2);
    }

Curious: does this removal mean there's now a generated legacy::cuda::_th_bmm that nobody calls?

gchanan (Contributor, Author) replied:

Yes, there's no way to control that separately. I've considered adding another annotation to reduce the amount of generated code, but haven't done it yet.
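For concreteness, the "another annotation" mentioned above is purely hypothetical; if it were added, it might look something like the fragment below in native_functions.yaml. The `generate_legacy_wrapper` key is invented here for illustration and does not exist in the codegen.

```yaml
# Hypothetical fragment: the generate_legacy_wrapper key below is
# invented for illustration and is NOT a real codegen annotation.
- func: bmm(Tensor self, Tensor mat2) -> Tensor
  generate_legacy_wrapper: False  # would suppress the now-unused legacy::cuda::_th_bmm
```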

@facebook-github-bot (Contributor):
@gchanan merged this pull request in b2cc992.

xuezhou1998 pushed a commit to xuezhou1998/new_pytorch that referenced this pull request May 9, 2020
@facebook-github-bot facebook-github-bot deleted the gh/gchanan/267/head branch May 11, 2020 14:19
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary: Pull Request resolved: pytorch#37955

Test Plan: Imported from OSS

Differential Revision: D21433213

Pulled By: gchanan

fbshipit-source-id: 421c566471279b53348bc77e738af13a1f3e1f9e
