Move resize logic for bmm from codegen to native code. (#37955)
gchanan wants to merge 1 commit into gh/gchanan/267/base
Conversation
[ghstack-poisoned]
💊 Build failures summary and remediations, as of commit ad5df71 (more details on the Dr. CI page):
🕵️ 1 new failure recognized by patterns. The following build failures do not appear to be due to upstream breakages:
```diff
-Tensor bmm_cuda(const Tensor& self, const Tensor& mat2) {
-  return legacy::cuda::_th_bmm(self, mat2);
-}
```
Curious: does this removal mean there's now a generated `legacy::cuda::_th_bmm` that nobody calls?
Yes, there's no way to control that separately. I've considered adding another annotation in order to reduce the amount of generated code, but haven't done it yet.
ghstack-source-id: e37d407
Pull Request resolved: pytorch/pytorch#37955

Summary: Pull Request resolved: pytorch#37955
Test Plan: Imported from OSS
Differential Revision: D21433213
Pulled By: gchanan
fbshipit-source-id: 421c566471279b53348bc77e738af13a1f3e1f9e
Stack from ghstack:
Differential Revision: D21433213