Simplify init._calculate_fan_in_and_fan_out #53522
JeanKossaifi wants to merge 7 commits into pytorch:master
Conversation
This uses the shape of the tensor instead of directly indexing it. This is useful when extending PyTorch's tensor class, e.g. for lazy access. Since the `init` sub-module doesn't check for `__torch_function__`, it is not possible to override its functions. Explicitly indexing the tensor forces reconstruction of the full tensor (or explicit access to its elements); simply using the shape avoids that.
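The shape-based computation can be sketched roughly as follows. This is a minimal pure-Python illustration of the idea, not the actual PyTorch implementation: it takes a plain shape tuple where `torch.nn.init._calculate_fan_in_and_fan_out` takes a tensor and reads `tensor.shape`.

```python
# Sketch: derive fan_in/fan_out purely from the shape, never touching elements.
# A lazy tensor subclass therefore never has to materialize its data.
def calculate_fan_in_and_fan_out(shape):
    if len(shape) < 2:
        raise ValueError(
            "Fan in and fan out can not be computed for fewer than 2 dimensions"
        )
    num_input_fmaps = shape[1]   # e.g. a conv layer's in_channels
    num_output_fmaps = shape[0]  # e.g. a conv layer's out_channels
    receptive_field_size = 1
    for s in shape[2:]:          # product of the trailing (spatial) dimensions
        receptive_field_size *= s
    fan_in = num_input_fmaps * receptive_field_size
    fan_out = num_output_fmaps * receptive_field_size
    return fan_in, fan_out
```

For a Conv2d weight of shape `(64, 3, 3, 3)` this gives `fan_in = 3 * 9 = 27` and `fan_out = 64 * 9 = 576`; only the shape is ever consulted.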
Hi @JeanKossaifi! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with
If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!
💊 CI failures summary and remediations. As of commit 333e3d6 (more details on the Dr. CI page): ✅ None of the CI failures appear to be your fault 💚
🚧 1 fixed upstream failure: these were probably caused by upstream breakages that were already fixed.
Please rebase on the
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks! |
math.prod was only added in Python 3.8
Hi @JeanKossaifi! Is there an issue related to this PR? |
Hi @jbschlosser! No, there isn't, I directly made a PR, but I can also open one if you prefer. |
Yes please :) Please also update the description for this PR to include |
Thanks @jbschlosser, done: issue #53540. |
(i) math.prod is not always available; (ii) reduce is not supported by TorchScript; (iii) for such a simple case, a `prod` function is not needed
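The trade-off behind that commit message can be illustrated with the three candidate ways of taking the product over the trailing dimensions (a sketch for comparison only; `dims` is an example tuple, and the `math.prod` line assumes Python 3.8 or newer):

```python
import math
from functools import reduce
from operator import mul

dims = (3, 3)  # e.g. the trailing (spatial) dimensions of a conv weight

# (i) math.prod: the obvious choice, but only added in Python 3.8
prod_a = math.prod(dims)

# (ii) functools.reduce: works on older Pythons, but not supported by TorchScript
prod_b = reduce(mul, dims, 1)

# (iii) an explicit loop: portable, TorchScript-friendly, and simple enough here
receptive_field_size = 1
for s in dims:
    receptive_field_size *= s
```

All three produce the same result; the explicit loop wins because it has no version or TorchScript constraints and the case is trivial.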
jbschlosser
left a comment
Thanks for tackling this! A couple of tiny things below. Let's hope for no more unforeseen CI failures :)
Thanks @jbschlosser, I've made the changes. There is still a failure for one of the tests, but it seems unrelated. |
facebook-github-bot
left a comment
@jbschlosser has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@jbschlosser merged this pull request in 70a4342. |
Summary: This uses the shape of the tensor instead of directly indexing it. This is useful when extending PyTorch's tensor class, e.g. for lazy access. Since the `init` sub-module doesn't check for `__torch_function__`, it is not possible to override its functions. Explicitly indexing the tensor forces reconstruction of the full tensor (or explicit access to its elements); simply using the shape avoids that.

Fixes pytorch#53540
Pull Request resolved: pytorch#53522
Reviewed By: anjali411
Differential Revision: D26947794
Pulled By: jbschlosser
fbshipit-source-id: 80cd65efed16383f21363cee2eb404c9bc05971c