
Simplify init._calculate_fan_in_and_fan_out#53522

Closed
JeanKossaifi wants to merge 7 commits into pytorch:master from JeanKossaifi:patch-1

Conversation

@JeanKossaifi
Contributor

@JeanKossaifi commented Mar 8, 2021

This uses the shape of the tensor instead of directly indexing it. This is useful when extending PyTorch's tensor class, e.g. for lazy access. Since the `init` sub-module doesn't check for `torch_function`, it is not possible to override its functions. Explicitly indexing the tensor forces a call to `tensor()` and reconstructs the full tensor / explicitly accesses the elements. Simply using the shape avoids that.

Fixes #53540

@facebook-github-bot
Contributor

Hi @JeanKossaifi!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot
Contributor

facebook-github-bot commented Mar 8, 2021

💊 CI failures summary and remediations

As of commit 333e3d6 (more details on the Dr. CI page):


None of the CI failures appear to be your fault 💚



🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch:

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@facebook-github-bot
Contributor

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks!

@jbschlosser
Contributor

Hi @JeanKossaifi ! Is there an issue related to this PR?

@JeanKossaifi
Contributor Author

JeanKossaifi commented Mar 8, 2021

Hi @jbschlosser ! No, there isn't, I directly made a PR, but I can also open one if you prefer.

@jbschlosser
Contributor

> Hi @jbschlosser ! No, there isn't, I directly made a PR, but I can also open one if you prefer.

Yes please :) Please also update the description for this PR to include Fixes #{issue number} once that's been done.

@JeanKossaifi
Contributor Author

Thanks @jbschlosser, done: issue #53540.

i) math.prod is not always available
ii) reduce not supported by TorchScript
iii) for such a simple case, a dedicated `prod` function is not needed
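The three alternatives named in the commit message above, sketched side by side. This is a minimal illustration, assuming the product in question runs over the trailing (spatial) dimensions of a hypothetical conv weight shape:

```python
import operator
from functools import reduce

shape = (8, 3, 5, 5)  # hypothetical conv weight shape

# (i) math.prod exists only on Python >= 3.8, so it is not used here.
# (ii) functools.reduce works, but is not supported by TorchScript.
rf_reduce = reduce(operator.mul, shape[2:], 1)
# (iii) a plain loop: trivially scriptable and needs no helper at all.
rf_loop = 1
for s in shape[2:]:
    rf_loop *= s

assert rf_reduce == rf_loop == 25
```

All three produce the same receptive-field size; the loop was kept because it satisfies every constraint at once.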
Contributor

@jbschlosser left a comment


Thanks for tackling this! Couple tiny things below. Let's hope for no more unforeseen CI failures :)

Review comment threads on torch/nn/init.py (one outdated).
@JeanKossaifi
Contributor Author

Thanks @jbschlosser, I've made the changes. There is still a failure in one of the tests, but it seems unrelated.

@mrshenli added the module: nn (Related to torch.nn) and triaged (looked at by a team member, and triaged and prioritized into an appropriate module) labels Mar 10, 2021
Contributor

@facebook-github-bot left a comment


@jbschlosser has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@jbschlosser self-requested a review March 10, 2021 17:08
@facebook-github-bot
Contributor

@jbschlosser merged this pull request in 70a4342.

xsacha pushed a commit to xsacha/pytorch that referenced this pull request Mar 31, 2021
Summary:
This uses the shape of the tensor instead of directly indexing it. This is useful when extending PyTorch's tensor class, e.g. for lazy access. Since the `init` sub-module doesn't check for `torch_function`, it is not possible to override its functions. Explicitly indexing the tensor forces a call to `tensor()` and reconstructs the full tensor / explicitly accesses the elements. Simply using the shape avoids that.

Fixes pytorch#53540

Pull Request resolved: pytorch#53522

Reviewed By: anjali411

Differential Revision: D26947794

Pulled By: jbschlosser

fbshipit-source-id: 80cd65efed16383f21363cee2eb404c9bc05971c
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026

Labels

cla signed, Merged, module: nn (Related to torch.nn), open source, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)


Development

Successfully merging this pull request may close these issues.

init._calculate_fan_in_and_fan_out needlessly indexing the weight tensor instead of using its shape.

5 participants