[composite compliance] frobenius norm #78515

Closed

kshitij12345 wants to merge 8 commits into pytorch:master from kshitij12345:fix/composite-compliance/fro_norm


Conversation

kshitij12345 (Collaborator) commented May 31, 2022

Ref: #69991

facebook-github-bot (Contributor) commented May 31, 2022

✅ No Failures (0 Pending)

As of commit 04b1541 (more details on the Dr. CI page):

💚 💚 Looks good so far! There are no failures yet. 💚 💚

This comment was automatically generated by Dr. CI.
Please report bugs/suggestions to the (internal) Dr. CI Users group.

kshitij12345 marked this pull request as ready for review June 17, 2022 10:23
kshitij12345 requested review from zou3519 and removed request for IvanYashchuk, mruberry, ngimel and nikitaved June 17, 2022 10:23
lezcano (Collaborator) left a comment


Why is frobenius_norm_impl necessary? Why not implement the logic directly in frobenius_norm? That would fix the current inconsistency of doing a cast to the real numbers in the non-out version, but not doing it in the out version. Now, I don't even know why this cast is necessary in the first place, but that's a story for another day.

kshitij12345 (Collaborator, Author) commented Jun 17, 2022

This is not about the cast; it is about creating an empty({0}) tensor and resizing it down the line.

Previously, the out variant performed the computation in frobenius_norm_impl, which returns a new Tensor, and then copied it into the resized out (resizing a CCT is not valid).

frobenius_norm_impl pulls out the common logic of actually computing the norm and returns a Tensor, which the

  • functional variant returns after casting
  • out variant copies into out (after resizing)

as sketched below.

Feels like this PR needs more comments 😅
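
For illustration, here is a rough sketch of the pattern described above; signatures and names are assumed from this thread, the actual norm computation is elided, and this is not the exact code from the PR:

// Hypothetical sketch; frobenius_norm_impl holds the shared computation.
static Tensor frobenius_norm_impl(const Tensor& self, IntArrayRef dim, bool keepdim) {
  // ... compute the norm into a freshly allocated Tensor ...
}

Tensor frobenius_norm(const Tensor& self, IntArrayRef dim, bool keepdim) {
  // Functional variant: returns the result, casting as needed (cast elided here).
  return frobenius_norm_impl(self, dim, keepdim);
}

Tensor& frobenius_norm_out(const Tensor& self, IntArrayRef dim, bool keepdim, Tensor& result) {
  // Out variant: resizes `result` and copies into it. Resizing a user-provided
  // out tensor is fine; resizing a CCT, as the old empty({0}) path did, is not.
  auto output = frobenius_norm_impl(self, dim, keepdim);
  result.resize_(output.sizes());
  return result.copy_(output);
}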

lezcano (Collaborator) commented Jun 17, 2022

I think the comment above still applies.

  1. I think that all the logic should just be in frobenius_norm, and then frobenius_norm_out should call into it. Otherwise you may have a divergence between the out-of-place and the out function, as we do now.
  2. I do not see why the cast to the real numbers is necessary at that point, as the tensor is already real by then; but then again, no need to fix this now.

zou3519 (Contributor) commented Jun 17, 2022

@lezcano, what is your proposal here? Is it to rewrite the code like the following?

Tensor frobenius_norm(const Tensor& self, IntArrayRef dim, bool keepdim) {
  ...
}

Tensor& frobenius_norm_out(const Tensor& self,
    IntArrayRef dim,
    bool keepdim,
    Tensor& result) {
  auto output = frobenius_norm(self, dim, keepdim);
  result.resize_(output.sizes());
  return result.copy_(output);
}

lezcano (Collaborator) commented Jun 17, 2022

Yup! This is the pattern we use in many other composite operations.

kshitij12345 (Collaborator, Author) commented

@zou3519 @lezcano PTAL :)

zou3519 (Contributor) left a comment


Thanks! I am not sure what the cast @lezcano mentioned is doing, but it is orthogonal; we should merge this first and file a separate issue for it.

lezcano (Collaborator) left a comment


Cool!

kshitij12345 (Collaborator, Author) commented Jun 21, 2022

Requiring the cast was a by-product of how frobenius_norm_impl was implemented. With the current approach it is not necessary, so I don't think we need to file an issue for it now that we follow the approach suggested by @lezcano. We also have asserts to verify that we don't change the expected layout and output dtype (sketched below).
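
For context, a minimal sketch of what such assertions could look like in the out variant; the exact checks and their placement in the PR may differ:

Tensor& frobenius_norm_out(const Tensor& self,
    IntArrayRef dim,
    bool keepdim,
    Tensor& result) {
  auto output = frobenius_norm(self, dim, keepdim);
  // Illustrative checks: the functional result should already match the
  // dtype and layout the out variant promises, so no extra cast is needed.
  TORCH_INTERNAL_ASSERT(output.scalar_type() == result.scalar_type());
  TORCH_INTERNAL_ASSERT(output.layout() == result.layout());
  result.resize_(output.sizes());
  return result.copy_(output);
}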

kshitij12345 (Collaborator, Author) commented

@pytorchbot merge

pytorchmergebot (Collaborator) commented

@pytorchbot successfully started a merge job. Check the current status here

pytorchmergebot (Collaborator) commented

@kshitij12345 your PR has been successfully merged.

github-actions (Contributor) commented

Hey @kshitij12345.
You've committed this PR, but it does not have both a 'release notes: ...' and a 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc.) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc.). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...' labels.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request Jun 22, 2022
Summary:
Ref: #69991

Pull Request resolved: #78515
Approved by: https://github.com/zou3519, https://github.com/Lezcano

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/018d071a4824ff1fa6d62a2b15c9cdfd01f626f7

Reviewed By: atalman

Differential Revision: D37333183

Pulled By: atalman

fbshipit-source-id: 5a55708172a300c84ee0b7c7b31e2a0c6b94ff4a
miladm pushed a commit to miladm/pytorch that referenced this pull request Jun 27, 2022
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 25, 2026