[primTorch] Implement batch, group, and instance norm references #81191
rdspring1 wants to merge 34 commits into pytorch:master
Conversation
🔗 Helpful links
❌ 5 New Failures as of commit 393c6c4 (more details on the Dr. CI page)
🕵️ 5 new failures recognized by patterns. The following CI failures do not appear to be due to upstream breakages.
Use prim.squeeze which supports axis list argument
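The "axis list argument" means the squeeze primitive removes several size-1 dimensions in one call rather than one dimension at a time. A minimal sketch of that behavior, assuming a recent PyTorch where `torch.squeeze` also accepts a tuple of dims (the tuple form is newer than the single-int form):

```python
import torch

# Sketch (not the PR's code): prims.squeeze takes a list of dimensions to
# remove; torch.squeeze in recent PyTorch releases similarly accepts a
# tuple of dims, so both size-1 axes can be dropped in a single call.
t = torch.zeros(1, 3, 1, 5)
out = torch.squeeze(t, (0, 2))
assert out.shape == (3, 5)
```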
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/81191
Note: Links to docs will display an error until the docs builds have been completed.
❌ 7 Failures, 1 Pending as of commit ac0598c. The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
return flip(a, (0,))
def _repeat_if_defined(a: Optional[Tensor], sizes: int):
Add a comment for this (private) function
double momentum, double eps) {
using accscalar_t = at::acc_type<scalar_t, false>;
TORCH_CHECK(input.dim() >= 1,
Since `batch_norm_cpu_update_stats_template` is not a user-facing function, this should be an internal assertion -- maybe the check could occur earlier?
auto out = input.clone();
if (weight.defined()) out = out * weight[0];
if (bias.defined()) out = out + bias[0];
if (weight.defined() && weight.numel() > 0) {
This will need a comment explaining the semantics when weight and/or bias have no elements.
auto num_features = self.sizes()[1];
auto options = self.options().dtype(
    at::toAccumulateType(self.scalar_type(), /*is_cuda=*/self.is_cuda()));
auto save_mean = at::empty({num_features}, options);
Is it really OK to return tensors with random values here?
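The concern is that `at::empty` allocates without initializing, so if nothing later writes to `save_mean`, the caller observes arbitrary memory. A Python sketch of the same behavior and one conservative alternative (zero-initialization is an assumption here, not the PR's actual fix):

```python
import torch

num_features = 4
acc_dtype = torch.float32  # stand-in for at::toAccumulateType(...)

# torch.empty, like at::empty, allocates WITHOUT initializing: the tensor
# holds whatever bytes happened to be in that memory until something
# writes to it -- the reviewer's concern about "random values".
save_mean = torch.empty(num_features, dtype=acc_dtype)

# One conservative alternative: initialize explicitly so untouched stats
# are at least deterministic.
save_mean = torch.zeros(num_features, dtype=acc_dtype)
assert torch.equal(save_mean, torch.zeros(num_features))
```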
This PR adds nvFuser's implementation for batch_norm as there's no reference yet (#81191) and no in-place copy support (#84545). Pull Request resolved: #85562 Approved by: https://github.com/kevinstephano, https://github.com/ngimel
/easycla As part of the transition to the PyTorch Foundation, this project now requires contributions to be covered under the new CLA. See #85559 for additional details. This comment will trigger a new check of this PR. If you are already covered, you will simply see a new "EasyCLA" check that passes. If you are not covered, a bot will leave a new comment with a link to sign.
((S, S, S), {'training': True, 'momentum': 0.5, 'eps': 0.6}),
((3, 2, 4), {'training': False, 'momentum': -1.2}),
((3, 1), {'training': True, 'momentum': 0.0}),
((0,), {'training': True}),
Should these be added to instance and groupnorm sample inputs, too?
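A sketch of how these `(shape, kwargs)` sample inputs map onto `torch.nn.functional.batch_norm` calls. `S = 5` is a stand-in for the test suite's size constant, and the 1-D `((0,), ...)` case is omitted here since `F.batch_norm` expects at least an `(N, C)` input:

```python
import torch
import torch.nn.functional as F

S = 5  # stand-in for the test-suite size constant
cases = [
    ((S, S, S), {"training": True, "momentum": 0.5, "eps": 0.6}),
    ((3, 2, 4), {"training": False, "momentum": -1.2}),
    ((3, 1), {"training": True, "momentum": 0.0}),
]
for shape, kwargs in cases:
    x = torch.randn(shape)
    num_features = shape[1]  # channel dim supplies the stats/affine size
    running_mean = torch.zeros(num_features)
    running_var = torch.ones(num_features)
    out = F.batch_norm(x, running_mean, running_var, **kwargs)
    assert out.shape == x.shape
```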
mruberry left a comment:
Updates from offline review:
- let's separate the changes to existing operators and ensure those are tested properly
- also review #85960 and see if that should be part of the PR changing the ATen operations
- after the C++ changes to existing ATen operators, let's revisit the references
Add group norm reference. Split from #81191. Pull Request resolved: #87054 Approved by: https://github.com/mruberry
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as
Add References:
Depends on:
cc @ezyang @mruberry @ngimel @lezcano @fdrocha @peterbell10