OpInfo for nn.functional.batch_norm #63218

krshrimali wants to merge 27 commits into pytorch:master from krshrimali:opinfo/high_priority/nn/functional/batch_norm.
💊 CI failures summary and remediations

As of commit 84998cc (more details on the Dr. CI page):

🕵️ 1 new failure recognized by patterns. The following CI failures do not appear to be due to upstream breakages:
…ity/nn/functional/batch_norm
In that case, wouldn't …
I don't think that will work either, since the functional variant doesn't accept an arg of `cudnn_enabled`:

```
ERROR: test_variant_consistency_eager_batch_norm_cpu_float32 (__main__.TestCommonCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/krshrimali/Documents/Projects/Quansight/pytorch/torch/testing/_internal/common_device_type.py", line 373, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/krshrimali/Documents/Projects/Quansight/pytorch/torch/testing/_internal/common_device_type.py", line 780, in test_wrapper
    return test(*args, **kwargs)
  File "/home/krshrimali/Documents/Projects/Quansight/pytorch/test/test_ops.py", line 468, in test_variant_consistency_eager
    _test_consistency_helper(samples, variants)
  File "/home/krshrimali/Documents/Projects/Quansight/pytorch/test/test_ops.py", line 457, in _test_consistency_helper
    variant_forward = variant(cloned,
TypeError: batch_norm() got an unexpected keyword argument 'cudnn_enabled'
```
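The failure comes down to a signature mismatch between the functional variant and the `torch.batch_norm` alias. A minimal pure-Python sketch of the mismatch (the signatures below are illustrative stand-ins, not the real PyTorch ones):

```python
def functional_batch_norm(input, running_mean, running_var,
                          weight=None, bias=None, training=False,
                          momentum=0.1, eps=1e-5):
    # Stand-in for torch.nn.functional.batch_norm: no cudnn_enabled kwarg.
    return input

def aliased_batch_norm(input, running_mean, running_var,
                       weight=None, bias=None, training=False,
                       momentum=0.1, eps=1e-5, cudnn_enabled=True):
    # Stand-in for the torch.batch_norm alias: accepts the extra kwarg.
    return functional_batch_norm(input, running_mean, running_var,
                                 weight, bias, training, momentum, eps)

aliased_batch_norm([1.0], [0.0], [1.0], cudnn_enabled=True)  # accepted
try:
    functional_batch_norm([1.0], [0.0], [1.0], cudnn_enabled=True)
except TypeError as e:
    # Mirrors the TypeError in the traceback above.
    print("TypeError:", e)
```

A consistency helper that forwards the same kwargs to every variant will therefore hit the `TypeError` on the functional form.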
```python
# RuntimeError: deepEquals(input.iValue, deepCopiedInput) INTERNAL ASSERT FAILED at
# "../torch/csrc/jit/passes/utils/check_alias_annotation.cpp":142, please report a bug to PyTorch
SkipInfo('TestJit', 'test_variant_consistency_jit'),
```
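For context, a `SkipInfo` entry in an OpInfo's skip list names a test class and a test within it to skip. A minimal pure-Python mimic of that mechanism (the structure here is assumed for illustration and is not the actual PyTorch implementation):

```python
from collections import namedtuple

# Minimal mimic of a SkipInfo-style skip list: each entry names the
# test class and the test within it that should be skipped.
SkipInfo = namedtuple("SkipInfo", ["cls_name", "test_name"])

skips = (SkipInfo("TestJit", "test_variant_consistency_jit"),)

def should_skip(cls_name, test_name):
    # The test instantiation machinery would consult this before running.
    return any(s.cls_name == cls_name and s.test_name == test_name
               for s in skips)
```

This is why the JIT consistency test above can be disabled for batch_norm alone without touching the other tests that the OpInfo generates.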
Unsure about this test failure.
cc: @eellison
IMO torch.batch_norm should not exist and we just forgot to exclude it, and now we can't because of a deprecation cycle... but I wouldn't worry about the additional "cudnn_enabled" argument.
I've fixed the merge conflicts, @zou3519 - could you please take a look whenever you find the time? Thanks! :)
Yes, will do! Sorry for the delayed response.
…b.com:krshrimali/pytorch into opinfo/high_priority/nn/functional/batch_norm
…ity/nn/functional/batch_norm
zou3519 left a comment
This looks great! I added some minor suggestions for cases; please let me know what you think.
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Test failures are unrelated; attempting to merge.
Addresses pytorch/functorch#78 and #54261.
- The alias `torch.batch_norm` exists, but it takes an extra arg, `cudnn_enabled`, which is not present in the functional variant. It is passed from the functional variant to `torch.batch_norm` here: https://github.com/pytorch/pytorch/blob/master/torch/nn/functional.py#L2282.
- `test_variant_consistency_jit` fails with an error (when passed an alias).

cc: @mruberry @zou3519 @kshitij12345
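One way a consistency check could tolerate the alias-only argument is to compare the variants through a wrapper that drops it before calling the functional form. A hypothetical sketch (the helper name `strip_alias_only_kwargs` and the toy `functional_variant` are assumptions for illustration, not code from this PR):

```python
def strip_alias_only_kwargs(fn, alias_only=("cudnn_enabled",)):
    # Hypothetical helper: returns a wrapper that accepts the alias's
    # keyword set but drops alias-only kwargs before calling fn, so its
    # effective signature matches the functional variant.
    def wrapper(*args, **kwargs):
        for name in alias_only:
            kwargs.pop(name, None)
        return fn(*args, **kwargs)
    return wrapper

# Toy functional variant standing in for torch.nn.functional.batch_norm.
def functional_variant(x, training=False):
    return x

wrapped = strip_alias_only_kwargs(functional_variant)
# The wrapped form now tolerates the kwarg that only the alias accepts.
assert wrapped(3, training=True, cudnn_enabled=True) == 3
```

Whether such a shim is preferable to simply skipping the consistency test for the alias is a judgment call for the reviewers.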