Conversation
❌ 1 new failure as of commit 0516660 (more details on the Dr. CI page)

🕵️ 1 new failure recognized by patterns. The following CI failures do not appear to be due to upstream breakages.
```python
msg = f"Reference result was farther ({ref_distance}) from the precise \
computation than the torch result was ({torch_distance})!"
self.assertTrue(ref_distance <= torch_distance, msg=msg)
```
I don't think ref_distance is always weakly less than torch_distance; it's OK for it to be larger within some tolerance?
I was also thinking about this but wasn't happy with any immediate ideas I had -- cool if I add a TODO comment?
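One way the reviewer's suggestion could be encoded (a sketch only, not what the PR actually does; the function name and the rtol/atol defaults are placeholders): allow ref_distance to exceed torch_distance within a tolerance instead of requiring a strict inequality.

```python
def ref_not_much_worse(ref_distance, torch_distance, rtol=1e-2, atol=1e-8):
    # Accept the reference being slightly farther from the precise
    # computation than the torch result, within rtol/atol, rather
    # than requiring ref_distance <= torch_distance exactly.
    return ref_distance <= torch_distance + atol + rtol * abs(torch_distance)
```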
test/test_ops.py
Outdated
```python
# Reports numerical accuracy discrepancies
if ex is not None:
    msg = "Test passed because the reference was more accurate than the torch operator."
    print(msg)
```
Should this be a warning? pytest hides the stdout of passing tests, so it will be hard to see for people using pytest.
Sure -- warning it is
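A minimal sketch of the agreed change (the function name here is hypothetical): emit a warning instead of printing, so pytest surfaces the message even though stdout of passing tests is captured.

```python
import warnings

def report_accuracy_discrepancy(ex):
    # Reports numerical accuracy discrepancies via a warning rather
    # than print(), since pytest hides stdout of passing tests.
    if ex is not None:
        msg = "Test passed because the reference was more accurate than the torch operator."
        warnings.warn(msg)
```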
```diff
  scalar_tensor = None
  number = None
- for arg in args:
+ for arg in args_:
```
this would potentially set the tensor to something with the wrong dtype (from args_with_different_dtypes)
Great catch - fixed
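A sketch of the bug class being fixed (helper name and structure are hypothetical, inferred from the diff above): the dtype bookkeeping should only visit arguments that participate in type promotion, excluding those passed via args_with_different_dtypes (such as a boolean `cond` in where).

```python
def promotion_args(args_, args_with_different_dtypes=()):
    # Arguments in args_with_different_dtypes must not drive the
    # computed dtype -- e.g. where's bool `cond` would otherwise
    # poison the promotion result -- so filter them out first.
    exclude = {id(a) for a in args_with_different_dtypes}
    return [a for a in args_ if id(a) not in exclude]
```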
torch/_prims/__init__.py
Outdated
```diff
- *args, type_promotion: ELEMENTWISE_PRIM_TYPE_PROMOTION_KIND
+ *args,
+ type_promotion: ELEMENTWISE_PRIM_TYPE_PROMOTION_KIND,
+ args_with_different_dtypes: Tuple[TensorLikeType, ...] = None,
```
Yeah that's way better -- fixed
torch/_prims/__init__.py
Outdated
```diff
- def _select_aten(pred: Tensor, a: Tensor, b: Tensor) -> Tensor:
+ def _where_aten(pred: Tensor, a: Tensor, b: Tensor) -> Tensor:
```
Out of curiosity, why do we need this helper instead of just using torch.where in make_prim?
Great point -- I was just on automatic mode -- fixed!
torch/_prims/__init__.py
Outdated
```diff
- def _empty_like_aten(
-     a: Tensor, *, dtype: torch.dtype, device: torch.device, requires_grad: bool
+ def _empty_strided_aten(
```
```python
else:
    value = 3

return ({'value': value}, {'value': value})
```
This is super weird and we could sugar over this, but it's because we sometimes pass different arguments to the NumPy op, so we have "torch kwargs" and "NumPy kwargs" here
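The pattern described above can be sketched like this (names are hypothetical): a sample-input generator returns a pair of kwargs dicts, the first for the torch operator and the second for its NumPy reference, so the two can diverge when needed.

```python
def make_paired_kwargs(value=3):
    # ("torch kwargs", "NumPy kwargs") -- identical here, but the
    # pair lets a test pass different arguments to the NumPy op.
    torch_kwargs = {'value': value}
    numpy_kwargs = {'value': value}
    return (torch_kwargs, numpy_kwargs)
```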
@pytorchbot merge this please

Hey @mruberry.

@pytorchbot revert -m "This broke trunk: https://hud.pytorch.org/pytorch/pytorch/commit/043cf1f9c746b4dda2c404ba6c76c6ccad5e2cbe" -c landrace

Actually this looks like the proper classification is "nosignal" -- only slow tests broke.

This reverts commit 043cf1f. Reverted #78026 on behalf of https://github.com/suo due to This broke trunk: https://hud.pytorch.org/pytorch/pytorch/commit/043cf1f9c746b4dda2c404ba6c76c6ccad5e2cbe

@pytorchbot merge this please

Hey @mruberry.
```diff
- self = torch.clamp(self, lo, hi)
- return (self / (1 - self)).log()
+ self = refs.clamp(self, lo, hi)
+ return refs.log(refs.true_divide(self, refs.sub(1, self)))
```
@mruberry Given that the context manager exists now, we should prefer using the torch API calls as this ensures that the decomposition in question is using the limited API supported by torch and not the expanded API from refs. Is this just to work around the local problem that ref consistency tests don't work? I'd much rather we dupe the tests in that case.
It's that the meta tests weren't working; I did duplicate the consistency tests
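For context, the decomposition in the diff above computes logit (log-odds): clamp the input to [lo, hi], then take log(x / (1 - x)). A plain-Python sketch of the math only, not the torch/refs implementation:

```python
import math

def logit(x, lo, hi):
    # Clamping keeps x away from 0 and 1, where log(x / (1 - x))
    # would hit log(0) or division by zero.
    x = min(max(x, lo), hi)
    return math.log(x / (1.0 - x))
```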
Summary: This PR...

**Issues Found**

- #78058
- #78054
- #78053
- #78050
- #77932

**Testing**

- disables stride consistency checks in test_ops and test_meta pending resolution of #78050
- skips chalf in reference tests (addressing #78054)
- splits test_python_reference_consistency into one test for the ctx where torch.foo is torch.foo, and another for when torch.foo is refs.foo
- updates test names to be more natural and consistent:
  - test_python_reference_errors -> test_python_ref_errors
  - test_python_reference_consistency -> test_python_ref and test_python_ref_torch_fallback
  - test_python_reference_meta_functions -> test_python_ref_meta
  - test_reference_testing -> test_numpy_ref
- updates test_python_ref and test_python_ref_torch_fallback to check that the reference is more accurate than the torch op if the reference and torch op results are not close; a warning is raised when this occurs (addressing #77687)
- adds reference inputs for broadcast_tensors
- updates the "fill_" OpInfo to "fill", adding a NumPy reference and making it an elementwise unary operator
- adds 1D no-element sample inputs to the cat OpInfo and updates the NumPy reference to handle them and type promotion correctly
- adds reference inputs for elementwise ternary operations, like clamp
- adds a NumPy reference for clamp
- adds reference inputs to where's OpInfo
- makes softplus an elementwise unary OpInfo
- removes the great majority of Python reference OpInfo skips and xfails due to the above test changes
- adds Python reference OpInfos for fill, dropout, clamp, broadcast_tensors, and where

**Prims**

- adds the fill, empty_strided, and uniform prims
- removes the empty, empty_like, full, and full_like prims -- these are now references that use empty_strided and fill
- renames the "concatenate" and "select" prims to "cat" and "where", respectively, to be consistent with PyTorch
- extends the `_elementwise_meta` operation to accept tensors that don't participate in type promotion, like the `cond` tensor in `where`
- fixes a bug in the stride propagation of broadcast_in_dim
- moves some error checks from prims.cat and prims.where to refs.cat and refs.where, respectively, consistent with our new policy of doing as much error checking in the ref as possible

**Utils**

- adds the canonicalize_device, extract_shape, and extract_shape_from_varargs helpers
- adds the elementwise_unary_scalar_wrapper -- this allows elementwise unary operators to take and return scalar values (e.g. refs.sin(1) will return .84...)

**Refs**

- adds the fill, broadcast_tensors, clamp, empty_strided, ones, zeros, and uniform references
- adds the nn.functional.dropout reference
- fixes refs.cat to handle 1D tensors with no inputs consistent with eager mode

Pull Request resolved: #78026
Approved by: https://github.com/ngimel
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/d4345ed0a6c06b1e489e41c219f94d26d3014ce6
Reviewed By: seemethere
Differential Revision: D36610393
Pulled By: mruberry
fbshipit-source-id: 415e532ab647ab8425f9064796704f6c44115f0e
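One summary item, elementwise_unary_scalar_wrapper, lets unary references accept and return Python scalars (the refs.sin(1) -> .84... example above). A hedged sketch of the idea, not the real implementation:

```python
import math

def elementwise_unary_scalar_wrapper(tensor_fn, scalar_fn):
    # Sketch: dispatch Python numbers to a scalar implementation so
    # that e.g. sin(1) returns a plain float instead of a tensor.
    def wrapper(a):
        if isinstance(a, (bool, int, float)):
            return scalar_fn(a)
        return tensor_fn(a)
    return wrapper

# Hypothetical usage: tensor inputs would call their .sin() method.
sin = elementwise_unary_scalar_wrapper(lambda t: t.sin(), math.sin)
```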
Summary: Ref: #69991. Probably started working since #78026.
Pull Request resolved: #80277
Approved by: https://github.com/zou3519
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/1b18c2e93cb5ae96314247e97f3040fda36b6356
Reviewed By: b0noI
Differential Revision: D37495906
fbshipit-source-id: 25dfcb5f8bbe61e5ff2da1c59810a6ebed1850c3