Update linalg.norm to match numpy's handling of degenerate inputs#168086
rtimpe wants to merge 6 commits into gh/rtimpe/29/base
Conversation
See numpy/numpy#28343 ghstack-source-id: 9b0aaf6 Pull-Request: #168086
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/168086
Note: Links to docs will display an error until the docs builds have been completed.
⏳ No Failures, 1 Pending as of commit a42b931 with merge base 9760a63. UNSTABLE: the following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
See numpy/numpy#28343 ghstack-source-id: 720ce7a Pull-Request: #168086
See numpy/numpy#28343 ghstack-source-id: fb7526a Pull-Request: #168086
```cpp
_linalg_matrix_norm_checks(A, dim_, opt_dtype, /*low_precision*/abs_ord != 2.);
...
auto max_min = [ord, keepdim](const Tensor& A, int64_t dim) { return ord > 0 ? A.amax(dim, keepdim) : A.amin(dim, keepdim); };
auto max_min_wrapper = [ord, max_min](const Tensor &A, int64_t dim) {
```
capture max_min by reference?
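For context, the lambda under discussion selects `amax` for positive norm orders and `amin` for negative ones. A rough Python equivalent of that logic (the helper name `max_min` is taken from the diff; this sketch is illustrative, not the PR's code):

```python
import torch

def max_min(A: torch.Tensor, ord: float, dim: int, keepdim: bool = False) -> torch.Tensor:
    # Positive matrix-norm orders reduce with max, negative ones with min,
    # mirroring `ord > 0 ? A.amax(dim, keepdim) : A.amin(dim, keepdim)`
    # in the C++ lambda.
    return A.amax(dim, keepdim=keepdim) if ord > 0 else A.amin(dim, keepdim=keepdim)
```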
```python
run_test_case(input, ord, dim, keepdim)
...
# Test degenerate shape results match numpy for linalg.norm matrix norms
@skipIf(np.lib.NumpyVersion(np.__version__) < '2.3.0', 'Numpy changed handling of degenerate inputs in 2.3.0')
```
Should we preserve the old test for numpy < 2.3.0?
The pytorch op will be different from the numpy version, but I can handle those cases specifically in the old test
If it's too annoying to do, feel free to skip doing this.
I added it as a separate test so we can easily delete it once older numpy versions are dropped. But there is a lot of duplicated code; let me know what you think
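As a quick illustration of the degenerate shapes the new test covers: an empty reduction dimension. The per-order results are exactly what numpy 2.3.0 changed (see numpy/numpy#28343), so this sketch only shows the Frobenius case, which is stable across versions:

```python
import numpy as np

# An empty matrix: the reduction dimension has size 0.
A = np.zeros((0, 3))

# The Frobenius norm is sqrt of a sum of zero squares, so it is 0.0
# in any numpy version; the max/min-based orders (1, -1, inf, ...) are
# the ones whose degenerate-input handling changed in 2.3.0.
print(np.linalg.norm(A, ord='fro'))  # 0.0
```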
```cpp
auto new_shape(DimVector(A.sizes()));
auto dim_ = maybe_wrap_dim(dim, A.dim());
new_shape[dim_] = 1;
auto zeros = at::zeros(new_shape, A.options());
return max_min(zeros, dim);
```
Can this be optimized? I don't think we need to call max_min here since we're operating on a zero tensor.
Yeah, I mostly wrote it this way so max_min would handle the keepdim logic, but I can switch it
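The suggested simplification, sketched in Python for brevity (the helper name is hypothetical, and it assumes the degenerate result is all zeros, which the zeros-tensor construction in the diff already implies): build the reduced output shape directly instead of reducing a zeros tensor through `max_min`.

```python
import torch

def degenerate_result(A: torch.Tensor, dim: int, keepdim: bool = False) -> torch.Tensor:
    # The reduction over an empty dim yields zeros, so skip the amax/amin
    # call entirely: compute the reduced shape and return zeros of it.
    new_shape = list(A.shape)
    if keepdim:
        new_shape[dim] = 1
    else:
        del new_shape[dim]
    return torch.zeros(new_shape, dtype=A.dtype, device=A.device)
```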
See numpy/numpy#28343 ghstack-source-id: d97b6e9 Pull-Request: #168086
See numpy/numpy#28343 ghstack-source-id: ffadac1 Pull-Request: #168086
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed, first few of them are: trunk / linux-jammy-rocm-py3.10 / test (default, 1, 2, linux.rocm.gpu.gfx942.1). Details for Dev Infra team: raised by workflow job.
See numpy/numpy#28343 ghstack-source-id: 8c968de Pull-Request: #168086
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
See numpy/numpy#28343