
Update linalg.norm to match numpy's handling of degenerate inputs #168086

Closed
rtimpe wants to merge 6 commits into gh/rtimpe/29/base from gh/rtimpe/29/head

Conversation

rtimpe (Collaborator) commented Nov 18, 2025

[ghstack-poisoned]
rtimpe added a commit that referenced this pull request Nov 18, 2025
@pytorch-bot pytorch-bot bot added the release notes: linalg_frontend release notes category label Nov 18, 2025
pytorch-bot bot commented Nov 18, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/168086

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 1 Pending

As of commit a42b931 with merge base 9760a63:
💚 Looks good so far! There are no failures yet. 💚

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

[ghstack-poisoned]
rtimpe added a commit that referenced this pull request Nov 18, 2025
[ghstack-poisoned]
rtimpe added a commit that referenced this pull request Nov 18, 2025
@rtimpe rtimpe marked this pull request as ready for review November 18, 2025 20:00
@rtimpe rtimpe requested a review from williamwen42 November 18, 2025 20:00
_linalg_matrix_norm_checks(A, dim_, opt_dtype, /*low_precision*/abs_ord != 2.);

auto max_min = [ord, keepdim](const Tensor& A, int64_t dim) { return ord > 0 ? A.amax(dim, keepdim) : A.amin(dim, keepdim); };
auto max_min_wrapper = [ord, max_min](const Tensor &A, int64_t dim) {
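The ord-based dispatch in the C++ lambda above can be sketched in Python (a numpy stand-in for illustration only; `amax`/`amin` map to `np.max`/`np.min` here):

```python
import numpy as np

def max_min(A, dim, ord, keepdim):
    # Mirrors the C++ lambda: a positive ord reduces with max,
    # a negative ord reduces with min, along `dim`.
    reduce = np.max if ord > 0 else np.min
    return reduce(A, axis=dim, keepdims=keepdim)
```

For example, `max_min(np.array([[1, 2], [3, 0]]), 1, 1, False)` gives `[2, 3]`, while `ord=-1` gives `[1, 0]`.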
Member

capture max_min by reference?

run_test_case(input, ord, dim, keepdim)

# Test degenerate shape results match numpy for linalg.norm matrix norms
@skipIf(np.lib.NumpyVersion(np.__version__) < '2.3.0', 'Numpy changed handling of degenerate inputs in 2.3.0')
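The skip guard above relies on `np.lib.NumpyVersion`, which compares version strings field by field rather than lexicographically; a minimal illustration:

```python
import numpy as np

# NumpyVersion parses release strings so comparisons are numeric,
# not lexicographic ('2.10.0' > '2.3.0' holds, unlike plain strings).
old_numpy = np.lib.NumpyVersion(np.__version__) < '2.3.0'
print(old_numpy)
```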
Member

Should we preserve the old test for numpy < 2.3.0?

rtimpe (Collaborator, Author) Nov 18, 2025

The PyTorch op will be different from the numpy version, but I can handle those cases specifically in the old test.

Member

If it's too annoying to do, feel free to skip doing this.

rtimpe (Collaborator, Author)

I added it as a separate test so we can easily delete it once older numpy versions are dropped. There is a lot of duplicated code though, so let me know what you think.

Comment on lines +2906 to +2910
auto new_shape(DimVector(A.sizes()));
auto dim_ = maybe_wrap_dim(dim, A.dim());
new_shape[dim_] = 1;
auto zeros = at::zeros(new_shape, A.options());
return max_min(zeros, dim);
Member

Can this be optimized? I don't think we need to call max_min here since we're operating on a zero tensor.

rtimpe (Collaborator, Author)

Yeah, I mostly wrote it this way so max_min would handle the keepdim logic, but I can switch it
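One way the suggested optimization could look (a hedged numpy sketch, not the actual PR code): since the reduction operand is all zeros, `max_min` would return zeros anyway, so the output shape can be built directly while still honoring `keepdim`:

```python
import numpy as np

def degenerate_norm_result(A, dim, keepdim):
    # The reduction operand is all zeros, so amax/amin over it would
    # just return zeros; construct the reduced output shape directly.
    out_shape = list(A.shape)
    if keepdim:
        out_shape[dim] = 1   # keep the reduced dim as size 1
    else:
        del out_shape[dim]   # drop the reduced dim entirely
    return np.zeros(out_shape, dtype=A.dtype)
```

For a degenerate input of shape `(3, 0)` reduced over `dim=1`, this yields shape `(3,)` without `keepdim` and `(3, 1)` with it, all zeros.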

[ghstack-poisoned]
rtimpe added a commit that referenced this pull request Nov 19, 2025
[ghstack-poisoned]
rtimpe added a commit that referenced this pull request Nov 19, 2025
rtimpe (Collaborator, Author) commented Nov 19, 2025

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Nov 19, 2025
pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

pytorchmergebot (Collaborator)

Merge failed

Reason: 1 jobs have failed, first few of them are: trunk / linux-jammy-rocm-py3.10 / test (default, 1, 2, linux.rocm.gpu.gfx942.1)

Details for Dev Infra team Raised by workflow job

[ghstack-poisoned]
rtimpe added a commit that referenced this pull request Nov 19, 2025
rtimpe (Collaborator, Author) commented Nov 19, 2025

@pytorchbot merge

pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

@github-actions github-actions bot deleted the gh/rtimpe/29/head branch December 20, 2025 02:16

Labels: ciflow/trunk, Merged, open source, release notes: linalg_frontend


4 participants