
[pt2] add metas for median ops #106272

Closed

nkaretnikov wants to merge 4 commits into gh/nkaretnikov/161/base from gh/nkaretnikov/161/head

Conversation

@nkaretnikov (Collaborator) commented Jul 30, 2023

@pytorch-bot (Bot) commented Jul 30, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/106272

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 3a4c698:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@nkaretnikov (Collaborator, Author) commented:

From the list in #105105

nkaretnikov added the ciflow/trunk, topic: not user facing, and ciflow/inductor labels Jul 30, 2023
```diff
-@register_meta(aten.nanmedian.default)
-def meta_nanmedian(input):
+@register_meta([aten.median.default, aten.nanmedian.default])
+def meta_median(input):
```
@nkaretnikov (Collaborator, Author) commented on the diff:

median is the same as nanmedian without NaN handling
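For context, a meta kernel runs on data-free "meta" tensors and only has to propagate shape, dtype, and device. Below is a minimal sketch of a shared meta for the default (full-reduction) overloads, using only the public tensor API; the name `meta_median_sketch` and its body are illustrative assumptions, not the PR's actual implementation:

```python
import torch

# Illustrative sketch (assumption, not the PR's code): median.default and
# nanmedian.default both reduce over all elements, so one meta function can
# describe the output of either op: a 0-dim tensor with the input's dtype.
def meta_median_sketch(input: torch.Tensor) -> torch.Tensor:
    # new_empty(()) allocates nothing on a meta tensor; it only records
    # shape/dtype/device, which is all a meta kernel needs to produce.
    return input.new_empty(())

# Shape propagation without real data:
x = torch.empty(3, 4, device="meta")
print(meta_median_sketch(x).shape)  # torch.Size([])
```

Registering one function for both overloads (as the single decorator in the diff does) works precisely because NaN handling affects the values, not the output metadata.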

```diff
-def meta_nanmedian_dim(input, dim=-1, keepdim=False):
+def meta_median_dim(input, dim=-1, keepdim=False):
+    if device_hint(input) == "cuda":
+        utils.alert_not_deterministic("median CUDA with indices output")
```
@nkaretnikov (Collaborator, Author) commented on the diff:

Repro (this alert is only raised on CUDA):

```
PYTORCH_TEST_WITH_INDUCTOR=1 python -bb test/test_torch.py -v --use-pytest --import-slow-tests --import-disabled-tests -k test_nondeterministic_alert_median -v --capture=no
```
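For the .dim overloads, the meta kernel additionally has to produce two outputs, values and indices, with the reduced dimension dropped (or kept as size 1 under keepdim). Here is a hedged sketch under that assumption; it omits the CUDA nondeterminism alert from the diff above, since that relies on internal helpers (`device_hint`, `utils.alert_not_deterministic`), and the function name and body are illustrative, not the PR's code:

```python
import torch

# Illustrative sketch (assumption, not the PR's code) of shape propagation
# for median.dim / nanmedian.dim, which return (values, indices).
# Assumes a non-scalar input for simplicity.
def meta_median_dim_sketch(input: torch.Tensor, dim: int = -1, keepdim: bool = False):
    dim = dim if dim >= 0 else dim + input.dim()  # normalize a negative dim
    shape = list(input.shape)
    if keepdim:
        shape[dim] = 1
    else:
        del shape[dim]
    values = input.new_empty(shape)                     # same dtype as input
    indices = input.new_empty(shape, dtype=torch.long)  # int64 index output
    return values, indices

# Shape propagation without real data:
v, i = meta_median_dim_sketch(torch.empty(2, 5, device="meta"), dim=1)
print(v.shape, i.dtype)  # torch.Size([2]) torch.int64
```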

A contributor commented on the diff:
This is kind of weird, we shouldn't be on the hook for reporting non-determinism in meta kernels...

@nkaretnikov (Collaborator, Author) commented Jul 31, 2023

The XLA failure is real (counters) and will require updating tests on the XLA side.

nkaretnikov added two commits to pytorch/xla that referenced this pull request Jul 31, 2023
@nkaretnikov (Collaborator, Author) commented Jul 31, 2023

Updated XLA pin to point to companion PR: pytorch/xla#5376

nkaretnikov marked this pull request as ready for review July 31, 2023 14:02
nkaretnikov requested review from a team, Chillee, and ezyang as code owners July 31, 2023 14:02
@nkaretnikov (Collaborator, Author) commented Jul 31, 2023

The XLA team told me the issue on the XLA side needs to be fixed in a different way. Converting this to a draft so it isn't merged by accident.

UPD: The XLA team created a new PR that switches the fallback test to a different function. Waiting until that one is merged: pytorch/xla#5393

UPD2: Updated the XLA hash.

nkaretnikov marked this pull request as draft July 31, 2023 18:20
pytorchmergebot pushed a commit that referenced this pull request Aug 3, 2023
Pull Request resolved: #106273
Approved by: https://github.com/ezyang
ghstack dependencies: #106272
facebook-github-bot deleted the gh/nkaretnikov/161/head branch August 6, 2023 14:17
