Add Python decomposition for quantile/nanquantile to fix torch.export#174787
tugsbayasgalan wants to merge 2 commits into gh/tugsbayasgalan/120/base from
Conversation
[ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/174787
Note: Links to docs will display an error until the docs builds have been completed.
❌ 32 New Failures, 7 Unrelated Failures
As of commit 9e9bae6 with merge base f5fbedb
NEW FAILURES - The following jobs have failed:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
…orch.export"

The C++ CompositeImplicitAutograd kernel for quantile calls at::is_scalar_tensor_true (which uses at::equal, tagged data_dependent_output) for input validation. During torch.export, FakeTensor tracing cannot evaluate data-dependent ops, so export fails with DataDependentOutputException.

Register Python decompositions with py_impl(CompositeImplicitAutograd) so the Python implementation runs instead of the C++ kernel during tracing. The Python version uses only sort, gather, lerp, and other ops that work with FakeTensors.

Authored with Claude.

[ghstack-poisoned]
@pytorchbot merge -i
Merge started
Your change will be merged while ignoring the following 1 checks: inductor / unit-test / inductor-test / test (inductor_distributed, 1, 1, linux.g5.12xlarge.nvidia.gpu)
Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team
Merge failed
Reason: 1 mandatory check(s) failed. The first few are:
Dig deeper by viewing the failures on hud
@pytorchbot merge -f "Lintrunner error is unrelated"
Merge started
Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use
Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team
Change the test as well? Otherwise CI fails.
@pytorchbot revert -c nosignal -m "Looks like this is causing failures upstream see https://hud.pytorch.org/pytorch/pytorch/commit/4504c3dcee3c02886fae3340a8ee268717c6cb32" |
@pytorchbot successfully started a revert job. Check the current status here. |
…h.export (#174787)" This reverts commit 4504c3d. Reverted #174787 on behalf of https://github.com/seemethere due to Looks like this is causing failures upstream see https://hud.pytorch.org/pytorch/pytorch/commit/4504c3dcee3c02886fae3340a8ee268717c6cb32 ([comment](#174787 (comment)))
@tugsbayasgalan your PR has been successfully reverted. |
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as |
Stack from ghstack (oldest at bottom):
The C++ CompositeImplicitAutograd kernel for quantile calls
at::is_scalar_tensor_true (which uses at::equal, tagged
data_dependent_output) for input validation. During torch.export,
FakeTensor tracing cannot evaluate data-dependent ops, so export
fails with DataDependentOutputException.
Register Python decompositions with py_impl(CompositeImplicitAutograd)
so the Python implementation runs instead of the C++ kernel during
tracing. The Python version uses only sort, gather, lerp, and other
ops that work with FakeTensors.
Authored with Claude.
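As an illustrative analogue of the sort + lerp strategy the description mentions — not the actual PyTorch decomposition — here is a minimal plain-Python sketch. `quantile_1d` and `nanquantile_1d` are hypothetical names, and the real `torch.quantile`/`torch.nanquantile` additionally handle `dim`, multiple `q` values, and several interpolation modes:

```python
import math

def quantile_1d(values, q):
    # Hypothetical 1-D analogue of the "linear" interpolation method:
    # sort, then lerp between the two nearest order statistics.
    if not 0.0 <= q <= 1.0:
        raise ValueError("q must be in [0, 1]")
    s = sorted(values)
    pos = q * (len(s) - 1)          # fractional index into the sorted values
    lo, hi = math.floor(pos), math.ceil(pos)
    frac = pos - lo
    # lerp(a, b, t) = a + t * (b - a)
    return s[lo] + frac * (s[hi] - s[lo])

def nanquantile_1d(values, q):
    # nanquantile: drop NaNs first, then compute the quantile on the rest.
    finite = [v for v in values if not math.isnan(v)]
    return quantile_1d(finite, q)
```

The point of the structure, as the PR describes it, is that the whole computation is built from ops like sort and lerp that work on FakeTensors; there is no step that has to compare concrete tensor contents the way the C++ kernel's at::equal-based validation does.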