[MPS] Add support for two more isin variants #154010
malfet wants to merge 5 commits into gh/malfet/350/base
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/154010
Note: Links to docs will display an error until the docs builds have been completed.
⏳ No Failures, 72 Pending as of commit 64c8b36 with merge base 2e56ce0.
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
`isin_Tensor_Scalar_out` is just a redispatch to eq/neq. `isin_Scalar_Tensor_out` redispatches back to the generic `isin` op. Add unittests to validate that.

Before this change, both of those failed:

```python
>>> import torch
>>> t = torch.tensor([0, 1, 2], device='mps')
>>> torch.isin(t, 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NotImplementedError: The operator 'aten::isin.Tensor_Scalar_out' is not currently implemented for the MPS device. If you want this op to be considered for addition please comment on #141287 and mention use-case, that resulted in missing op as well as commit hash 3b875c2. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
>>> torch.isin(1, t)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NotImplementedError: The operator 'aten::isin.Scalar_Tensor_out' is not currently implemented for the MPS device. If you want this op to be considered for addition please comment on #141287 and mention use-case, that resulted in missing op as well as commit hash 3b875c2. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
```

ghstack-source-id: 9c8eb66
Pull Request resolved: #154010
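For reference, a sketch of what the same session should produce once both variants are implemented on MPS. The boolean results follow from `torch.isin` semantics for these inputs; the exact `device='mps:0'` repr is an assumption, not output captured from the PR.

```python
>>> import torch
>>> t = torch.tensor([0, 1, 2], device='mps')
>>> torch.isin(t, 1)   # Tensor_Scalar variant
tensor([False,  True, False], device='mps:0')
>>> torch.isin(1, t)   # Scalar_Tensor variant: 0-dim result
tensor(True, device='mps:0')
```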
Attention! native_functions.yaml was changed
If you are adding a new function or defaulted argument to native_functions.yaml, you cannot use it from pre-existing Python frontend code until our FC window passes (two weeks). Split your PR into two PRs: one which adds the new C++ functionality, and one that makes use of it from Python, and land them two weeks apart. See https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#forwards-compatibility-fc for more info.
@pytorchbot merge -f "Roses are red, violets are blue, I want to land this PR now"
Merge started
Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki.
Questions? Feedback? Please reach out to the PyTorch DevX Team
Stack from ghstack (oldest at bottom):
- `isin_Tensor_Scalar_out` is just a redispatch to eq/neq (see the reference sketch after this list)
- `isin_Scalar_Tensor_out` redispatches back to the generic `isin` op, but needs a small tweak to handle float scalars
- Make sure that `out` is resized to the expected size in `isin_Tensor_Tensor_out_mps`
- Add unittests to validate that, but skip them on MacOS-13, where the MPS op just returns garbage
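As a reference for the first bullet, here is a minimal Python-level sketch of the equivalence being exploited: testing a tensor's membership against a single scalar is just an elementwise eq/ne. This illustrates the semantics only and is not the PR's actual C++ MPS kernel; the helper name is made up.

```python
import torch

def isin_tensor_scalar_reference(elements, scalar, invert=False):
    # Membership against a single scalar reduces to an elementwise (in)equality check.
    return elements.ne(scalar) if invert else elements.eq(scalar)

t = torch.tensor([0, 1, 2])
assert torch.equal(isin_tensor_scalar_reference(t, 1), torch.isin(t, 1))
assert torch.equal(isin_tensor_scalar_reference(t, 1, invert=True), torch.isin(t, 1, invert=True))
```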
Before this change, both of those calls failed with `NotImplementedError` (see the traceback in the commit message above).
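A minimal sketch of what such unittests might look like, assuming the common pattern of comparing MPS results against the CPU reference; the class and test names here are hypothetical and are not the PR's actual test code.

```python
import unittest
import torch

class TestIsinMPS(unittest.TestCase):
    @unittest.skipUnless(torch.backends.mps.is_available(), "requires an MPS device")
    def test_isin_tensor_scalar(self):
        # Tensor_Scalar variant: compare the MPS result against the CPU reference.
        t_cpu = torch.tensor([0, 1, 2])
        t_mps = t_cpu.to("mps")
        for invert in (False, True):
            expected = torch.isin(t_cpu, 1, invert=invert)
            actual = torch.isin(t_mps, 1, invert=invert)
            self.assertTrue(torch.equal(actual.cpu(), expected))

    @unittest.skipUnless(torch.backends.mps.is_available(), "requires an MPS device")
    def test_isin_scalar_tensor(self):
        # Scalar_Tensor variant: a Python scalar tested against an MPS tensor.
        t_cpu = torch.tensor([0, 1, 2])
        t_mps = t_cpu.to("mps")
        expected = torch.isin(1, t_cpu)
        actual = torch.isin(1, t_mps)
        self.assertTrue(torch.equal(actual.cpu(), expected))

if __name__ == "__main__":
    unittest.main()
```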
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov