To support the PyTorch core ATen opset, PyTorch/XLA must provide a lowering for each core ATen op. This issue tracks the PyTorch/XLA lowering for `aten_pow_Tensor_Tensor`.
Here are some general guidelines for lowering this op:
- Uncomment `@unittest.skip` or `@unittest.expectedFailure` and run the unit test in `test_core_aten_ops.py`. E.g.: `pytest test/test_core_aten_ops.py -k test_aten_pow_Tensor_Tensor_0`
- Make code changes until the test passes. Read and follow fix_lowering_for_core_aten_ops.md for ideas on how to fix it.
- There may be multiple unit tests for a single op. For this op, the corresponding unit tests are:
- test_aten_pow_Tensor_Tensor_0
- test_aten_pow_Tensor_Tensor_1
- test_aten_pow_Tensor_Tensor_2
- Please also uncomment the skips for all of these tests and ensure they all pass.
- Note that sometimes the fix may be to fix the unit test itself. Please review the corresponding unit tests to make sure they are valid.
- Submit the PR!
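As a minimal sketch of the uncomment-and-run workflow described above (the class, test name, and skip reason here are hypothetical stand-ins; the real tests live in `test/test_core_aten_ops.py` and compare eager against XLA results):

```python
import unittest

class TestPowLowering(unittest.TestCase):
    # While the lowering is missing, the test is disabled. Removing the
    # commented-out decorator below re-enables it:
    # @unittest.skip("aten::pow lowering not implemented")
    def test_pow_tensor_tensor(self):
        # Stand-in check; the real test runs the op on both backends
        # and asserts the outputs match.
        self.assertEqual(2.0 ** 3.0, 8.0)

# Run the suite programmatically, mirroring what pytest -k does.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPowLowering)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

Once the decorator is removed and the lowering is in place, the test should pass and the skip can stay deleted in the PR.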
For any questions, feel free to leave a comment in this PR.