Issue description

Currently, PyTorch/XLA crashes the entire runtime (a RET_CHECK failure?) if you do an int64 dot product on a TPU. This seems like poor UX to me: it would be preferable to raise a proper Python exception rather than bringing down the whole runtime. It is particularly problematic for anyone using PyTorch/XLA in a Jupyter notebook, because the crash takes the whole notebook down with it.

Code example
> import torch
> import torch_xla.core.xla_model as xm
> dev = xm.xla_device()
> t1 = torch.tensor([1,2,3], device=dev)
> t2 = torch.tensor([3,4,5], device=dev)
> t1 @ t2
Non-OK-status: status.status() status: UNIMPLEMENTED: While rewriting computation to not contain X64 element types, XLA encountered an HLO for which this rewriting is not implemented: %dot.3 = s64[] dot(s64[3]{0} %p1.2, s64[3]{0} %p0.1), lhs_contracting_dims={0}, rhs_contracting_dims={0}
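As a workaround in the meantime, downcasting the operands before the matmul seems to avoid the crash. This is a sketch under the assumption that int32 dot products are implemented on TPU, which the error message suggests: only X64 element types hit the unimplemented rewrite pass.

> # Workaround sketch (assumption: s32 dot is supported, so XLA never
> # sees an s64 dot and never enters the X64 rewrite pass).
> # Caution: this can silently overflow for values outside int32 range.
> t1.to(torch.int32) @ t2.to(torch.int32)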
System Info