The array API specification stipulates the following operators that are currently not supported by PyTorch:
- `__dlpack__`
- `__dlpack_device__`
- `__rand__` (Support `__rand__`, `__ror__` and `__rxor__` #58120)
- `__rlshift__` (Support `__rlshift__` and `__rrshift__` #58121, Support `torch.bitwise_{left/right}_shift` and `__rlshift__`, `__rrshift__` #59544)
- `__imatmul__`: not planned, because `matmul` doesn't always preserve shape and still has a resize bug (see Micro-optimisations for matmul #64387)
- `__rmatmul__` (Some `torch.Tensor` operators don't return `NotImplemented` for invalid inputs #57719, Fix some tensor operators to return `NotImplemented` for invalid inputs #58216)
- `__rmod__` (Support `__rmod__` #58035, Support `__rmod__` #58476)
- `__ror__` (Support `__rand__`, `__ror__` and `__rxor__` #58120)
- `__rrshift__` (Support `__rlshift__` and `__rrshift__` #58121, Support `torch.bitwise_{left/right}_shift` and `__rlshift__`, `__rrshift__` #59544)
- `__rxor__` (Support `__rand__`, `__ror__` and `__rxor__` #58120)
- `from_dlpack` (implemented, but not yet exposed in the main `torch` namespace in 1.10.0)
- `broadcast_arrays`
- `bitwise_left_shift` (Support `torch.bitwise_{left/right}_shift` and `__rlshift__`, `__rrshift__` #59544)
- `bitwise_invert`
- `bitwise_right_shift` (Support `torch.bitwise_{left/right}_shift` and `__rlshift__`, `__rrshift__` #59544)
- `vecdot` ([Array API] Add `linalg.vecdot` #70542)
- `concat` (we should make sure `torch.cat` is compliant and alias `concat` to it) (Support `concat` alias to `cat` #61767)
- `expand_dims` (Support `expand_dims` #57116)
- `__ipow__` (Connect `Tensor.__ipow__` to `pow_` method #76900)
- `to_device`
- `astype`
- `unique_(all|counts|inverse|values)` (`unique` should be split into four partial functions #70920)
- `linalg.diagonal` ([Array API] Add `linalg.diagonal` #70599)
- `linalg.matrix_transpose`, `matrix_transpose`
- `linalg.outer` (Support `torch.linalg.outer` #63293)
- `linalg.tensordot` (Create `linalg.tensordot` #63478)
- `linalg.trace` (Support `torch.linalg.trace` #62714)
- `linalg.vecdot` ([Array API] Add `linalg.vecdot` #70542)
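For context on the reflected dunder items above (`__rmod__`, `__rand__`, `__rmatmul__`, etc.): these rely on Python's standard binary-operator fallback, where the left operand's forward method returns `NotImplemented` and Python then retries the right operand's reflected method. A minimal pure-Python sketch of that protocol (the `Wrapper` class is illustrative only, not part of PyTorch):

```python
class Wrapper:
    """Toy class illustrating Python's reflected-operator protocol."""

    def __init__(self, value):
        self.value = value

    def __mod__(self, other):
        # Handle `Wrapper % Wrapper`; defer to the other operand otherwise.
        if isinstance(other, Wrapper):
            return Wrapper(self.value % other.value)
        return NotImplemented

    def __rmod__(self, other):
        # Called for `x % wrapper` after type(x).__mod__ returns NotImplemented.
        if isinstance(other, int):
            return Wrapper(other % self.value)
        return NotImplemented


# int.__mod__(10, Wrapper(3)) returns NotImplemented, so Python
# falls back to Wrapper.__rmod__, yielding Wrapper(10 % 3).
result = 10 % Wrapper(3)
```

Issues #57719 and #58216 track making PyTorch's operators follow this convention, i.e. returning `NotImplemented` for unsupported operand types instead of raising, so that reflected methods on other array types get a chance to run.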
cc @mruberry @rgommers @pmeier