In NumPy it accepts variadic `*operands`, while in PyTorch it accepts a single `operands` list.
https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.einsum.html:
numpy.einsum(subscripts, *operands, ...)
https://pytorch.org/docs/master/torch.html?highlight=einsum#torch.einsum:
torch.einsum(equation, operands)
https://www.tensorflow.org/api_docs/python/tf/einsum:
tf.einsum(equation, *inputs, **kwargs)
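The practical difference is how a pre-built list of operands is passed. A minimal NumPy sketch (the PyTorch call shown in the comment assumes the list-based signature from the docs above):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# NumPy / TensorFlow style: operands are separate positional arguments.
out = np.einsum('ij,jk->ik', a, b)

# A pre-built list of operands must be unpacked with * in NumPy:
ops = [a, b]
out2 = np.einsum('ij,jk->ik', *ops)

# Under PyTorch's list-based signature, the same contraction would instead
# be written without unpacking (ta, tb being torch tensors):
#   torch.einsum('ij,jk->ik', [ta, tb])
```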
Are there good reasons for keeping it different from NumPy / TensorFlow?