Support for using a negative dimension in torch.stack currently works as follows:
if dim < 0:
    dim += sequence[0].dim()
This is inconsistent with the behaviour of torch.unsqueeze, which uses dim += input.dim() + 1. The unsqueeze behaviour is preferable, since it makes it possible to use negative indexing to insert a new dimension as the last dimension of the result.
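The difference between the two wrapping rules can be sketched in plain Python (the helper names here are hypothetical, chosen only for illustration):

```python
def stack_wrap(dim, ndim):
    # current torch.stack rule: wrap a negative dim against the INPUT's ndim
    return dim + ndim if dim < 0 else dim

def unsqueeze_wrap(dim, ndim):
    # torch.unsqueeze rule: wrap against the OUTPUT's ndim, i.e. input ndim + 1
    return dim + ndim + 1 if dim < 0 else dim

# For 1-D inputs (ndim=1) and dim=-1:
print(stack_wrap(-1, 1))      # 0 -> new dimension inserted first
print(unsqueeze_wrap(-1, 1))  # 1 -> new dimension inserted last
```

Under the current stack rule, dim=-1 can never refer to the last position of the (one-dimension-larger) result, which is exactly the inconsistency described above.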
Example
torch.stack with dim=-1 adds the new dimension in the second-to-last position, which is confusing. There is no way to use negative indexing to add a dimension in the last position (i.e. a 3x2 result cannot be produced):
>>> a = torch.Tensor([1, 2, 3])
>>> b = torch.Tensor([4, 5, 6])
>>> torch.stack([a, b], -1)
1 2 3
4 5 6
[torch.FloatTensor of size 2x3]
Contrast this to using torch.unsqueeze and torch.cat with dim=-1, which adds a new dimension in the last position:
>>> a = torch.Tensor([1, 2, 3])
>>> b = torch.Tensor([4, 5, 6])
>>> torch.cat([a.unsqueeze(-1), b.unsqueeze(-1)], -1)
1 4
2 5
3 6
[torch.FloatTensor of size 3x2]
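For comparison, NumPy's np.stack already follows the unsqueeze-style rule, normalising a negative axis against the result's number of dimensions, so axis=-1 appends the new dimension last (this comparison is an illustration, not part of the original report):

```python
import numpy as np

a = np.array([1., 2., 3.])
b = np.array([4., 5., 6.])

# axis=-1 is wrapped against the output's ndim (2), so the new
# axis lands in the last position and the result is 3x2.
stacked = np.stack([a, b], axis=-1)
print(stacked.shape)  # (3, 2)
```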