Noticed this as I tried to use the CosineEmbeddingLoss with a model copied to the GPU.
import torch
from torch.autograd import Variable
from torch.nn._functions.loss import CosineEmbeddingLoss

input1 = Variable(torch.rand(5, 10))
input1 = input1.cuda()
input2 = Variable(torch.rand(5, 10))
input2 = input2.cuda()
y = Variable(torch.FloatTensor([1.0] * input1.size(0)))  # note: y is never moved to the GPU

loss = CosineEmbeddingLoss()
loss(input1, input2, y)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/_functions/loss.py", line 41, in forward
    torch.eq(y, -1, out=_idx)
TypeError: torch.eq received an invalid combination of arguments - got (torch.FloatTensor, int, out=torch.cuda.ByteTensor), but expected one of:
 * (torch.FloatTensor tensor, float value, *, torch.FloatTensor out)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor, int, out=torch.cuda.ByteTensor)
 * (torch.FloatTensor tensor, torch.FloatTensor other, *, torch.FloatTensor out)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor, int, out=torch.cuda.ByteTensor)
 * (torch.FloatTensor tensor, float value, *, torch.ByteTensor out)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor, int, out=torch.cuda.ByteTensor)
 * (torch.FloatTensor tensor, torch.FloatTensor other, *, torch.ByteTensor out)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor, int, out=torch.cuda.ByteTensor)
My default assumption is that I'm using cuda() wrong in some way. Thoughts?
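For what it's worth, the traceback suggests the mismatch is that y is still a CPU FloatTensor while the loss's internal buffers live on the GPU. A minimal sketch of the suspected workaround, keeping all three tensors on the same device (this uses the public nn.CosineEmbeddingLoss wrapper rather than the internal class above, and guards the device choice so it also runs without a GPU):

```python
import torch
import torch.nn as nn

# Pick a device; falls back to CPU when no GPU is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

input1 = torch.rand(5, 10, device=device)
input2 = torch.rand(5, 10, device=device)
# Move the target tensor too, not just the inputs.
y = torch.ones(input1.size(0), device=device)

loss_fn = nn.CosineEmbeddingLoss()
loss = loss_fn(input1, input2, y)  # scalar loss, no device mismatch
print(loss.item())
```

With y = 1 targets the loss is 1 - cos_sim averaged over the batch, so the printed value lands in [0, 2].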