Does CosineEmbeddingLoss support CUDA tensors? #2344

@sderygithub

Description

Noticed this as I tried to use the CosineEmbeddingLoss with a model copied to the GPU.

import torch
from torch.autograd import Variable
from torch.nn._functions.loss import CosineEmbeddingLoss

input1 = Variable(torch.rand(5,10))
input1 = input1.cuda()

input2 = Variable(torch.rand(5,10))
input2 = input2.cuda()

y = Variable(torch.FloatTensor([1.0] * input1.size()[0]))

loss = CosineEmbeddingLoss()
loss(input1, input2, y)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/_functions/loss.py", line 41, in forward
    torch.eq(y, -1, out=_idx)
TypeError: torch.eq received an invalid combination of arguments - got (torch.FloatTensor, int, out=torch.cuda.ByteTensor), but expected one of:
 * (torch.FloatTensor tensor, float value, *, torch.FloatTensor out)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor, int, out=torch.cuda.ByteTensor)
 * (torch.FloatTensor tensor, torch.FloatTensor other, *, torch.FloatTensor out)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor, int, out=torch.cuda.ByteTensor)
 * (torch.FloatTensor tensor, float value, *, torch.ByteTensor out)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor, int, out=torch.cuda.ByteTensor)
 * (torch.FloatTensor tensor, torch.FloatTensor other, *, torch.ByteTensor out)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor, int, out=torch.cuda.ByteTensor)

My default assumption is that I'm using cuda() wrong in some way. Thoughts?
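Looking at the traceback, the mismatch seems to be that input1 and input2 are moved to the GPU but the target y is not, so the loss's internal torch.eq compares a CPU torch.FloatTensor against a torch.cuda.ByteTensor. A minimal sketch of a possible workaround, assuming the target just needs to live on the same device as the inputs (written with the current torch.nn.CosineEmbeddingLoss module API rather than the internal torch.nn._functions.loss class, and falling back to CPU when no GPU is present):

```python
import torch

# Put everything on one device; the key change vs the snippet above is that
# the target tensor is created on the SAME device as the two inputs.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

input1 = torch.rand(5, 10, device=device)
input2 = torch.rand(5, 10, device=device)
y = torch.ones(input1.size(0), device=device)  # target on the same device

loss_fn = torch.nn.CosineEmbeddingLoss()
loss = loss_fn(input1, input2, y)  # no device-mismatch error
print(loss.item())
```

With target +1 for every pair, the loss is the mean of 1 - cos(input1, input2), so a scalar in [0, 2] is expected here.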
