tensor.new() does not initialize memory, so the returned tensor can contain NaNs. This can be unsafe in some cases.
For example, in torch/autograd/variable.py:
def mm(self, matrix):
    output = Variable(self.data.new(self.data.size(0), matrix.data.size(1)))
    return self._static_blas(Addmm, (output, 0, 1, self, matrix), False)

def bmm(self, batch):
    output = Variable(self.data.new(self.data.size(0), self.data.size(1),
                                    batch.data.size(2)))
    return self._static_blas(Baddbmm, (output, 0, 1, self, batch), False)

def mv(self, vector):
    output = Variable(self.data.new(self.data.size(0)))
    return self._static_blas(Addmv, (output, 0, 1, self, vector), False)

def ger(self, vector):
    output = Variable(self.data.new(self.data.size(0), vector.data.size(0)))
    return self._static_blas(Addr, (output, 0, 1, self, vector), False)
is dangerous because output can contain a NaN, and even though the coefficient on output is set to 0, nan * 0 = nan, so the result can still contain a NaN (I had an optim test failing because of a NaN originating this way).
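The NaN-propagation point above can be demonstrated without PyTorch at all, since it follows directly from IEEE 754 float semantics: multiplying a NaN by zero yields NaN, so a zero coefficient does not "mask" uninitialized garbage. A minimal sketch (the 0/1 coefficients and the 2.0 operand stand in for beta, alpha, and the blas result):

```python
import math

# Pretend this is a value read from uninitialized memory.
garbage = float("nan")

# Mimics beta * output + alpha * (blas result) with beta = 0, alpha = 1.
result = 0 * garbage + 1 * 2.0

# The NaN survives the multiplication by zero and poisons the sum.
print(math.isnan(result))  # True
```

This is why zeroing (or otherwise initializing) the freshly allocated output buffer matters, rather than relying on the beta = 0 scaling to discard its contents.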
I haven't done an exhaustive search, so there may be other places in the code that could have this issue.