The following code:
```python
import torch
import torch.nn as nn
from torch import autograd
from torch.autograd import Variable

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.main = nn.Sequential(
            nn.Conv2d(1, 1, 1, 1),
            nn.ReLU(),
            nn.Conv2d(1, 1, 1, 1),
        )

    def forward(self, v_x):
        return self.main(v_x).view(v_x.size(0), 1)

net = Network()
net.cuda()

v_in = Variable(torch.Tensor(2, 1, 1, 1).cuda(), requires_grad=True)
grad_out = Variable(torch.ones(2, 1, 1, 1).cuda())

gradient = autograd.grad(outputs=net(v_in), inputs=v_in,
                         grad_outputs=grad_out,
                         create_graph=True, retain_graph=True,
                         only_inputs=True)[0]
gradient.mean().backward()
```
produces this error:

```
RuntimeError: Expected a Tensor of type Variable[torch.cuda.FloatTensor] but found an undefined Tensor for argument #0 'grad_output'
```
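For comparison, here is a minimal CPU sketch of the same double-backward pattern written for current PyTorch semantics (where `Variable` has been merged into `Tensor`); note that `grad_outputs` is shaped to match the `(2, 1)` output via `torch.ones_like`, which differs from the `(2, 1, 1, 1)` `grad_out` in the snippet above:

```python
import torch
import torch.nn as nn

# Same conv -> ReLU -> conv network as in the report, on CPU.
net = nn.Sequential(
    nn.Conv2d(1, 1, 1, 1),
    nn.ReLU(),
    nn.Conv2d(1, 1, 1, 1),
)

x = torch.randn(2, 1, 1, 1, requires_grad=True)
out = net(x).view(x.size(0), 1)

# First-order gradient of the output w.r.t. the input, kept in the
# graph (create_graph=True) so it can itself be differentiated.
(gradient,) = torch.autograd.grad(
    outputs=out, inputs=x,
    grad_outputs=torch.ones_like(out),
    create_graph=True, retain_graph=True,
    only_inputs=True,
)

# Second backward pass through the gradient computation.
gradient.mean().backward()
```

This pattern (gradient-penalty style double backward) runs without error here, which suggests the failure above is specific to that build or platform rather than to the pattern itself.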
Also, removing the ReLU from the network:
```python
class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.main = nn.Sequential(
            nn.Conv2d(1, 1, 1, 1),
            nn.Conv2d(1, 1, 1, 1),
        )

    def forward(self, v_x):
        return self.main(v_x).view(v_x.size(0), 1)
```
causes the error to change to:

```
RuntimeError: ConvNdBackward: expected Variable at argument 0 (got None)
```
However, when the second convolution is removed as well:
```python
class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.main = nn.Sequential(
            nn.Conv2d(1, 1, 1, 1),
        )

    def forward(self, v_x):
        return self.main(v_x).view(v_x.size(0), 1)
```
then the code runs error-free.
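That the single-conv case succeeds is consistent with the network being linear in its input: the first-order gradient is just the (constant) conv weight, so the second backward pass has almost nothing to differentiate through. A quick CPU check of that claim:

```python
import torch
import torch.nn as nn

# A single 1x1 conv: output is linear in the input, so d(out)/d(in)
# equals the conv weight for every sample, independent of x.
conv = nn.Conv2d(1, 1, 1, 1)
x = torch.randn(2, 1, 1, 1, requires_grad=True)
out = conv(x).view(x.size(0), 1)

(gradient,) = torch.autograd.grad(
    outputs=out, inputs=x,
    grad_outputs=torch.ones_like(out),
    create_graph=True,
)

# Every per-sample gradient matches the scalar conv weight.
assert torch.allclose(gradient, conv.weight.expand_as(gradient))
```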
I'm on a ppc64le architecture with a P100 GPU, CUDA 8.0, and PyTorch version `0.4.0a0+6eca9e0`.