Resolve changes in tensor/variable API with PyTorch master #824
fritzo merged 5 commits into pyro-ppl:dev from
Conversation
PyTorch wheels are updated; running tests against the updated wheel now.
fritzo
left a comment
Thanks for doing this, Neeraj!
My comments are honest questions, and I'm fine if they result in no changes, only edifying answers 🙂 .
  # misleading, as it incorrectly suggests objects occlude one
  # another.
- clipped = np.clip(imgarr.data.cpu().numpy(), 0, 1)
+ clipped = np.clip(imgarr.detach().cpu().numpy(), 0, 1)
Is it safe to replace .detach().cpu().numpy() with .cpu().numpy() everywhere?
That's what I started with. If the node has requires_grad=True, it will throw an exception when we convert it to numpy without detaching.
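The exception mentioned here is easy to reproduce; a minimal sketch (assuming a recent PyTorch build, not code from this PR) is:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2.0  # y is part of the autograd graph

# Calling y.numpy() directly raises a RuntimeError because y requires grad.
try:
    y.numpy()
except RuntimeError:
    pass

# Detaching first severs the tensor from the graph, so conversion succeeds.
arr = y.detach().cpu().numpy()
print(arr.tolist())  # [2.0, 2.0, 2.0]
```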
  logits = Variable(logits)
  ix = dist.Categorical(logits=logits).sample()
- return traces[ix.data[0]]
+ return traces[ix]
Why not ix.item() here? (This may not be perfectly tested)
Because a scalar is a valid index value. We can change it to ix.item(), but it's not necessary.
This is great! It will be so much easier to write mixture models now.
Completely; scalars will simplify a lot of clunky-looking code.
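For illustration, a 0-dim integer tensor can index a Python list directly (it implements `__index__`), while `.item()` makes the conversion explicit. A small sketch with made-up data, not code from this PR:

```python
import torch

traces = ["trace_a", "trace_b", "trace_c"]
ix = torch.tensor(2)  # 0-dim integer tensor, e.g. a Categorical sample

# The scalar tensor can be used as a list index directly...
assert traces[ix] == "trace_c"
# ...or converted explicitly to a Python int first.
assert traces[ix.item()] == "trace_c"
```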
  (cost + cost.detach() * dist.score_parts(z)[1]).backward()
- mean_alpha_grad = alphas.grad.data.mean()
- mean_beta_grad = betas.grad.data.mean()
+ mean_alpha_grad = alphas.grad.data.mean().item()
Why not simply alphas.grad.mean().item()?
Yup, in this case we can just use that. Will change here and other patterns that I see.
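A minimal sketch of the simplified pattern (hypothetical tensors, not the actual test code): `.grad` is itself a tensor, so the intermediate `.data` is unnecessary before reducing.

```python
import torch

alphas = torch.ones(3, requires_grad=True)
(alphas * 2.0).sum().backward()

# .grad is already a tensor; reduce and convert in one chain.
mean_alpha_grad = alphas.grad.mean().item()
print(mean_alpha_grad)  # 2.0
```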
@neerajprad has #780 been merged into this branch? I believe that introduces some new helpers like
|
Update README.md to recommend installing PyTorch commit
force-pushed from 3b8243c to d353f22
Eeks... I always forget that. Thanks for the reminder!
Thanks for the heads up. Will take a look and update.
Ready to merge, unless there are any further pending comments.
@neerajprad I would like to know about the changes on this. As a summary, we will:
If points 3 and 4 are correct, then we should not change
Thanks for the summary, @fehiepsi.
Interesting, didn't know about this.
Is this new behavior, or planned for the future?
Still waiting on these changes in PyTorch master. We will use
@neerajprad That is my mistake. :( I will ask on PyTorch's Slack what the difference is between
@neerajprad As answered by Adam, they are very similar except that
This resolves a few minor issues so as to keep in sync with PyTorch master.

- `tensor[0]` returns a `Variable` scalar instead of a Python numeric type. We need to use `.item()` to convert a PyTorch scalar into a Python numeric type.
- `.data` returns an instance of type `Variable` and not `torch.Tensor`. Instead of using `.data.cpu().numpy()`, we are now using `.detach().cpu().numpy()`.

NOTE: Tests will fail until we update the PyTorch wheels for Travis. The core changes are really small, so I would suggest reviewing only once the tests pass.

Only partially resolves #815. We will need more renaming changes later.
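To illustrate the indexing change summarized above: `tensor[0]` is now itself a 0-dim tensor, and `.item()` recovers a Python number. A sketch assuming PyTorch >= 0.4, not code from this PR:

```python
import torch

t = torch.tensor([1.5, 2.5])

# Indexing now yields a 0-dim tensor rather than a Python float.
scalar = t[0]
assert isinstance(scalar, torch.Tensor)
assert scalar.dim() == 0

# .item() converts the 0-dim tensor to a Python numeric type.
assert scalar.item() == 1.5
assert isinstance(scalar.item(), float)
```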