Conversation
Needs a test!

torch/nn/functional.py

        log_input: :math:`log(input)`
        target: :math:`target \sim Pois(input)`
        """
    loss = torch.exp(log_input) - target * log_input
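The line under review computes the (non-"full") Poisson negative log-likelihood from a log-rate input. As a quick sanity check, the manual formula can be compared against the functional API this PR introduces; this is a minimal sketch, and the data shapes here are purely illustrative:

```python
import torch
import torch.nn.functional as F

# Illustrative data: log-rates, and targets drawn from the implied Poisson.
log_input = torch.randn(8)
target = torch.poisson(torch.exp(log_input))

# Manual formula from the snippet above, reduced to a mean.
manual = (torch.exp(log_input) - target * log_input).mean()

# Built-in functional form: log_input=True matches the formula above,
# full=False omits the Stirling approximation term.
builtin = F.poisson_nll_loss(log_input, target, log_input=True,
                             full=False, reduction="mean")

assert torch.allclose(manual, builtin)
```

With `log_input=True` the exponential is applied internally, which is numerically safer than exponentiating the raw rate and taking its log.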
@fmassa, PR is ready for review. But …

@apaszke, could you comment on what is written above?

@vlasenkov using an …

ping @fmassa
|
Can anyone merge this loss? Does it still need any fixes? |
|
I think this looks good, thanks! |
|
thanks @vlasenkov ! |
Mainly wanted to confirm torchrun works fine with dynamo/ddp, but it is also a better system than manually launching processes. Partially addresses issue #1779

New run commands
----------------

single process:

    python benchmarks/dynamo/distributed.py [args]

multi-gpu (e.g. 2 GPUs on one host):

    torchrun --nproc_per_node 2 benchmarks/dynamo/distributed.py [args]

Pull Request resolved: #89149
Approved by: https://github.com/aazzolini
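torchrun starts one worker per `--nproc_per_node` and passes rank and world-size information to each worker through environment variables, which is why it is a better system than launching processes by hand. A minimal sketch of reading those variables and initializing the process group follows; the single-process fallback values are assumptions for illustration, not part of the benchmark script:

```python
import os
import torch.distributed as dist

# torchrun exports RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT for each
# worker. The defaults below fake a single-process launch so the sketch
# also runs without torchrun.
os.environ.setdefault("RANK", "0")
os.environ.setdefault("WORLD_SIZE", "1")
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# init_method="env://" reads the variables above; gloo works on CPU-only hosts.
dist.init_process_group(backend="gloo", init_method="env://")
print(dist.get_rank(), dist.get_world_size())
dist.destroy_process_group()
```

Under `torchrun --nproc_per_node 2`, the same code would report ranks 0 and 1 with a world size of 2, with no launcher-specific branching in the script.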
Resolves #1774.