
adds poisson NLL loss#1779

Merged
soumith merged 14 commits into pytorch:master from vlasenkov:poisson-loss
Jun 27, 2017

Conversation

@vlasenkov
Contributor

Resolves #1774.

@ezyang
Contributor

ezyang commented Jun 13, 2017

Needs a test!

@vlasenkov
Contributor Author

@ezyang, @fmassa, some remarks?

log_input: :math:`\log(\text{input})`
target: :math:`target \sim \text{Pois}(input)`
"""
loss = torch.exp(log_input) - target * log_input
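As a plain-Python sketch (an illustration of the formula above, not the PR's actual tensor code), the per-element loss computes:

```python
import math

def poisson_nll(log_input, target):
    # Poisson negative log-likelihood for one element, taking the log
    # of the rate as input and omitting the constant log(target!) term:
    #     exp(log_input) - target * log_input
    return math.exp(log_input) - target * log_input

# With log_input = 0.0 (rate = 1) and target = 2 this gives
# exp(0) - 2 * 0 = 1.0.
```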


@vlasenkov
Contributor Author

vlasenkov commented Jun 18, 2017

@fmassa, the PR is ready for review.
Typical target values are small (e.g. 2 or 3), and when summing the loss over a large sample the approximation error accumulates, so I wanted to add an optional ability to compute the exact log(target!) for small target values. This could be implemented as:

mask = target <= 10
loss[mask] += target[mask].apply_(lambda x: math.log(math.factorial(x)))
loss[~mask] += ...  # Stirling approximation

But autograd.Variable has no apply_ member. This would not affect automatic differentiation, because we do not need to compute the derivative with respect to target. Are there any workarounds?
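The exact-vs-approximate trade-off being discussed can be sketched in plain Python (an illustration of the idea, not the PR's implementation):

```python
import math

def log_factorial_exact(n):
    # Exact log(n!) -- cheap to compute for small integer targets.
    return math.log(math.factorial(n))

def log_factorial_stirling(n):
    # Stirling's approximation: n*log(n) - n + 0.5*log(2*pi*n).
    # Accurate for large n, but noticeably off for small n.
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

# The gap shrinks as n grows, which is why the exact value matters
# mainly for small targets:
for n in (2, 10, 100):
    print(n, log_factorial_exact(n) - log_factorial_stirling(n))
```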

@vlasenkov
Contributor Author

@apaszke, could you comment on what is written above?

@fmassa
Member

fmassa commented Jun 19, 2017

@vlasenkov using an apply_ function would be extremely slow on the GPU, so I wouldn't advise it. I'll have a look at your latest changes.
On a quick look, I think it looks OK. But I wonder whether you hit problems with autograd in loss[mask] += ..., which can complain that an in-place operation isn't allowed there (I sometimes had to write x[1:10] = x[1:10] + y instead of x[1:10] += y for it to work).
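A minimal sketch of the out-of-place pattern described here (assuming a recent PyTorch; autograd's tolerance for in-place slice updates has varied across versions):

```python
import torch

# Out-of-place slice update: compute the new values first, then assign
# them, instead of accumulating into the slice with `+=`.
x = torch.ones(10, requires_grad=True)
loss = (x * 2.0).clone()      # intermediate tensor derived from x
y = torch.full((5,), 3.0)
loss[0:5] = loss[0:5] + y     # the x[1:10] = x[1:10] + y pattern
loss.sum().backward()
# Gradients still flow to every element of x: adding the constant y
# does not change d(loss_i)/d(x_i), which stays 2 everywhere.
```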

@vlasenkov
Contributor Author

loss[mask] += ... works without problems. I've tested on OS X (CPU) and Ubuntu (CPU & GPU).

@vlasenkov
Contributor Author

ping @fmassa

@soumith soumith added the ready label Jun 22, 2017
@vlasenkov
Contributor Author

Can anyone merge this PR? Does it still need any fixes?

@fmassa
Member

fmassa commented Jun 27, 2017

I think this looks good, thanks!
@vlasenkov could you add a link to the docs in https://github.com/pytorch/pytorch/blob/master/docs/source/nn.rst? If not, it won't show up on the website.

@soumith soumith merged commit ae61f3f into pytorch:master Jun 27, 2017
@soumith soumith removed the ready label Jun 27, 2017
@soumith
Collaborator

soumith commented Jun 27, 2017

Thanks @vlasenkov!

@vlasenkov vlasenkov deleted the poisson-loss branch July 16, 2017 13:30
@nailimixaM nailimixaM mentioned this pull request Nov 27, 2020
pytorchmergebot pushed a commit that referenced this pull request Nov 16, 2022
Mainly wanted to confirm torchrun works fine with dynamo/ddp,
but it is also a better system than manually launching processes.

Partially addresses issue #1779

New run commands
------------

single process:
python benchmarks/dynamo/distributed.py [args]

multi-gpu (e.g. 2 gpu on one host):
torchrun --nproc_per_node 2 benchmarks/dynamo/distributed.py [args]

Pull Request resolved: #89149
Approved by: https://github.com/aazzolini
kulinseth pushed a commit to kulinseth/pytorch that referenced this pull request Dec 10, 2022

