loss plugin: Fix indexing into a scalar #9143

Closed
rfejgin wants to merge 1 commit into pytorch:master from rfejgin:master

Conversation


@rfejgin rfejgin commented Jul 3, 2018

The loss plugin was using the old-style loss[0] access, which in PyTorch 0.4 and
later is an attempt to index into a scalar, generating a warning.
Replaced that with loss.item().

This fixes
#9142
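A minimal sketch of the change described above, assuming PyTorch 0.4 or later. The tensors here are illustrative, not taken from the plugin itself; the point is only the `loss[0]` → `loss.item()` pattern:

```python
import torch
import torch.nn.functional as F

# A typical scalar loss: on PyTorch 0.4+, this is a 0-dimensional tensor.
loss = F.mse_loss(torch.randn(3), torch.randn(3))

# Old style (pre-0.4), where losses were 1-element tensors:
#   value = loss[0]   # on 0.4+ this indexes into a scalar and warns/errors

# New style, as in this PR:
value = loss.item()   # extracts the Python float from the 0-dim tensor
```

`.item()` works for any single-element tensor and returns a plain Python number, which is the idiomatic way to log or accumulate loss values without holding onto the autograd graph.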

Collaborator

@ssnl ssnl left a comment

Thanks!

Contributor

@facebook-github-bot facebook-github-bot left a comment

@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018
Summary:
The loss plugin was using the old-style loss[0] access, which in PyTorch 0.4 and
later is an attempt to index into a scalar, generating a warning.
Replaced that with loss.item().

This fixes
pytorch#9142
Closes pytorch#9143

Differential Revision: D8726403

Pulled By: ezyang

fbshipit-source-id: 6c496b140a74d22c8423f511db901b18615fd6fa
