
If none of the schema match, add ImplicitTensorToNum conversions where needed. #10180

Closed
eellison wants to merge 6 commits into pytorch:master from eellison:insert_tensor_to_num

Conversation

@eellison
Contributor

@eellison eellison commented Aug 2, 2018

When matching schema, first try to match without adding TensorToNum conversions. Then make another pass where TensorToNum conversions are allowed.
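
The two-pass strategy described above can be sketched in plain Python. This is purely illustrative: the names (`match_schemas`, `try_match`) and the string-based type representation are hypothetical, not the actual C++ schema-matching API in the JIT.

```python
def try_match(schema, arg_types, allow_conversions):
    """Check whether arg_types satisfies schema (both lists of type names)."""
    if len(schema) != len(arg_types):
        return False
    for expected, actual in zip(schema, arg_types):
        if expected == actual:
            continue
        # Only in the second pass: allow an implicit Tensor -> number conversion.
        if allow_conversions and actual == "Tensor" and expected in ("int", "float"):
            continue
        return False
    return True


def match_schemas(schemas, arg_types):
    # Pass 1: exact matches only, so no conversion is inserted when a
    # schema already fits the arguments as written.
    for schema in schemas:
        if try_match(schema, arg_types, allow_conversions=False):
            return schema
    # Pass 2: retry, permitting ImplicitTensorToNum conversions.
    for schema in schemas:
        if try_match(schema, arg_types, allow_conversions=True):
            return schema
    raise RuntimeError("no schema matched the given argument types")
```

With this ordering, an overload taking `(Tensor, Tensor)` wins over `(Tensor, int)` when both arguments are tensors, and the conversion-based overload is only chosen when nothing matches exactly.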

@eellison eellison force-pushed the insert_tensor_to_num branch from 62dc1ba to f38910e on August 2, 2018 21:24
@zdevito zdevito added the "oncall: jit" label (Add this issue/PR to JIT oncall triage queue) on Aug 2, 2018
@eellison eellison force-pushed the insert_tensor_to_num branch from f38910e to c465c4c on August 3, 2018 00:21
Contributor

@zdevito zdevito left a comment

Structure looks good. A few small comments.

This comment was marked as off-topic.

@eellison eellison force-pushed the insert_tensor_to_num branch from ed5bb93 to ef06dfa on August 3, 2018 22:59
@apaszke
Contributor

apaszke commented Aug 6, 2018

Hmmm do we really want this patch? IIRC the rules for those casts in the Python frontend are quite complex at the moment. In particular, a tensor can be converted to a scalar argument if and only if it doesn't require grad, but we don't have this information statically (maybe except if we know that it's integral, but that's not the usual case in our frontend anyway).

@yf225
Contributor

yf225 commented Aug 14, 2018

@zdevito @apaszke What should be the next step for this PR?

@zdevito
Contributor

zdevito commented Aug 15, 2018

@apaszke I think we should add this (otherwise we require a bunch of extra casts). However, we should keep the dynamic behavior the same. That is, TensorToNum should check requires_grad and throw.

Contributor

@zdevito zdevito left a comment

Some comments inline. We should strive to match the behavior of the python frontend for all num<->tensor conversions. In the case of TensorToNum we should throw an exception at runtime if we attempt to convert a requires_grad tensor.
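
The runtime check being asked for here could look like the sketch below. It is a plain-Python illustration only: the real implementation is a JIT operator in C++, and `FakeTensor` and `implicit_tensor_to_num` are hypothetical names standing in for `torch.Tensor` and the actual op.

```python
from dataclasses import dataclass


@dataclass
class FakeTensor:
    """Minimal stand-in for torch.Tensor, just for this sketch."""
    value: float
    dim: int = 0
    requires_grad: bool = False


def implicit_tensor_to_num(t):
    # Mirror the Python frontend's rules: only a 0-dim tensor that does
    # not require grad may be implicitly converted to a Python number.
    if t.requires_grad:
        raise RuntimeError(
            "Cannot input a tensor that requires grad as a scalar argument")
    if t.dim != 0:
        raise RuntimeError("Only 0-dim tensors can be converted to a number")
    return t.value
```

Because `requires_grad` and the dimensionality are not statically known in general, the check has to happen at runtime rather than during schema matching.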

This comment was marked as off-topic.

@eellison eellison force-pushed the insert_tensor_to_num branch 2 times, most recently from 7e0554a to 0fa8d2f on August 18, 2018 00:15
Contributor

@zdevito zdevito left a comment

I think we should make a few changes here before merging:

  • We should only have 1 TensorToNum that always does checking.
  • Shape analysis should not throw exceptions

This comment was marked as off-topic.

@eellison eellison force-pushed the insert_tensor_to_num branch 3 times, most recently from 2ae764f to e7b68d2 on August 21, 2018 18:01
@eellison eellison changed the title from "If none of the schema match, add TensorToNum conversions where needed." to "If none of the schema match, add ImplicitTensorToNum conversions where needed." on Aug 21, 2018
Contributor

@zdevito zdevito left a comment

Looks good! Ship it.

Contributor

@facebook-github-bot facebook-github-bot left a comment

eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@eellison eellison force-pushed the insert_tensor_to_num branch from e7b68d2 to caeb1d9 on August 21, 2018 22:32
Contributor

@facebook-github-bot facebook-github-bot left a comment

eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@eellison eellison force-pushed the insert_tensor_to_num branch from caeb1d9 to c0e86b6 on August 22, 2018 18:20
Contributor

@facebook-github-bot facebook-github-bot left a comment

eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@eellison eellison force-pushed the insert_tensor_to_num branch from c0e86b6 to 85876cf on August 24, 2018 01:37
Elias Ellison added 5 commits August 24, 2018 09:16
…ed to ints/floats as arguments if they are 0 dim & no grad is set. A new operator has been added to implement this, since these conditions are not all known at compile time.
@eellison eellison force-pushed the insert_tensor_to_num branch from 85876cf to b9a33ed on August 24, 2018 16:36
Contributor

@facebook-github-bot facebook-github-bot left a comment

eellison is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
…e needed. (pytorch#10180)

Summary:
When matching schema, first try to match without adding TensorToNum conversions. Then make another pass where TensorToNum conversions are allowed.
Pull Request resolved: pytorch#10180

Differential Revision: D9438153

Pulled By: eellison

fbshipit-source-id: 80541b5abd06e9d4187e89dda751f44dab6f58c5
@ezyang ezyang added the merged label Jun 26, 2019

Labels

oncall: jit (Add this issue/PR to JIT oncall triage queue)


6 participants