OpInfo: Remove promotes_integers_to_float and infer it instead #50279
peterbell10 wants to merge 10 commits into gh/peterbell10/35/base
Conversation
This allows different sample inputs to have different behavior for the same operator. For example, `div(..., rounding_mode='true')` will promote but other rounding modes don't. The current boolean flag is too restrictive to allow this. [ghstack-poisoned]
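To make the motivation concrete, here is a minimal pure-Python sketch of the idea: infer the expected result type per sample input by actually running the operator, instead of consulting a single `promotes_integers_to_float` flag. All names here (`toy_div`, `infer_result_type`) are hypothetical stand-ins, not the real OpInfo API.

```python
# Hypothetical sketch only: `toy_div` stands in for torch.div, and
# `infer_result_type` stands in for the per-sample inference this PR adds.

def toy_div(a, b, rounding_mode=None):
    # True division promotes integer inputs to a floating-point result;
    # integral rounding modes keep an integer result.
    if rounding_mode is None:
        return a / b
    if rounding_mode == 'trunc':
        return int(a / b)
    raise ValueError(f"unknown rounding_mode: {rounding_mode!r}")

def infer_result_type(op, *args, **kwargs):
    # Run the op on the sample input and inspect what came back, so
    # different sample inputs can legitimately yield different result types.
    return type(op(*args, **kwargs))

print(infer_result_type(toy_div, 7, 2))                         # float
print(infer_result_type(toy_div, 7, 2, rounding_mode='trunc'))  # int
```

A single boolean flag on the OpInfo cannot express this: the same operator promotes for one sample input and not for another.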
💊 CI failures summary and remediations: As of commit 7a5be92 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚 (This comment was automatically generated by Dr. CI.)
Implementing dunder repr for SampleInputs and more metadata for whether the function implements safe casting sounds good. Do we really want to get rid of the metadata for whether the operator promotes integers to float, though? That seems useful. I understand that with the changes to torch.divide(...) we have a function that conditionally promotes integers to floating point. Is a better way to model this function that torch.divide(..., mode=X) is a different function from torch.divide(..., mode=Y), though? |
Note that either way, …
Cool; I'll take a look. Another option would be to provide sugars for common cases (like int to float promotion), but also allowing OpInfos to override with an arbitrary mapping of inputs -> expected result dtype. |
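That alternative could be sketched as follows: an OpInfo-like record accepts either a common-case sugar (such as int-to-float promotion) or an arbitrary callable from a sample input to the expected result type. Every name in this sketch (`ToyOpInfo`, `promote_ints_to_float`, `div_expected`, the sample-dict shape) is a hypothetical illustration, not the actual API.

```python
# Hypothetical sketch of the "sugar plus arbitrary override" design.

def promote_ints_to_float(sample):
    # Sugar for the common case: integer samples expect a float result.
    in_type = type(sample['args'][0])
    return float if in_type is int else in_type

class ToyOpInfo:
    def __init__(self, expected_type=promote_ints_to_float):
        # `expected_type` maps a sample input -> expected result type.
        self.expected_type = expected_type

def div_expected(sample):
    # Arbitrary override: promotion depends on the rounding mode.
    if sample['kwargs'].get('rounding_mode') is None:
        return float
    return int

op = ToyOpInfo(expected_type=div_expected)
print(op.expected_type({'args': (7, 2), 'kwargs': {}}))                          # float
print(op.expected_type({'args': (7, 2), 'kwargs': {'rounding_mode': 'trunc'}}))  # int
```

The sugar covers most ops with one default, while ops like `div` supply their own mapping.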
From our conversation and looking at the latest changes, I'm convinced. It is a bit of a pain that the op has to be run to determine the output dtype, but c'est la vie. I think, however, we can use `torch.can_cast`, and that should fix the failing builds. |
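A hedged sketch of that kind of check: rather than hard-coding whether an op promotes, a test can assert that the observed result kind is reachable from the inputs by a safe cast. The tiny casting table below is a toy stand-in of my own; the real promotion rules live in `torch.can_cast`.

```python
# Toy stand-in for torch.can_cast over two "dtype kinds".
# (Illustrative assumption; torch.can_cast implements the real rules.)
_SAFE_CASTS = {
    ('int', 'int'): True,
    ('int', 'float'): True,    # widening int -> float is allowed
    ('float', 'float'): True,
    ('float', 'int'): False,   # float -> int would lose information
}

def can_cast(from_kind, to_kind):
    return _SAFE_CASTS[(from_kind, to_kind)]

def check_result_kind(input_kind, result_kind):
    # A test can assert the op's observed result kind is reachable
    # from its input kind via a safe cast, without knowing in advance
    # whether this particular sample input promotes.
    return can_cast(input_kind, result_kind)

print(check_result_kind('int', 'float'))  # True: int inputs may promote
print(check_result_kind('float', 'int'))  # False: would be an unsafe cast
```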
@mruberry PTAL. |
Hey @peterbell10, unfortunately the internal tools can't figure out how to merge this. I know these rebases are out of control, but would you mind rebasing so I can reimport it? |
Differential Revision: [D25950011](https://our.internmc.facebook.com/intern/diff/D25950011)
@mruberry rebased. |
OpInfo: Remove promotes_integers_to_float and infer it instead (pytorch#50279)

Summary: Pull Request resolved: pytorch#50279

This allows different sample inputs to have different behavior for the same operator. For example, `div(..., rounding_mode='true')` will promote but other rounding modes don't. The current boolean flag is too restrictive to allow this.

Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D25950011
Pulled By: mruberry
fbshipit-source-id: 7e82b82bedc626b2b6970d92d5b25676183ec384
Stack from ghstack:
This allows different sample inputs to have different behavior for the same operator. For example, `div(..., rounding_mode='true')` will promote but other rounding modes don't. The current boolean flag is too restrictive to allow this.
Differential Revision: D25950011