functionalization: add native fill() op #76084
bdhirsh wants to merge 9 commits into gh/bdhirsh/210/base from
Conversation
💊 CI failures summary and remediations — as of commit 1788d58 (more details on the Dr. CI page):
🕵️ 1 new failure recognized by patterns. The following CI failures do not appear to be due to upstream breakages.
```python
'zero',        # only used by the functionalization pass, doesn't need to be exposed to python
'copy',        # only used by the functionalization pass
'fill.Tensor', # only used by the functionalization pass
'fill.Scalar', # only used by the functionalization pass
```
This info should probably get moved into native_functions.yaml at some point
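For reference, ops like these are declared in `native_functions.yaml`; the sketch below is illustrative of that file's schema rather than the exact entries from this PR (the `dispatch` keys shown are an assumption):

```yaml
# Hypothetical native_functions.yaml entries for the out-of-place fill ops.
- func: fill.Scalar(Tensor self, Scalar value) -> Tensor
  variants: function
  dispatch:
    CompositeExplicitAutograd: fill

- func: fill.Tensor(Tensor self, Tensor value) -> Tensor
  variants: function
  dispatch:
    CompositeExplicitAutograd: fill
```

Moving the "not exposed to python" information here would keep the op's metadata in one place instead of a separate codegen skip list.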
Addresses the `fill_` issue in pytorch/torchdynamo#88. This adds out-of-place `fill.Tensor` and `fill.Scalar` ops so that `fill_()` can be properly functionalized. I ended up giving `fill` a derivative formula, because I think we want to consider it a "base op" for tracing purposes: the decomposition I wrote for it just calls back into `fill_()`, so we don't want to run that decomposition as part of tracing.
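As a toy illustration (plain Python with hypothetical names, not PyTorch internals), functionalization replaces an in-place mutation like `fill_()` with a call to the out-of-place op plus a rebinding of the mutated value — which is why the out-of-place `fill` variant needs to exist at all:

```python
# Toy model of the functionalization rewrite (hypothetical names,
# not PyTorch's actual API).

def fill(tensor, value):
    """Out-of-place fill: returns a fresh 'tensor'; the input is untouched."""
    return [value] * len(tensor)

def functionalized_fill_(env, name, value):
    """What a functionalization pass conceptually emits for
    `env[name].fill_(value)`: call the out-of-place op and rebind the
    name, instead of mutating the value in place."""
    env[name] = fill(env[name], value)

env = {"t": [0, 0, 0]}
functionalized_fill_(env, "t", 1)
print(env["t"])  # [1, 1, 1]
```

The trace then contains only functional (side-effect-free) ops, which is what downstream graph transforms expect.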
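On the derivative formula: since every element of `fill`'s output equals `value` and none of them reads `self`'s data, the backward computation reduces to summing the incoming gradient for `value` and zeros for `self`. A minimal sketch (plain Python, hypothetical helper name, not the actual derivatives.yaml entry):

```python
def fill_backward(grad_out):
    """Hypothetical backward for out-of-place fill(self, value):
    d out[i] / d value = 1 for every i, so grad_value = sum(grad_out);
    the output never depends on self's data, so grad_self is zero."""
    grad_self = [0.0] * len(grad_out)
    grad_value = sum(grad_out)
    return grad_self, grad_value

gs, gv = fill_backward([0.5, 0.5, 0.5])
print(gs, gv)  # [0.0, 0.0, 0.0] 1.5
```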
@pytorchbot merge this please
Merge failed due to a failed command. Raised by https://github.com/pytorch/pytorch/actions/runs/2222996651
Summary: Pull Request resolved: #76084
Approved by: https://github.com/ezyang
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/ea5209c9fd78a34849600875d0f3f5b994201833
Reviewed By: osalpekar
Differential Revision: D35938183
Pulled By: bdhirsh
fbshipit-source-id: 73c318c03e49671cdaee997807e2dba2a0c57198