This repository was archived by the owner on Aug 21, 2025. It is now read-only.

functionalize(): make "additionally removing views" toggleable#678

Merged
bdhirsh merged 6 commits into main from functionalize_AddBackViews
Apr 19, 2022
Conversation

@bdhirsh
Contributor

@bdhirsh bdhirsh commented Apr 8, 2022

This PR goes with the core-side change in pytorch/pytorch#75302, which updates the functionalization pass to be able to turn view ops into view_copy ops. The mobile team mentioned that they would like to be able to use the functionalize() transform, with view ops removed, to trace models for running on mobile (cc @ZolotukhinM)

This PR updates functionalize() to be toggleable: functionalize(remove='mutations') (the default) removes mutations but preserves views, while functionalize(remove='mutations_and_views') removes mutations and additionally converts view operators into their corresponding view_copy operators.
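To make the two removal levels concrete, here is a hypothetical sketch (not the real functorch implementation) that rewrites a toy list of op names the way functionalize() conceptually rewrites a trace; the op names and the `functionalize_trace` helper are invented for illustration:

```python
def functionalize_trace(ops, remove="mutations"):
    """Toy model of functionalize()'s two removal levels.

    remove='mutations'           -> in-place ops become out-of-place; views are kept.
    remove='mutations_and_views' -> additionally, view ops become view_copy ops.
    """
    VIEW_OPS = {"view", "transpose", "diagonal"}
    out = []
    for op in ops:
        if op.endswith("_"):       # in-place op, e.g. add_
            op = op[:-1]           # -> functional variant, e.g. add
        if remove == "mutations_and_views" and op in VIEW_OPS:
            op += "_copy"          # e.g. view -> view_copy
        out.append(op)
    return out

print(functionalize_trace(["view", "add_"]))                               # ['view', 'add']
print(functionalize_trace(["view", "add_"], remove="mutations_and_views"))  # ['view_copy', 'add']
```

The default preserves views (useful when the consumer understands aliasing), while 'mutations_and_views' produces a fully copy-based trace, which is what the mobile use case wants.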

Some extra stuff in the PR:

```cpp
}
);
}
return tensor;
```
Contributor Author


Apparently you need to compile with the -Werror=return-type flag, or else you get silent UB if you forget to return from a non-void function (aka this bug) :(

Contributor


:( — we used to have -Werror turned on for everything, but it turns out PyTorch doesn't, and PyTorch changes would introduce warnings that broke our build

bdhirsh added a commit to pytorch/pytorch that referenced this pull request Apr 13, 2022
… view_copy operators"

This PR splits the functionalization codegen into 2 pieces:

(1) Vanilla functionalization will now always turn view ops into "view_copy" ops.

(2) For functorch to "reapply views underneath the pass", I added a new dispatch key, "FunctionalizeAddBackViews". I codegen a kernel to that key for every view_copy operator that just calls back into the view op. All other ops get a fallthrough kernel.

Also - the codegen will now unconditionally register CompositeImplicitAutograd kernels directly to the functionalization keys, so we "always decompose" before hitting functionalization. Otherwise, we might break and accidentally send "view" calls to the backend, if we decompose an op into a view underneath the functionalization pass.

The important changes are in `gen.py` and `gen_functionalization_type.py` - most of the other changes are just plumbing `{view}_copy` everywhere. I also updated `test_functionalization.py`, and added expecttests for the "add back views" case.

One thing about the `AddBackViews` key - right now, I add it into the TLS include set. The other option would be to try to add it directly to the tensors, but that's kind of hard: putting it on the `FunctionalTensorWrapper` doesn't help, because the functionalization pass will unwrap when it calls back into the dispatcher, and run on the "inner tensor" (maybe we could modify the inner tensor's keyset and add the `AddBackViews` key when functionalization happens, instead?)

I also have an accompanying functorch change here: pytorch/functorch#678


Differential Revision: [D35419652](https://our.internmc.facebook.com/intern/diff/D35419652)

[ghstack-poisoned]
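The dispatch mechanism described in the commit message above can be sketched with a toy dispatcher (all names and structures here are invented for illustration; the PR later swapped the dedicated dispatch key for a plain TLS flag, but the shape of the dispatch is the same): a TLS "include set" layers FunctionalizeAddBackViews on top of each tensor's key set, only view_copy ops register a real kernel at that key, and every other op gets a fallthrough that passes dispatch to the next key.

```python
FALLTHROUGH = object()

# Highest-priority-first key order for this toy dispatcher.
KEY_ORDER = ["FunctionalizeAddBackViews", "Functionalize", "CPU"]

tls_include = set()  # stands in for the thread-local include set

kernels = {
    # view_copy's AddBackViews kernel calls back into the view op.
    ("view_copy", "FunctionalizeAddBackViews"): lambda: "view",
    ("view_copy", "CPU"): lambda: "view_copy",
    # Non-view ops fall through past the AddBackViews key.
    ("add", "FunctionalizeAddBackViews"): FALLTHROUGH,
    ("add", "CPU"): lambda: "add",
}

def dispatch(op, tensor_keys):
    active = set(tensor_keys) | tls_include
    for key in KEY_ORDER:
        if key not in active:
            continue
        kernel = kernels.get((op, key), FALLTHROUGH)
        if kernel is FALLTHROUGH:
            continue  # fallthrough: try the next-highest key
        return kernel()
    raise RuntimeError(f"no kernel for {op}")

print(dispatch("view_copy", {"CPU"}))   # view_copy (TLS key not set)
tls_include.add("FunctionalizeAddBackViews")
print(dispatch("view_copy", {"CPU"}))   # view (the view is re-applied)
print(dispatch("add", {"CPU"}))         # add (falls through the new key)
```

This also shows why putting the key in TLS rather than on the tensor sidesteps the unwrapping problem: the include set applies to every dispatch in the thread, including the re-dispatch on the unwrapped "inner tensor".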
bdhirsh added three more commits to pytorch/pytorch referencing this pull request on Apr 13 and Apr 14, 2022 (same commit message as above)
@bdhirsh bdhirsh requested a review from zou3519 April 14, 2022 19:16
bdhirsh added four more commits to pytorch/pytorch referencing this pull request on Apr 14, 2022 (same commit message as above)
Contributor

@zou3519 zou3519 left a comment


Code LGTM.

This needs a rebase, and I want to bikeshed the API and defaults a little more (maybe there shouldn't be a default yet)

@bdhirsh bdhirsh force-pushed the functionalize_AddBackViews branch from 299da47 to d505afc on April 15, 2022 19:58
bdhirsh added four more commits to pytorch/pytorch referencing this pull request on Apr 15, 2022 (same commit message as above)
@bdhirsh
Contributor Author

bdhirsh commented Apr 15, 2022

Pushed some more changes. Updates:

(1) API bikeshed: I took your suggestion of functionalize(remove='mutations'|'mutations_and_views'), which gives a nice error message for anything else (the default stays 'mutations', matching functorch's existing behavior)

(2) I ended up killing the FunctionalizeAddBackViews dispatch key, and used an extra piece of TLS instead. This saves us a dispatch key.

CI will fail on this PR, though, until my stack from pytorch/pytorch#75913 lands
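The keyword argument with a clear error message described in point (1) could look roughly like this (a minimal sketch with invented internals, not functorch's real implementation; `_VALID` and the wrapper body are stand-ins):

```python
_VALID = ("mutations", "mutations_and_views")

def functionalize(func, *, remove="mutations"):
    """Toy sketch of the functionalize() argument handling discussed above."""
    if remove not in _VALID:
        raise ValueError(
            f"functionalize(f, remove='mutations'): got remove={remove!r}, "
            f"expected one of {_VALID}"
        )
    reapply_views = (remove == "mutations")
    def wrapped(*args, **kwargs):
        # The real transform would run func under the functionalization pass,
        # re-applying views when reapply_views is True; here we just call func.
        return func(*args, **kwargs)
    return wrapped
```

A string-valued keyword (rather than a boolean like remove_views=True) leaves room to add more removal levels later without another API change.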

bdhirsh added two more commits to pytorch/pytorch referencing this pull request on Apr 17, 2022 (same commit message as above)
@bdhirsh bdhirsh changed the title from "functionalize() move AddBackViews logic to a separate key" to "functionalize(): make "additionally removing views" toggleable" on Apr 17, 2022
Contributor

@zou3519 zou3519 left a comment


LGTM. You probably want to update the PR body (it still mentions FunctionalizeAddBackViews). Also, it would be good to wait until CI turns green

@bdhirsh bdhirsh force-pushed the functionalize_AddBackViews branch from d505afc to 2660ec9 on April 18, 2022 20:19
@bdhirsh bdhirsh force-pushed the functionalize_AddBackViews branch from 2660ec9 to fb4ea78 on April 18, 2022 22:23
@zou3519
Contributor

zou3519 commented Apr 18, 2022

@bdhirsh btw, functorch CI runs off of the PyTorch nightly binary, so we have two options:

  1. Wait until your pytorch-side change makes it to PyTorch nightlies (hopefully this will happen tonight)
  2. Yolo merge this and temporarily break functorch CI

@bdhirsh
Copy link
Contributor Author

bdhirsh commented Apr 18, 2022

Lol you probably saw me blindly kicking off CI again hoping something would happen (although at least python test/test_eager_transforms.py passes for me locally).

I'm ok with either, depending on how urgently @ZolotukhinM would like this change for mobile

@bdhirsh bdhirsh force-pushed the functionalize_AddBackViews branch from fb4ea78 to 4d72d1b on April 19, 2022 12:54
@bdhirsh
Contributor Author

bdhirsh commented Apr 19, 2022

CI is red, but I think I'm seeing the same set of test failures on main: https://app.circleci.com/pipelines/github/pytorch/functorch/2432/workflows/4657bb2a-cba6-4ce4-b409-e72e5229e3c4/jobs/14516/tests

@zou3519
Contributor

zou3519 commented Apr 19, 2022

@bdhirsh feel free to merge, the errors look pre-existing and we'll figure them out as we go along today

@bdhirsh bdhirsh merged commit d041937 into main Apr 19, 2022
zou3519 pushed a commit to zou3519/pytorch that referenced this pull request Jul 20, 2022
…eable (pytorch/functorch#678)

* functionalize() move AddBackViews logic to a separate key

* make functionalize() toggleable when adding back views

* fix unnecessary view reapply, add tests for out=

* fix

* change functionalize() API, also use the new internal TLS

* rebase and fix tests
bigfootjon pushed a commit to pytorch/pytorch that referenced this pull request Jul 21, 2022
…eable (pytorch/functorch#678) (same commit list as above)