[ExecuTorch] Preserve undelegated Linear ops in Llama demo export #5244
swolchok wants to merge 6 commits into gh/swolchok/53/base
Conversation
Allows us to use optimized op_linear from the previous diff. Differential Revision: [D62262532](https://our.internmc.facebook.com/intern/diff/D62262532/)
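For context: "preserving" the undelegated Linear ops means keeping `aten.linear` out of the export-time decomposition table, so it is not rewritten into `permute` + `addmm` before a fused kernel can match it. Below is a minimal sketch of that idea, assuming a plain `torch.export` flow; the PR's actual change lives in the ExecuTorch Llama export code and may differ in detail (`TinyModel` is a stand-in for the Llama model):

```python
# Sketch: keep aten.linear intact through export so a fused linear kernel
# can match it, instead of the decomposed permute + addmm sequence.
import torch
from torch._decomp import core_aten_decompositions


class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(16, 4)

    def forward(self, x):
        return self.proj(x)


ep = torch.export.export(TinyModel(), (torch.randn(2, 16),))

# Drop the linear decomposition so the op survives as aten.linear.default.
decomp_table = core_aten_decompositions()
decomp_table.pop(torch.ops.aten.linear.default, None)
ep = ep.run_decompositions(decomp_table)

print(ep.graph_module.code)  # graph should still contain aten.linear.default
```

Any `linear` node that is left undelegated this way stays eligible for the optimized op_linear kernel at runtime.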
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/5244
Note: Links to docs will display an error until the docs builds have been completed. ✅ No Failures as of commit faccc34 with merge base 3171ede. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D62262532
… export" Allows us to use optimized op_linear from the previous diff. Differential Revision: [D62262532](https://our.internmc.facebook.com/intern/diff/D62262532/) [ghstack-poisoned]
Pull Request resolved: #5244 Allows us to use optimized op_linear from the previous diff. ghstack-source-id: 242001322 @exported-using-ghexport Differential Revision: [D62262532](https://our.internmc.facebook.com/intern/diff/D62262532/)
|
This pull request was exported from Phabricator. Differential Revision: D62262532 |
… export" Allows us to use optimized op_linear from the previous diff. Differential Revision: [D62262532](https://our.internmc.facebook.com/intern/diff/D62262532/) [ghstack-poisoned]
Pull Request resolved: #5244 Allows us to use optimized op_linear from the previous diff. ghstack-source-id: 242004125 @exported-using-ghexport Differential Revision: [D62262532](https://our.internmc.facebook.com/intern/diff/D62262532/)
|
This pull request was exported from Phabricator. Differential Revision: D62262532 |
… export" Allows us to use optimized op_linear from the previous diff. Differential Revision: [D62262532](https://our.internmc.facebook.com/intern/diff/D62262532/) [ghstack-poisoned]
Pull Request resolved: #5244 Allows us to use optimized op_linear from the previous diff. ghstack-source-id: 242011022 @exported-using-ghexport Differential Revision: [D62262532](https://our.internmc.facebook.com/intern/diff/D62262532/)
|
This pull request was exported from Phabricator. Differential Revision: D62262532 |
… export" Allows us to use optimized op_linear from the previous diff. Differential Revision: [D62262532](https://our.internmc.facebook.com/intern/diff/D62262532/) [ghstack-poisoned]
Pull Request resolved: #5244 Allows us to use optimized op_linear from the previous diff. ghstack-source-id: 242013335 @exported-using-ghexport Differential Revision: [D62262532](https://our.internmc.facebook.com/intern/diff/D62262532/)
|
This pull request was exported from Phabricator. Differential Revision: D62262532 |
… export" Allows us to use optimized op_linear from the previous diff. Differential Revision: [D62262532](https://our.internmc.facebook.com/intern/diff/D62262532/) [ghstack-poisoned]
|
This pull request was exported from Phabricator. Differential Revision: D62262532 |
Pull Request resolved: #5244. Allows us to use optimized op_linear from the previous diff. ghstack-source-id: 242061030. Differential Revision: [D62262532](https://our.internmc.facebook.com/intern/diff/D62262532/)
This pull request has been merged in 12a25c6.
Summary: pytorch#5244 probably broke this, because it makes the optimized kernels a requirement for running Llama without XNNPACK. Test Plan: `bash .ci/scripts/test_model.sh llama2 cmake portable` was broken and now succeeds.
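The failure mode above suggests the undecomposed `aten.linear` has to be gated on the optimized kernel library actually being linked: with only the portable kernels, the preserved op has no registered implementation and the model fails to run. A hypothetical sketch of such gating (`use_optimized_kernels` is an illustrative flag, not the repo's actual option name):

```python
import torch
from torch._decomp import core_aten_decompositions


def build_decomp_table(use_optimized_kernels: bool):
    """Hypothetical helper: keep aten.linear intact only when the runtime
    build links the optimized kernel library. The portable kernels alone
    provide no aten.linear implementation, so the op must be decomposed
    (into permute + addmm) for a portable-only build to run."""
    table = core_aten_decompositions()
    if use_optimized_kernels:
        # Removing the entry preserves aten.linear.default in the graph.
        table.pop(torch.ops.aten.linear.default, None)
    return table
```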
Stack from ghstack (oldest at bottom):
Allows us to use optimized op_linear from the previous diff.
Differential Revision: D62262532