[MTIA-T][CFF] Pass backend parameter into GPU vertical pass file and pattern matcher #160404
hpnhxxwn wants to merge 1 commit into pytorch:main
Conversation
This appears to be a diff that was exported from Phabricator, but the PR author does not have sufficient permissions to run CI. @hpnhxxwn, please follow step 2 of the internal wiki to get write access so you do not need CI approvals in the future. If you think this is a mistake, please contact the PyTorch Dev Infra team.
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/160404
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 2681dc8 with merge base 87e6c40.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D80069072
/easycla
LGTM! Please fix lint by running `lintrunner -a`.
924e764 to 0c8e5ec
0c8e5ec to a325ace
a325ace to 2c59039
ae6f3f9 to 4cbce9f
4cbce9f to bcfe32d
03fac1b to 1b0e75b
144511c to 055b5c1
055b5c1 to cb5c211
@pytorchbot merge
Merge failed. Reason: This PR needs a label. To add a label, you can comment to pytorchbot. Details for Dev Infra team: raised by workflow job.
cb5c211 to b1262e6
b1262e6 to 25bf2c2
25bf2c2 to 2681dc8
@pytorchbot label "topic: not user facing"
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…pattern matcher (#160404)
Pull Request resolved: #160404
Approved by: https://github.com/BoyuanFeng
Summary:
As titled.
Please see https://fb.workplace.com/groups/1075192433118967/posts/1735215827116621/?comment_id=1735220747116129&reply_comment_id=1735242997113904
Basically, for MTIA, we want mtia_afg to show up in the counters and backend instead of Inductor, since MTIA is not using Inductor yet. The env var TORCHINDUCTOR_PATTERN_MATCH_BACKEND passes in the actual backend; its default value is "inductor", so nothing should break for GPU.
Test Plan:
The default is always "inductor", so existing tests should not break.
CI tests.
Rollback Plan:
Differential Revision: D80069072
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben
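The env-var mechanism the description relies on can be sketched as below. This is an illustrative minimal sketch, not the actual PyTorch implementation; the helper name `get_pattern_match_backend` is hypothetical, though the variable name `TORCHINDUCTOR_PATTERN_MATCH_BACKEND` and the `"inductor"` default come from the PR description.

```python
import os

def get_pattern_match_backend() -> str:
    # Read the pattern-matcher backend name from the environment.
    # Defaulting to "inductor" keeps existing GPU behavior unchanged
    # when the variable is unset.
    return os.environ.get("TORCHINDUCTOR_PATTERN_MATCH_BACKEND", "inductor")
```

An MTIA run would set `TORCHINDUCTOR_PATTERN_MATCH_BACKEND=mtia_afg` so that "mtia_afg", rather than "inductor", is reported in the counters and backend field.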