[NVFuser] Upstream push 0811#83239
Conversation
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Code changes include:

- codegen improvements:
  1. double support in expression evaluator
- bug fixes:
  1. dropout fix: reworked RNG to support broadcasted dropout (Fixes #82784)
  2. expand fix: patched expand+reduction and expand+view, reworked view analysis and guards
- scheduler:
  1. manual transpose schedule example
  2. WIP transpose scheduler

Commits in this PR from the devel branch:

```
b7435af Transpose scheduler, step 1 (#1854)
8a45dbf Add an example on how to manually schedule transpose (#1889)
83dbf56 Patch dropout fix (#1898)
69d3519 Expand+Reduction, Expand+View support, rework View analysis and guards (#1883)
15091c4 Rework RNG to correctly support broadcasted dropout (#1888)
aafe2d0 Make ExpressionEvaluator support Double (#1885)
```

RUN_TORCHBENCH: nvfuser

[ghstack-poisoned]
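The broadcasted-dropout RNG issue (#82784) can be pictured outside of nvfuser. The NumPy sketch below is purely illustrative (it is not the nvfuser code, and the names are made up): it contrasts drawing a dropout mask at the *input* shape and broadcasting it, which makes every broadcast copy share the same random decisions, against drawing the mask at the *output* shape so each output element gets an independent decision. Scaling by `1/(1-p)` is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single row that will be broadcast across 3 rows.
x = np.ones((1, 4))
out_shape = (3, 4)
p = 0.5  # dropout probability

# Buggy pattern: draw the keep-mask at the *input* shape, then broadcast.
# All broadcast copies share the same random decisions.
bad_mask = rng.random(x.shape) > p
bad = np.broadcast_to(x, out_shape) * np.broadcast_to(bad_mask, out_shape)

# Correct pattern: draw the keep-mask at the *output* shape, so every
# output element gets its own independent random decision.
good_mask = rng.random(out_shape) > p
good = np.broadcast_to(x, out_shape) * good_mask

# With the buggy mask, every row is identical.
assert all((bad[i] == bad[0]).all() for i in range(3))
```

With the buggy pattern the dropout mask is perfectly correlated across the broadcast dimension, which silently changes the statistics of the op; the fix in the PR reworks the RNG so randomness is generated per output element.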
❌ 1 New Failure, 8 Pending as of commit 0e04196 (more details on the Dr. CI page).
🕵️ 1 new failure recognized by patterns. The following CI failures do not appear to be due to upstream breakages.
@jjsjann123 I see a few build errors, can you take a look?
Already on them~ Will update you once I've got them patched.
Hmm, the error here doesn't seem to be caused by our update: https://github.com/pytorch/pytorch/runs/7796124160?check_suite_focus=true I'll try my luck with a rebase...
I don't see any obvious fix in the viable/strict history. Still rebased 🤞
Is this the failure?
Ha, file extension... great catch 🙇
Errr... there's actually a kernel in that file, and we do want nvcc. We need to change the build; I'm patching it now.
Seeing an OOM in #83239; this should help determine whether the issue is with the infra or with the test. RUN_TORCHBENCH: nvfuser [ghstack-poisoned]
@davidberard98 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
@pytorchbot successfully started a merge job. Check the current status here.
Hey @jjsjann123.
Summary: Pull Request resolved: #83239

Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Code changes include:

- codegen improvements:
  1. double support in expression evaluator
- bug fixes:
  1. dropout fix: reworked RNG to support broadcasted dropout (Fixes #82784)
  2. expand fix: patched expand+reduction and expand+view, reworked view analysis and guards
- scheduler:
  1. manual transpose schedule example
  2. WIP transpose scheduler

Commits in this PR from the devel branch:

```
b7435af Transpose scheduler, step 1 (#1854)
8a45dbf Add an example on how to manually schedule transpose (#1889)
83dbf56 Patch dropout fix (#1898)
69d3519 Expand+Reduction, Expand+View support, rework View analysis and guards (#1883)
15091c4 Rework RNG to correctly support broadcasted dropout (#1888)
aafe2d0 Make ExpressionEvaluator support Double (#1885)
```

RUN_TORCHBENCH: nvfuser

Test Plan: Imported from OSS
Reviewed By: qihqi
Differential Revision: D38657074
Pulled By: davidberard98
fbshipit-source-id: b306eecb7df8e24c06b055fc9e1b11b8dcd1a0ea
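The "Make ExpressionEvaluator support Double" item can be pictured with a toy evaluator. This is an illustrative Python sketch with made-up names, not the nvfuser C++ `ExpressionEvaluator`: the point is simply that the evaluator's value domain admits double-precision constants alongside integers, so mixed expressions evaluate without truncation.

```python
from dataclasses import dataclass
from typing import Union

# With "Double" support, floats are legal scalar values, not just ints.
Scalar = Union[int, float]

@dataclass
class Const:
    value: Scalar

@dataclass
class BinOp:
    op: str
    lhs: "Expr"
    rhs: "Expr"

Expr = Union[Const, BinOp]

def evaluate(e: Expr) -> Scalar:
    """Recursively evaluate; int and double (float) values are both supported."""
    if isinstance(e, Const):
        return e.value
    l, r = evaluate(e.lhs), evaluate(e.rhs)
    if e.op == "+":
        return l + r
    if e.op == "*":
        return l * r
    raise ValueError(f"unknown op: {e.op}")

# Mixed int/double expression: 2 * (1.5 + 3) == 9.0
expr = BinOp("*", Const(2), BinOp("+", Const(1.5), Const(3)))
assert evaluate(expr) == 9.0
```

An evaluator whose value type is integer-only would have to reject or truncate the `1.5` constant; widening the value domain is what the listed commit does for nvfuser's evaluator.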
cherry-pick upstream CI fixes from pytorch#83067 & pytorch#83239
Summary: Seeing an OOM in #83239; this should help determine whether the issue is with the infra or with the test.

RUN_TORCHBENCH: nvfuser

Pull Request resolved: #83857
Approved by: https://github.com/xuzhao9

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/71d99662a0d7f8a9ad68999c9a014b71591cbb68
Reviewed By: mehtanirav
Differential Revision: D39172015
Pulled By: davidberard98
fbshipit-source-id: 208f7d8bf00937a459bb5abd5baf9461660d19c3
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Code changes include:

- codegen improvements:
  1. double support in expression evaluator
- bug fixes:
  1. dropout fix: reworked RNG to support broadcasted dropout (Fixes #82784)
  2. expand fix: patched expand+reduction and expand+view, reworked view analysis and guards
- scheduler:
  1. manual transpose schedule example
  2. WIP transpose scheduler

Commits in this PR from the devel branch:

```
b7435afcd22c917713c2f41a7237bc26e1183f14 Transpose scheduler, step 1 (#1854)
8a45dbf72034684eb8e18b1835b533e90b68f184 Add an example on how to manually schedule transpose (#1889)
83dbf56a9554b2efbd5416461d938fff477b0b27 Patch dropout fix (#1898)
69d3519a532250719b1aa8341b50e067b181b42d Expand+Reduction, Expand+View support, rework View analysis and guards (#1883)
15091c488e96343bdc49e3990acbf238a3b3da51 Rework RNG to correctly support broadcasted dropout (#1888)
aafe2d048aaac596e503596a41303423619f3954 Make ExpressionEvaluator support Double (#1885)
```

RUN_TORCHBENCH: nvfuser

Differential Revision: [D38657074](https://our.internmc.facebook.com/intern/diff/D38657074)
Pull Request resolved: pytorch/pytorch#83239
Approved by: https://github.com/davidberard98