Micro-optimisations for matmul 2.0: Electric boogaloo#75197
Closed
lezcano wants to merge 28 commits into gh/Lezcano/59/base from
Conversation
This PR implements the bulk of #64387. Part of the optimisations were already merged in #72230.

These optimisations include:

- Make the code `const` correct.
- Create `DimVector`s more efficiently (e.g. prefer `append` over `insert`).
- Access the sizes of the tensors via `sizes().front()` / `sizes().back()` / `sizes().end()[-2]`.
- Do not create intermediate tensors / vectors when it can be avoided.
- Call `reshape` rather than `expect_contiguous` + `view`.

On top of these, it fixes a correctness issue in `matmul_out`, where the `out` parameter was not resized correctly when passed to the backends. This involves removing the use of `set_` from the calling code, as requested by @ezyang, and it accounts for most of the complexity of the code that this PR adds.

[ghstack-poisoned]
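The last idiom above, preferring `reshape` over `expect_contiguous` + `view`, relies on `reshape` returning a view whenever the memory layout allows and only copying otherwise. The actual code is ATen C++, but the behaviour can be sketched with NumPy as an illustrative stand-in (not the PR's code):

```python
import numpy as np

# reshape returns a view when the memory layout permits, and silently
# falls back to a copy only when it must -- unlike view-style APIs,
# which require a contiguous input up front.
a = np.arange(12).reshape(3, 4)   # contiguous (3, 4) array
flat = a.reshape(12)              # layout permits: this is a view
assert np.shares_memory(a, flat)  # no data was copied

t = a.T                           # transposed, non-contiguous view
flat_t = t.reshape(12)            # layout forbids a view: copies
assert not np.shares_memory(t, flat_t)
```

Calling `reshape` directly therefore skips the intermediate tensor that an explicit make-contiguous step would materialise in the cases where no copy is actually needed.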
This was referenced Apr 4, 2022
Contributor
🔗 Helpful links

✅ No Failures (0 Pending) as of commit d34aa56 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
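The `matmul_out` fix described in the PR description amounts to sizing the `out` argument to the result shape before dispatching to the backend, rather than computing into a temporary and swapping storages afterwards with `set_`. A heavily simplified, hypothetical sketch of that pattern in NumPy (the name `matmul_out` and its signature here are illustrative, not the real ATen API; both operands are assumed to be at least 2-D):

```python
import numpy as np

def matmul_out(a, b, out):
    """Hypothetical out-variant: give `out` the broadcasted result
    shape up front so the backend can write into it in place,
    instead of swapping storages afterwards (the `set_` pattern)."""
    result_shape = np.broadcast_shapes(a.shape[:-2], b.shape[:-2]) \
        + (a.shape[-2], b.shape[-1])
    if out.shape != result_shape:
        # stand-in for resizing the out tensor to the correct shape
        out = np.empty(result_shape, dtype=out.dtype)
    np.matmul(a, b, out=out)  # backend writes directly into out
    return out
```

With the resize done by the caller-facing wrapper, the backends always receive an `out` of the correct shape, which is the correctness issue the PR fixes.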
lezcano added a commit that referenced this pull request on Apr 4, 2022
ghstack-source-id: 97a8759
Pull Request resolved: #75197
lezcano added a commit that referenced this pull request on Apr 4, 2022

ghstack-source-id: 67b1813
Pull Request resolved: #75197
Contributor

should we wait for #75195 before reviewing this?
lezcano added a commit that referenced this pull request on May 11, 2022
With this PR, matmul folds a bmm into a mm or a mv if and only if it can do so without copying. We add tests for this to make sure that our algorithm to detect this is accurate. For the cases where it was copying before, see #75197 (comment) #75197 (comment) #75197 (comment). Fixes #76702 [ghstack-poisoned]
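The folding condition can be illustrated outside ATen: a bmm of shape `(b, m, k) @ (k, n)` collapses into a single mm exactly when the batch operand's leading dimensions can be flattened without a copy. A NumPy sketch of the idea (illustrative only, not the PR's actual detection algorithm):

```python
import numpy as np

def fold_bmm_to_mm(x, w):
    """Compute the batched matmul (b, m, k) @ (k, n) as a single
    (b*m, k) @ (k, n) mm. The fold is copy-free precisely when
    flattening the first two dims of `x` yields a view."""
    b, m, k = x.shape
    x2d = x.reshape(b * m, k)           # view iff the layout permits
    return (x2d @ w).reshape(b, m, -1)  # restore the batch dimension

x = np.arange(24, dtype=float).reshape(4, 3, 2)
w = np.arange(6, dtype=float).reshape(2, 3)
assert np.allclose(fold_bmm_to_mm(x, w), x @ w)  # matches the bmm result
assert np.shares_memory(x, x.reshape(12, 2))     # contiguous: fold is free
```

When the batch operand is a non-contiguous view (e.g. transposed across the batch dimension), the same flatten must copy; per the commit message, those are exactly the cases this PR excludes from folding.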
lezcano added a commit that referenced this pull request on May 11, 2022

ghstack-source-id: c1018a1
Pull Request resolved: #76828
lezcano added a commit that referenced this pull request on May 11, 2022

ghstack-source-id: fc5294a
Pull Request resolved: #76828
lezcano added a commit that referenced this pull request on May 12, 2022
lezcano added a commit that referenced this pull request on May 12, 2022

ghstack-source-id: 4952fb2
Pull Request resolved: #76828
ngimel approved these changes on May 14, 2022
Collaborator
ngimel
left a comment
Can you please rebase to get CI signal?
Collaborator
Author
I'm starting to think that all this
Collaborator
Author
@pytorchbot merge