Conversation
Force-pushed from 5ba8665 to 288dd03
bdhirsh commented on Jun 29, 2021:
```sh
PT_INC_DIR="$PTDIR/build/aten/src/ATen"
fi

set -e
```
The new `impl_path` argument is so the codegen can read in the kernel signatures and figure out if any are missing.
I also had to add `set -e` to ensure that the build actually stops if codegen fails, instead of just continuing, since continuing would obscure the codegen error and you'd end up with the same linker error. I'm not sure if the original xla codegen suffered from that same issue, though.
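The `set -e` behavior described above can be demonstrated with a small sketch (not the actual build script; the `fake_codegen` function is an illustrative stand-in for a failing codegen step):

```python
import subprocess

# Illustrative sketch of why `set -e` matters in the build script: without
# it, a failing codegen step is silently skipped and the build continues
# to the link step, failing much later with a confusing linker error.
script = """
fake_codegen() {{ echo 'codegen failed' >&2; return 1; }}
{prelude}
fake_codegen
echo 'link step runs'
"""

for prelude, label in [("", "without set -e"), ("set -e", "with set -e")]:
    r = subprocess.run(["sh", "-c", script.format(prelude=prelude)],
                       capture_output=True, text=True)
    outcome = r.stdout.strip() or "build stopped at codegen failure"
    print(f"{label}: {outcome} (exit code {r.returncode})")
```

Without `set -e` the shell runs the echo after the failed step and exits 0; with it, the script aborts at the first non-zero status, so the build stops at the real error.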
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jul 6, 2021:
"… better compiler error messages" Turns external backend kernels into class methods, so they get helpful compiler errors instead of linker errors whenever there's a schema mismatch. I took a stab at trying to do the same for in-tree kernels, but gave up after a while. It would probably make sense to come back to it with @wenleix's nice set of regex calls, to automatically pick up all of the native kernels in the `aten/src/ATen/native` folder. Corresponding xla PR: pytorch/xla#3012 Differential Revision: [D29047680](https://our.internmc.facebook.com/intern/diff/D29047680) [ghstack-poisoned]
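The class-method trick in the commit message above could be sketched roughly as follows: the codegen emits the expected kernel signatures as declared static members of a backend class, so a hand-written definition with a mismatched signature fails at compile time (it doesn't match any declared member) rather than at link time. The class name and signatures here are illustrative, not the actual generated code:

```python
# Hypothetical sketch of codegen emitting a C++ class declaration whose
# static methods carry the expected kernel signatures. A definition like
#   at::Tensor XLANativeFunctions::abs(at::Tensor& self) { ... }
# (wrong signature) would then be a compiler error, not a linker error.

EXPECTED_SIGNATURES = [
    "static at::Tensor abs(const at::Tensor& self)",
    "static at::Tensor add(const at::Tensor& self, const at::Tensor& other)",
]

def gen_class_decl(class_name: str, signatures: list[str]) -> str:
    """Emit a C++ struct declaring one static method per kernel signature."""
    body = "\n".join(f"  {sig};" for sig in signatures)
    return f"struct {class_name} {{\n{body}\n}};\n"

print(gen_class_decl("XLANativeFunctions", EXPECTED_SIGNATURES))
```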
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jul 6, 2021: "… error messages" (same commit message as above).
…e kernels when possible
…e kernels when possible
…odegen to stop build on error
Force-pushed from 8c0cf2f to b0736e8
ailzhang approved these changes on Jul 7, 2021.
ailzhang (Collaborator): Merge this to fix #3033
miladm added a commit that referenced this pull request on Jul 9, 2021.
Accompanying pytorch change: pytorch/pytorch#59839
This PR provides better error messages when an XLA kernel has a schema mismatch (a compiler error, via class methods) or is missing entirely (the codegen will notice and raise an error).
It's also accompanied by a pytorch-side change that reads in the file containing the kernel definitions (`aten_xla_type.cpp`) and figures out, based on the names, if any are missing. That way we fully avoid linker errors: if an op is listed in `xla_native_functions.yaml` but its definition is missing entirely from `aten_xla_type.cpp`, the codegen can pick up on this and error out early.
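The early missing-kernel check could be sketched as below: compare the op names listed in `xla_native_functions.yaml` against the definitions actually present in `aten_xla_type.cpp` (read via the new `impl_path` argument), and fail before the build ever reaches the linker. The regex, sample source, and op list are illustrative stand-ins, not the actual PyTorch codegen:

```python
import re

# Stand-in for the op names parsed from xla_native_functions.yaml.
yaml_ops = ["abs", "add", "mul"]

# Stand-in for the file contents read from impl_path (aten_xla_type.cpp).
impl_source = """
at::Tensor XLANativeFunctions::abs(const at::Tensor& self) { /* ... */ }
at::Tensor XLANativeFunctions::add(const at::Tensor& self, const at::Tensor& other) { /* ... */ }
"""

# Collect the kernel names that have a definition, then report any op
# from the yaml that never appears in the impl file.
defined = set(re.findall(r"XLANativeFunctions::(\w+)\s*\(", impl_source))
missing = [op for op in yaml_ops if op not in defined]
if missing:
    print("error: missing kernel definitions for: " + ", ".join(missing))
```

Here `mul` is listed in the yaml but never defined, so the check reports it immediately instead of letting the build die later with an opaque undefined-symbol error.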