Conversation
Force-pushed from 133a488 to 0dd4eee
Could you also update https://github.com/pytorch/xla/blob/master/OP_LOWERING_GUIDE.md to reflect this new yaml file?
The operator names come from the in-tree file; in general, you can look at that file for reference. I'll also make sure that's all in the documentation!
Yep! I'll let you know when I've updated it so you can take a look at the changes.
Updated the docs. I'll need to update them more in a later PR, e.g. when I change the file names. I also unpinned this PR from my feature branch, so CI will keep failing until my other PR merges to master.
Force-pushed from afa237f to 3addce5
I fixed `RegistrationDeclarations.yaml` in a previous PR to account for codegen'd composite kernels, but I forgot to make the corresponding change in the new external codegen. This change needs to land before pytorch/xla#2898.

I also moved the function out of `gen.py`, since `RegistrationDeclarations.yaml` isn't really the main use case for the function anymore; we want to kill it eventually. I first tried leaving the function in `gen.py` and calling `from tools.codegen.gen import has_autogenerated_composite_kernel`, but Python didn't like that, and I couldn't figure out why. As a quick attempt to debug, I printed `import tools.codegen; dir(tools.codegen)`, and for some reason `gen` wasn't showing up in the list, even though other subfolders were. This doesn't matter too much, though, since I think moving the function out of `gen.py` is the right move anyway.

Differential Revision: [D28012667](https://our.internmc.facebook.com/intern/diff/D28012667)
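As a minimal sketch of the debugging step described above (assuming the `tools/codegen/` package layout this PR references):

```python
# Reproduce the failed import and the dir() check described above.
# Assumes the tools/codegen/ package layout referenced in this PR.
import tools.codegen

# `from tools.codegen.gen import has_autogenerated_composite_kernel` raised
# an error, so inspect what the package actually exposes.
print(dir(tools.codegen))
```

Note that `dir()` on a package only lists names defined in its `__init__.py` plus submodules that have already been imported somewhere, so `gen` can be absent from this list even when `tools/codegen/gen.py` exists on disk.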
Force-pushed from 3addce5 to 664bee1
Force-pushed from 664bee1 to 2639e8d
This PR updates the build process to use the in-tree PyTorch codegen, which is getting merged in pytorch/pytorch#56601 (more details on the codegen are in that PR). Doing so requires a new yaml file, `xla_native_functions.yaml`. I also updated the docs to reflect the codegen change. I'm going to save deleting the existing codegen logic for a future PR.
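For illustration, here's a hedged sketch of what the new yaml file might look like; the keys (`backend`, `cpp_namespace`, `supported`) and the operators listed are assumptions for the sketch, not copied from the actual file:

```yaml
# Hypothetical sketch of xla_native_functions.yaml; the keys and operators
# shown here are illustrative assumptions, not the file's real contents.
backend: XLA
cpp_namespace: torch_xla
supported:
- abs
- add.Tensor
```

Presumably the codegen reads this file to determine which operators to generate XLA registrations for.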