Add information about codegen in codegen/xla_native_functions.yaml#8817
Conversation
@tengyifei @ysiraichi my knowledge on the other forms of generation (
ysiraichi
left a comment
This looks nice. Besides the comments, I have 2 suggestions:
- Add symint and autograd descriptions
- Mention what gets generated in which file
Let me know if you need help on gathering this information.
@ysiraichi I haven't been able to find a definitive source on these, which is why I haven't written about them. If you have links, I would appreciate it.
- triu
- trunc
# ir_gen: Operations in ir_gen generate only the declaration of their respective
# lazy IR node classes. Their native functions are not generated.
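To make the distinction between the sections concrete, here is a hedged sketch of how they might look in the yaml file. The section names follow this discussion, but the op lists are illustrative examples, not the real contents of xla_native_functions.yaml:

```yaml
# Sketch only: illustrative op placement, not the actual file contents.
supported:
  # XLANativeFunctions declaration + XLA dispatch-key registration are
  # generated; the definition is still written by hand.
  - trunc
ir_gen:
  # Only the lazy IR node class declaration is generated; the native
  # function and parts like Lower() are written manually.
  - triu
```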
Double checking my understanding, this means ir_gen doesn't generate native functions, but supported does generate native functions? Or is there a difference between native function declaration and native function definition (i.e. implementation)?
this means ir_gen doesn't generate native functions, but supported does generate native functions?
Yes.
Or is there a difference between native function declaration and native function definition?
ir_gen generates the lazy IR node class declaration. Some of the class' functions still have to be manually written (e.g. XXX::Lower()).
supported generates the XLANativeFunction::XXX declaration + registration to the XLA dispatch-key. We still have to manually write the definition of that function.
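The split described above (generated declaration, hand-written definition) can be illustrated with a small self-contained C++ sketch. Note the names below are hypothetical stand-ins for illustration only, not the actual torch_xla codegen output:

```cpp
#include <cmath>

// Hypothetical stand-in for a generated header: conceptually, the codegen for
// an op listed under `supported` emits a declaration like this one (plus the
// dispatch-key registration, omitted here).
struct XLANativeFunctionsSketch {
  static double trunc(double x);  // declaration only: "generated"
};

// The definition is the part a contributor still writes by hand (the lowering).
// The body here is purely illustrative.
double XLANativeFunctionsSketch::trunc(double x) {
  return std::trunc(x);
}
```

The same declaration/definition split applies to ir_gen, except there it is the lazy IR node class that is declared, with functions like Lower() left to be written manually.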
Are there any recommendations of something to add here from this conversation?
Let's include Yukio's explanation in the yaml docs
I have added the explanations from this comment
Here's a small summary about each component inside
Note: this is my own understanding from looking at the source code.
I have added a description of symint based on this and #3978. Please let me know what you think of it @ysiraichi
ysiraichi
left a comment
I think it would be nice to write down what's getting generated in which files. And, as @tengyifei mentioned, what's needed for finishing the lowering.
tengyifei
left a comment
LGTM modulo the open comments
I think this information should be in https://github.com/pytorch/xla/blob/master/docs/source/contribute/op_lowering.md and https://github.com/pytorch/xla/blob/master/docs/source/contribute/codegen_migration.md. I would recommend linking to those documents, and then supplementing them if we believe there are gaps. We could create an issue to add more information to https://github.com/pytorch/xla/blob/master/docs/source/contribute/op_lowering.md on
I have added links to the referenced docs at the head of the file.
Add some information on the codegen and supported configs based on #8713. Info on generation also comes from https://github.com/pytorch/pytorch/blob/main/torchgen/gen_lazy_tensor.py.