Conversation
```cpp
Val* expanded_extent = nullptr;
Val** shape_extent = &extent;
if (!expanded_.empty()) {
  is_expanded = expanded_[i];
```
Just in case, please use expanded_.at(i) to avoid silent memory errors
```cpp
if (i == -1) {
  shape_.emplace_back(IrBuilder::create<Int>());
} else if (i == 1) {
  shape_.emplace_back(FusionGuard::getCurFusion()->oneVal());
```
Is there any specific reason to use the one val? Do we also want to use the zero val?
Just trying to save an instance creation; I don't think it will make much difference here. And yes, we should also use the zero val, because why not.
```cpp
domain[i] =
    IterDomainBuilder(FusionGuard::getCurFusion()->zeroVal(), shape_[i])
        .build();
*shape_extent = shape_[i];
```
Contiguity and expanded broadcasts are still making me feel uneasy, as I haven't yet got a clear idea of what they mean in PyTorch. Can you please explain why expanded domains and the domains to their left should never be contiguous?
@naoyam For example, if you have a contiguous tensor of shape
Thanks for the explanation. Can you please add this as a comment to the code? Maybe at
@naoyam I have resolved all review comments.
Make `TensorViewBuilder` able to generate expanded tensors.