
Move expand/expandAs code to backends#1353

Merged
soumith merged 9 commits into pytorch:master from killeent:expand on Apr 28, 2017

Conversation

@killeent
Contributor

As title. Addresses #1312. The implementation matches similar methods such as view/viewAs, which internally call a function newView that returns a newly allocated Tensor backed by the same storage.

Test Plan: Run unit tests.

* move TopK to generic

* partial genericization of kernel code

* introduce TopKTypeConfig, specialize radix type and conversion for floats

* implement topk for byte tensor

* implement for char tensor

* implement for int tensor, extend test to check indices as well

* works for longs too

* make bitfield set/get a struct, add support for 64-bit types

* extend to double tensor

* implement for half tensor

* asserts; test fix
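The radix-type specialization for floats mentioned in the commits above relies on a standard bit trick: reinterpret the float's bits and flip them so that unsigned integer ordering matches floating-point ordering. A minimal Python sketch of the idea (illustrative only; the actual conversion lives in the CUDA-side TopKTypeConfig):

```python
import struct

def float_to_radix(f):
    """Map a float32 to an unsigned 32-bit key whose unsigned ordering
    matches the ordering of the original floats (the usual radix-select
    trick for IEEE-754 values)."""
    bits = struct.unpack('<I', struct.pack('<f', f))[0]
    if bits & 0x80000000:
        return ~bits & 0xFFFFFFFF   # negative: flip all bits
    return bits | 0x80000000        # non-negative: set the sign bit
```

With this mapping, a radix select over the unsigned keys finds the top-k floats without ever comparing floats directly.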
@ngimel
Collaborator

ngimel commented Apr 25, 2017

I think it would be better to have a helper function in TH that calculates the expanded sizes and strides, called from both TH and THC (similar to view, https://github.com/pytorch/pytorch/blob/master/torch/lib/THC/generic/THCTensor.c#L231), rather than having two nearly identical TH(C)Tensor_(newExpand) implementations duplicated across TH and THC.
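The suggested helper boils down to the standard expand rule: a dimension of size 1 may be expanded to any target size by giving it stride 0, so the result aliases the same storage. A hedged Python sketch of what such a shared helper would compute (the function name and signature here are hypothetical, not the TH API):

```python
def expand_sizes_strides(sizes, strides, target_sizes):
    """Compute the (sizes, strides) of an expanded view.
    Singleton dimensions get stride 0; other dimensions must match."""
    if len(sizes) != len(target_sizes):
        raise ValueError("expand expects the same number of dimensions")
    out_strides = []
    for size, stride, target in zip(sizes, strides, target_sizes):
        if size == target:
            out_strides.append(stride)
        elif size == 1:
            out_strides.append(0)   # broadcast: revisit the same element
        else:
            raise ValueError("can only expand singleton dimensions")
    return list(target_sizes), out_strides
```

Because only the metadata changes, both TH and THC could call this and then allocate their backend-specific tensor header around the same storage.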

@gchanan
Contributor

gchanan commented Apr 25, 2017

@ngimel good suggestion. One other point @apaszke made previously is that this can be done more efficiently in a single loop. I'm implementing something like this for two operands to support efficient numpy-style broadcasting (see gchanan@63711ee as a proof of concept). Since we probably want to share these implementations as much as possible, and it's not yet clear exactly what that means with two operands, I'd suggest we either merge this as-is and I'll do the work to unify everything, or @killeent and I work this out in a separate branch. I'd prefer the former, given that broadcasting could take some work (we need to decide for each function whether it's supported, update documentation, etc.), but I'm happy either way.
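For context, the numpy-style broadcast shape of two operands can be computed in a single pass over the trailing dimensions. A rough Python sketch of the rule (illustrative only, not the code in gchanan@63711ee):

```python
def broadcast_shape(a, b):
    """numpy-style broadcast: align trailing dims; each pair must be
    equal or contain a 1, and the output takes the larger extent."""
    result = []
    for i in range(1, max(len(a), len(b)) + 1):
        da = a[-i] if i <= len(a) else 1   # missing leading dims act as 1
        db = b[-i] if i <= len(b) else 1
        if da != db and da != 1 and db != 1:
            raise ValueError(f"incompatible shapes {a} and {b}")
        result.append(max(da, db))
    return result[::-1]
```

The single-loop point above is that sizes and strides for both operands can be produced in this same pass, rather than expanding each tensor separately.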

```python
        return tensor

    def expand(self, *sizes):
        """Returns a new view of the tensor with singleton dimensions expanded
```

@killeent
Contributor Author

Per offline discussion with @gchanan, to unblock him on broadcast semantics, I will go ahead and fix the duplicated code but leave any other optimizations out. He will implement those in a subsequent diff.

@soumith soumith merged commit 77035d1 into pytorch:master Apr 28, 2017
eqy pushed a commit to eqy/pytorch that referenced this pull request Jan 20, 2022
* Refactor War Sync Insertion Pass (pytorch#1339)
* Remove kir::Expr::scope_ (pytorch#1341)
* Fusion IR Refactor (pytorch#1343)
* Refactor KIR Step 1 - Remove kir::Node (pytorch#1347)
* Refactor KIR Step 2 - TMP IrUtils change (pytorch#1348)
* Refactor KIR Step 3 - Remove kir::Expr and kir::Val. (pytorch#1349)
* Refactor KIR Step 4 - Remove kir::Bool,Double,Int,NamedScalar. (pytorch#1350)
* Refactor KIR Step 5 - Remove kir::IterDomain/TensorDomain/TensorView (pytorch#1351)
* Refactor KIR Step 6 - Remove kir::UnaryOp/BinaryOp/TernaryOp/ReductionOp/WelfordOp/BroadcastOp. (pytorch#1352)
* Refactor KIR Step 7 - Remove kir dispatch (pytorch#1353)
* Refactor KIR Step 8 - Clean up lower_utils (pytorch#1355)
* Refactor KIR Step 9 - lower_utils ir_utils::applyReplacements. (pytorch#1354)
* Refactor KIR Step 10 - Remove kir_printer in favor of io_stream (pytorch#1356)
hubertlu-tw pushed a commit to hubertlu-tw/pytorch that referenced this pull request Nov 1, 2022
* bump version

* add guard

* fix the cond

7 participants