
Revert "add keyword out for autograd function Concat to match torch.cat"#1340

Merged
soumith merged 1 commit intomasterfrom
revert-1336-master
Apr 23, 2017
Merged

Revert "add keyword out for autograd function Concat to match torch.cat"#1340
soumith merged 1 commit intomasterfrom
revert-1336-master

Conversation

@apaszke (Contributor) commented Apr 23, 2017

Reverts #1336.

Variables should never have out arguments.
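
For context, a minimal sketch of the rationale, using today's merged Tensor/Variable API rather than the 2017 Variable wrapper: an out-of-place `cat` records a `grad_fn` so gradients can flow, while writing into a preallocated `out=` buffer bypasses graph construction, and current PyTorch rejects it outright when an input requires grad.

```python
import torch

a = torch.ones(2, requires_grad=True)
b = torch.ones(2, requires_grad=True)

# Out-of-place cat participates in autograd: the result has a grad_fn.
c = torch.cat([a, b])
c.sum().backward()
print(a.grad)  # tensor([1., 1.])

# cat into a preallocated buffer cannot record the operation, so it raises.
buf = torch.empty(4)
try:
    torch.cat([a, b], out=buf)
except RuntimeError as err:
    print(err)  # out= arguments don't support automatic differentiation
```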

@soumith soumith merged commit 6a69f70 into master Apr 23, 2017
@soumith soumith deleted the revert-1336-master branch April 23, 2017 17:19
@soumith (Collaborator) commented Apr 23, 2017

oh, whoops i guess.

@colesbury (Member):

We still need to fix torch.stack() on variables:

return torch.cat(list(t.unsqueeze(dim) for t in sequence), dim, out=out)
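
A minimal sketch of the kind of fix implied here, assuming a standalone `stack` helper with the same `sequence`/`dim` signature as the snippet above: drop the `out=` keyword on the Variable path so the concatenation stays inside the autograd graph.

```python
import torch

def stack(sequence, dim=0):
    # Build the result out-of-place so autograd can record the operation;
    # no out= keyword is forwarded to torch.cat.
    return torch.cat([t.unsqueeze(dim) for t in sequence], dim)

# Usage: gradients flow through the stacked result.
xs = [torch.ones(3, requires_grad=True) for _ in range(2)]
stack(xs, dim=0).sum().backward()
print(xs[0].grad)  # tensor([1., 1., 1.])
```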

eqy pushed a commit to eqy/pytorch that referenced this pull request Jan 20, 2022
Fixing a few smaller issues here and there:

- Exposing a Python API to toggle single-node fusion.
- Exposing a Python API to toggle horizontal fusion (needed to avoid a pointwise-scheduler failure on fusions whose outputs have different shapes/ranks).
- Adding shape-expression shortcut support for native_dropout (bug reported by AOTAutograd).
- Fixing the device check to avoid fusing nodes whose inputs are on different devices. Long term this should be supported (e.g. a scalar CPU tensor can be operated on with CUDA tensors, a feature from TensorIterator), but it is disabled for now to avoid an assert.
hubertlu-tw pushed a commit to hubertlu-tw/pytorch that referenced this pull request Nov 1, 2022