
move unbind to ATen #8587

Closed

zasdfgbnm wants to merge 10 commits into pytorch:master from zasdfgbnm:unbind


Conversation

@zasdfgbnm (Collaborator)

No description provided.

@apaszke (Contributor) left a comment

Can you please also add the function to tools/autograd/derivatives.yaml and specify that its derivative is stack? This will be much more efficient than what's currently generated by this op. Thanks!
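For reference, the requested entry in tools/autograd/derivatives.yaml might look like the following. This is a sketch only: the `name` line must match the actual ATen declaration for `unbind`, and `stack` as the backward is taken from apaszke's suggestion above.

```yaml
# Sketch: signature must match the ATen declaration of unbind.
# grads is the list of gradients flowing into each output slice;
# stacking them along dim reconstructs the gradient of self.
- name: unbind(Tensor self, int64_t dim)
  self: stack(grads, dim)
```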

```cpp
std::vector<Tensor> unbind(const Tensor& self, int64_t dim) {
  AT_CHECK(dim < self.dim(), "dim must be smaller than ", self.dim());
```


```python
Arguments:
    tensor (Tensor): the tensor to unbind
    dim (int): dimension to remove
""")
```


@zasdfgbnm (Collaborator, Author)

@apaszke @vishwakftw Done

```cpp
std::vector<Tensor> to_tensor_list(const variable_list& variables) {
  return fmap(variables, [](torch::autograd::Variable v) { return static_cast<Tensor>(v); });
}
```


```cpp
    tensors[i] = self.select(dim, i);
  }
  return tensors;
}
```


@zou3519 (Contributor) commented Jun 26, 2018

cc @apaszke how does this look now?

@zasdfgbnm (Collaborator, Author)

@pytorchbot retest this please

@apaszke (Contributor) left a comment

LGTM! Can you please add grad tests for unbind (it has its own grad definition now)?

@apaszke (Contributor) commented Jul 6, 2018

Another thing I'm wondering about is whether we'll end up paying the cost of creating autograd functions for those nested selects, even though the grad will later be overridden by the outer UnbindBackward. Shouldn't we enable no-grad mode in VariableType.cpp before we call into the native function? @colesbury?


@facebook-github-bot (Contributor) left a comment

@soumith has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 9, 2018
Summary: Pull Request resolved: pytorch/pytorch#8587

Differential Revision: D8764086

Pulled By: soumith

fbshipit-source-id: 7f311cf13c341040e1f2cf4a8f05723e32d38947
zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 13, 2018
Summary: Pull Request resolved: pytorch/pytorch#8587

Differential Revision: D8764086

Pulled By: soumith

fbshipit-source-id: 7f311cf13c341040e1f2cf4a8f05723e32d38947
goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018
Summary: Pull Request resolved: pytorch#8587

Differential Revision: D8764086

Pulled By: soumith

fbshipit-source-id: 7f311cf13c341040e1f2cf4a8f05723e32d38947

8 participants