Add cpu() and cuda() implementations to ATen #8835

ezyang wants to merge 1 commit into pytorch:master
Conversation

- to() is now correctly marked as this-const
- As native functions, cpu/cuda get automatically bound. I didn't see a convenient way to maintain the 'async' kwarg, so it got dropped for now.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
We should make python
goldsborough left a comment
Shouldn't we wait to change the Python binding until the next release? Maybe keep THPVariable_cuda but call self_.to(...) inside it, after the PythonArgParser has taken care of the deprecated async kwarg.

Also, this PR actually changes the semantics of which device the new tensor is allocated on; see my comment below. I worked on this method during the big TensorOptions change and found that the nasty logic inside dispatch_type_conversion made things hard.
```diff
 template <typename T>
 Tensor tensor_cuda(ArrayRef<T> values, const TensorOptions& options) {
-  auto cpu_tensor = tensor_cpu(values, TensorOptions(options).device(at::kCPU));
+  auto cpu_tensor = tensor_cpu(values, TensorOptions(options).device(Device(kCPU)));
```
```yaml
- func: cpu(Tensor self) -> Tensor
  variants: method

# TODO: Would be preferable if this were an at::optional<int64_t> device
```
```cpp
Tensor to(ScalarType dtype, bool non_blocking = false);
Tensor to(Device device, bool non_blocking = false);
/// NB: these are defined in TensorOptions.h
Tensor to(Device device, ScalarType dtype, bool non_blocking = false) const;
```
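To make the overload set concrete, here is a minimal, self-contained sketch of the to() family the diff above declares. The mock Tensor, ScalarType, and Device types are hypothetical stand-ins, not ATen's real definitions; the point is that each conversion is const (per the PR's "this-const" fix) and returns a new tensor rather than mutating the receiver.

```cpp
// Hedged sketch, not ATen's implementation: mock types illustrating
// the to() overload family and why marking it const is safe.
#include <cassert>

enum class ScalarType { Float, Long };
enum class DeviceType { CPU, CUDA };

struct Device {
  DeviceType type;
  explicit Device(DeviceType t) : type(t) {}
};

struct Tensor {
  ScalarType dtype = ScalarType::Float;
  DeviceType device = DeviceType::CPU;

  // Each overload copies, tweaks one or two properties, and returns
  // the copy; *this is never modified, hence the const qualifier.
  Tensor to(ScalarType d, bool /*non_blocking*/ = false) const {
    Tensor r = *this; r.dtype = d; return r;
  }
  Tensor to(Device d, bool /*non_blocking*/ = false) const {
    Tensor r = *this; r.device = d.type; return r;
  }
  Tensor to(Device d, ScalarType s, bool /*non_blocking*/ = false) const {
    Tensor r = *this; r.device = d.type; r.dtype = s; return r;
  }
};
```

With that shape, cpu() and cuda() can be implemented as one-line forwards to to(), which is what the native-function diffs below do.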
```cpp
if (!r.isNone(0) && device_obj.is_cpu()) {
  throw std::runtime_error("Invalid device, must be cuda device");
}
int32_t device_index = -1;
```
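The validation in that binding snippet can be sketched in isolation. MockDevice and parse_cuda_device_index below are hypothetical names introduced for illustration; the behavior mirrors the check above: an explicit device passed to .cuda() must not be a CPU device, and -1 is the sentinel for "no index given".

```cpp
// Hedged sketch of the device check above; types and names are
// hypothetical, only the validation logic follows the binding code.
#include <cassert>
#include <stdexcept>

struct MockDevice {
  bool cpu;
  bool is_cpu() const { return cpu; }
};

int parse_cuda_device_index(bool has_device_arg, MockDevice d, int index) {
  // An explicit CPU device is invalid for a .cuda() call.
  if (has_device_arg && d.is_cpu()) {
    throw std::runtime_error("Invalid device, must be cuda device");
  }
  // -1 mirrors the `int32_t device_index = -1;` default above.
  return has_device_arg ? index : -1;
}

// Small helper so the error path can be checked without a harness.
bool throws_on_cpu_device() {
  try {
    parse_cuda_device_index(true, MockDevice{true}, 0);
  } catch (const std::runtime_error&) {
    return true;
  }
  return false;
}
```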
```cpp
  }
}

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
```cpp
}

Tensor cuda(const Tensor& self, int64_t device_id, bool non_blocking) {
  if (device_id == -1) { // default argument
```
```cpp
  return self.to(kCPU);
}

Tensor cuda(const Tensor& self, int64_t device_id, bool non_blocking) {
```
Abandoning this patch, it's blocked on
@goldsborough should we move dispatch_type_conversion into ATen?

@gchanan yes, I think more of such logic should move into ATen.
I didn't see a convenient way to maintain the 'async' kwarg, so it got dropped for now. This probably has to be fixed before we can merge this.

CC @goldsborough