
Add cpu() and cuda() implementations to ATen #8835

Closed
ezyang wants to merge 1 commit into pytorch:master from ezyang:pr/cpu-cuda-methods

Conversation

@ezyang
Contributor

@ezyang ezyang commented Jun 24, 2018

  • to() is now correctly marked as this-const
  • As native functions, cpu/cuda get automatically bound. I didn't
    see a convenient way to maintain the 'async' kwarg, so it got
    dropped for now. This probably has to be fixed before we can merge this.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

CC @goldsborough
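For context, the native-function approach in this PR boils down to cpu() and cuda() becoming thin wrappers over to(). A minimal sketch of that shape, using simplified stand-in types (Tensor, DeviceType, and to() below are illustrative stubs, not the real ATen API):

```cpp
#include <cassert>

// Stand-in for at::DeviceType (simplified to two devices).
enum class DeviceType { CPU, CUDA };

// Minimal stand-in for at::Tensor: just tracks which device it lives on.
struct Tensor {
  DeviceType device;
  // Stub for Tensor::to(): returns a copy on the target device.
  Tensor to(DeviceType d, bool non_blocking = false) const {
    (void)non_blocking;  // the knob the dropped 'async' kwarg used to map to
    return Tensor{d};
  }
};

// cpu()/cuda() as free functions delegating to to(), mirroring the
// native-function shape of this PR (signatures are illustrative).
Tensor cpu(const Tensor& self) {
  return self.to(DeviceType::CPU);
}

Tensor cuda(const Tensor& self, bool non_blocking = false) {
  return self.to(DeviceType::CUDA, non_blocking);
}
```

The real implementations additionally have to thread through a device index and the deprecated async keyword, which is exactly where this PR got stuck.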

@ssnl
Collaborator

ssnl commented Jun 24, 2018

We should make python .to (currently called via here) and ATen .to go through the same code path one day.

Contributor

@goldsborough goldsborough left a comment


Shouldn't we wait to change the Python binding until the next release? Maybe keep THPVariable_cuda, but call self_.to(...) inside it, after the PythonArgParser took care of handling the deprecated async kwarg.

Also, this PR actually changes the semantics of the device on which the new tensor is allocated. See my comment below. I feel like I worked on this method when I did the big TensorOptions change and found that the nasty logic inside dispatch_type_conversion made things hard.

template <typename T>
Tensor tensor_cuda(ArrayRef<T> values, const TensorOptions& options) {
-  auto cpu_tensor = tensor_cpu(values, TensorOptions(options).device(at::kCPU));
+  auto cpu_tensor = tensor_cpu(values, TensorOptions(options).device(Device(kCPU)));
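The point of the snippet above: tensor_cuda first materializes the values on CPU, so the caller's options must have their device overridden to CPU for the staging tensor before the copy to CUDA. A simplified sketch of that staging pattern, with stand-in types (TensorOptions, Tensor, and to() here are illustrative stubs):

```cpp
#include <cassert>
#include <vector>

enum class DeviceType { CPU, CUDA };

// Stand-in for at::TensorOptions: device() returns a modified copy,
// like the real builder-style API.
struct TensorOptions {
  DeviceType device_ = DeviceType::CPU;
  TensorOptions device(DeviceType d) const {
    TensorOptions o = *this;
    o.device_ = d;
    return o;
  }
};

// Minimal stand-in tensor: holds values and a device tag.
struct Tensor {
  std::vector<int> values;
  DeviceType device;
  Tensor to(DeviceType d) const { return Tensor{values, d}; }
};

// Stand-in for tensor_cpu: builds the tensor on whatever device the
// options name (which must be CPU for the staging copy to make sense).
Tensor tensor_cpu(const std::vector<int>& values, const TensorOptions& options) {
  return Tensor{values, options.device_};
}

// Mirrors the tensor_cuda above: stage on CPU, then move to CUDA.
Tensor tensor_cuda(const std::vector<int>& values, const TensorOptions& options) {
  auto cpu_tensor = tensor_cpu(values, options.device(DeviceType::CPU));
  return cpu_tensor.to(DeviceType::CUDA);
}
```

Without the device override, tensor_cpu would be asked to allocate on CUDA, which is the semantic subtlety the review comment is pointing at.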


- func: cpu(Tensor self) -> Tensor
  variants: method

# TODO: Would be preferable if this were an at::optional<int64_t> device


Tensor to(ScalarType dtype, bool non_blocking = false);
Tensor to(Device device, bool non_blocking = false);
/// NB: these are defined in TensorOptions.h
Tensor to(Device device, ScalarType dtype, bool non_blocking = false) const;


if (!r.isNone(0) && device_obj.is_cpu()) {
  throw std::runtime_error("Invalid device, must be cuda device");
}
int32_t device_index = -1;
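The guard above enforces that a device explicitly passed to .cuda() is actually a CUDA device, with -1 standing in for "no index given". A hedged sketch of the same check with stub types (Device and the helper name are illustrative, not the real binding code):

```cpp
#include <cassert>
#include <stdexcept>

enum class DeviceType { CPU, CUDA };

// Stand-in for at::Device: a type plus an ordinal index.
struct Device {
  DeviceType type;
  int index;  // device ordinal; -1 means "current device"
  bool is_cpu() const { return type == DeviceType::CPU; }
};

// Mirrors the snippet above: reject an explicit CPU device, otherwise
// extract the index (or -1 when no device was given).
int check_cuda_device(bool device_given, const Device& device_obj) {
  if (device_given && device_obj.is_cpu()) {
    throw std::runtime_error("Invalid device, must be cuda device");
  }
  int device_index = device_given ? device_obj.index : -1;
  return device_index;
}
```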




}

Tensor cuda(const Tensor& self, int64_t device_id, bool non_blocking) {
  if (device_id == -1) { // default argument


  return self.to(kCPU);
}

Tensor cuda(const Tensor& self, int64_t device_id, bool non_blocking) {
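In the fragments above, device_id == -1 acts as the "no device given" default, which has to be resolved to the current CUDA device before dispatching. One way that dispatch could look, again with simplified stand-ins (the current-device helper is hypothetical, standing in for the real CUDA runtime query):

```cpp
#include <cassert>
#include <cstdint>

enum class DeviceType { CPU, CUDA };

// Minimal stand-in tensor: a device tag plus a CUDA ordinal.
struct Tensor {
  DeviceType device;
  int index;  // CUDA device ordinal; ignored for CPU
  Tensor to(DeviceType d, int idx, bool non_blocking = false) const {
    (void)non_blocking;
    return Tensor{d, idx};
  }
};

// Hypothetical current-device query standing in for the real
// CUDA runtime call.
int current_cuda_device() { return 0; }

// Mirrors the fragment above: -1 means "use the current device".
Tensor cuda(const Tensor& self, int64_t device_id, bool non_blocking) {
  if (device_id == -1) {  // default argument
    device_id = current_cuda_device();
  }
  return self.to(DeviceType::CUDA, static_cast<int>(device_id), non_blocking);
}
```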


@ezyang
Contributor Author

ezyang commented Jun 25, 2018

Abandoning this patch; it's blocked on at::optional-ifying the API.

@gchanan
Contributor

gchanan commented Jun 25, 2018

@goldsborough should we move dispatch_type_conversion into ATen?

@goldsborough
Contributor

@gchanan Yes, I think more of this kind of logic should move into ATen.
