
Delete a bunch of uses of getType in favor of TensorOptions. #11087

Closed

ezyang wants to merge 8 commits into export-D9578734 from export-D9581560

Conversation


@ezyang ezyang commented Aug 30, 2018

Delete a bunch of uses of getType in favor of TensorOptions.

Differential Revision: D9581560

Stacked on #11080

Differential Version: 56526196
ezyang added 3 commits August 30, 2018 09:45 (Differential Revision D9581560; Differential Versions 56532263, 56532377, 56535272)
ezyang added 2 commits August 30, 2018 10:35 (Differential Revision D9581560; Differential Versions 56537640, 56538492)

@goldsborough goldsborough left a comment


Some suggestions

 using namespace torch::autograd;
 HANDLE_TH_ERRORS
-Variable var = VariableType::getType(CPU(kByte))->tensor();
+Variable var = torch::empty({0}, at::device(at::kCPU).dtype(at::kByte));

 for (auto i = 0; i < numTensors_; i++) {
   deviceGuard.set_index(i % numDevices_);
-  inputs_[i] = type.tensor({16, 16});
+  inputs_[i] = at::empty({16, 16}, at::device({at::kCUDA, i % numDevices_}).dtype(at::kFloat));

 // Initialize tensor list
 std::vector<at::Tensor> tensors = {
-    at::ones({16, 16}, at::TensorOptions(at::CPU(at::kFloat))),
+    at::ones({16, 16}, at::device(at::kCPU).dtype(at::kFloat)),

 for (auto i = 0; i < size; i++) {
   auto tensor =
-      at::ones({16, 16}, at::TensorOptions(at::getType(b, at::kFloat))) * i;
+      at::ones({16, 16}, at::TensorOptions(b).dtype(at::kFloat)) * i;

 }
 inputs[k][l] =
-    at::ones({16, 16}, at::TensorOptions(type)) * (k * stride + l);
+    at::ones({16, 16}, at::TensorOptions(b).dtype(at::kFloat)) * (k * stride + l);

 for (auto i = 0; i < numDevices_; ++i) {
   deviceGuard.set_index(i);
-  inputs_[i] = type.tensor({3, 3});
+  inputs_[i] = at::empty({3, 3}, at::device(at::kCUDA).dtype(at::kFloat));

 outputs_[i].resize(worldSize_ * numDevices_);
 for (auto j = 0; j < worldSize_ * numDevices_; ++j) {
-  outputs_[i][j] = type.tensor({3, 3});
+  outputs_[i][j] = at::empty({3, 3}, at::device(at::kCUDA).dtype(at::kFloat));

 HANDLE_TH_ERRORS
 THGenerator *generator = THPGenerator_TH_CData(self);
-Variable var = VariableType::getType(CPU(kByte))->tensor();
+Variable var = torch::empty({0}, at::device(at::kCPU).dtype(at::kByte));


 auto x = torch::ones(
-    {5, 5}, torch::getType(torch::Backend::CPU, static_cast<torch::Dtype>(i)));
+    {5, 5}, at::device(at::kCPU).dtype(static_cast<torch::Dtype>(i)));

 if (at::hasCUDA()) {
   auto& CUDAFloat = C.getType(Backend::CUDA, ScalarType::Float);
-  auto t2 = zeros({4,4}, CUDAFloat);
+  auto t2 = zeros({4,4}, at::device(at::kCUDA).dtype(at::kFloat));

ezyang added 2 commits August 30, 2018 11:32 (Differential Revision D9581560; Differential Versions 56544238, 56544398)
 if (at::hasCUDA()) {
   auto& CUDAFloat = C.getType(Backend::CUDA, ScalarType::Float);
-  auto t2 = zeros({4,4}, CUDAFloat);
+  auto t2 = zeros({4,4}, at::kCUDA);

 at::DeviceGuard deviceGuard;
 for (auto l = 0; l < stride; l++) {
-  if (type.is_cuda()) {
+  if (b == at::Backend::CUDA) { // NB: wouldn't work with sparse

zdevito pushed a commit to zdevito/ATen that referenced this pull request Aug 31, 2018
Summary: Pull Request resolved: pytorch/pytorch#11087

Reviewed By: cpuhrsch

Differential Revision: D9581560

fbshipit-source-id: ebe3c4c0956da8a7215ada287bf6526dbcb2b07d
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
Summary: Pull Request resolved: pytorch#11087

Reviewed By: cpuhrsch

Differential Revision: D9581560

fbshipit-source-id: ebe3c4c0956da8a7215ada287bf6526dbcb2b07d
@ezyang ezyang added the merged label Jun 26, 2019