Allow indexing tensors with both CPU and CUDA tensors#5583
ezyang merged 2 commits into pytorch:master
Conversation
import re
import unittest
from itertools import repeat
import random
Do we want to add it to improve compatibility with CPU scalars? I don't think it's a good idea to support cross-device indexing with non-scalar tensors, since we generally disallow such operations (they are very expensive!).
@apaszke I think this is fixing a regression; I believe we used to support indexing CUDA tensors with CPU tensors.
@pytorchbot retest this please
I think this is the right strategy. The indexing operations allow many different types to be used as indices. For example, you can index a CUDA tensor with a Python list or NumPy array. It seems to follow that you should be able to use a PyTorch CPU tensor as well.
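As a concrete illustration of the comment above, current PyTorch advanced indexing accepts a Python list, a NumPy array, or a tensor as the index, and all three select the same elements (the tensors here are made up for the example):

```python
import torch
import numpy as np

t = torch.arange(6).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]

# All three index types select the same rows:
by_list = t[[1, 0]]
by_numpy = t[np.array([1, 0])]
by_tensor = t[torch.tensor([1, 0])]

print(by_list.tolist())  # [[3, 4, 5], [0, 1, 2]]
assert torch.equal(by_list, by_numpy) and torch.equal(by_list, by_tensor)
```

This PR extends the same uniformity to the CPU-tensor-indexing-a-CUDA-tensor case.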
@pytorchbot retest this please
@zou3519 It seems there is an intermittent test failure on Windows: https://ci.pytorch.org/jenkins/job/pytorch-builds/job/pytorch-win-ws2016-cuda9-cudnn7-py3-test/2470//console
@yf225 I'm taking a look
* Allow indexing tensors with both CPU and CUDA tensors
* Remove stray import
This copies `indices` to the same device as `src` and then performs the indexing.

cc @colesbury
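The copy-then-index strategy described above can be sketched in plain Python. `FakeTensor` and its methods are hypothetical stand-ins for illustration, not PyTorch internals:

```python
# Hypothetical sketch of "copy indices to src's device, then index".
class FakeTensor:
    def __init__(self, data, device="cpu"):
        self.data = list(data)
        self.device = device

    def to(self, device):
        # Simulate a cross-device copy.
        return FakeTensor(self.data, device)

    def __getitem__(self, indices):
        # Move the index tensor to this tensor's device before gathering.
        if indices.device != self.device:
            indices = indices.to(self.device)
        return FakeTensor([self.data[i] for i in indices.data], self.device)

src = FakeTensor(range(10), device="cuda")
indices = FakeTensor([0, 3, 7], device="cpu")
out = src[indices]
print(out.data, out.device)  # [0, 3, 7] cuda
```

The device copy is cheap here because index tensors are typically small relative to `src`, which is why this is acceptable even though general cross-device operations are disallowed.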
Test Plan
* Code reading
* Unit test (it's a sanity test and probably doesn't hit any edge cases, but I'm not sure what those look like)