Conversation
Allows multiplication of e.g. numpy.float32 with tensors. This came up with pytorch#9468.
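For context, a minimal sketch of the behavior this change enables (the tensor shape and dtype choices below are illustrative, not taken from the patch):

    import numpy as np
    import torch

    t = torch.ones(3)
    # With this change, a numpy scalar behaves like a plain Python number
    # in arithmetic with tensors, rather than failing or taking an
    # unintended path (see pytorch#9468).
    print(t * np.float32(2.0))   # tensor([2., 2., 2.])
    print(np.float64(0.5) * t)   # tensor([0.5000, 0.5000, 0.5000])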
@pytorchbot retest this please
facebook-github-bot left a comment:
@ssnl has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Looks good! Maybe add a couple of tests?

I would want to test this by adding numpy float32 scalars to the test introduced in #9651, so my original plan was to revise this PR after that to avoid the merge conflict.
test/test_torch.py (Outdated)

    for np_dtype in [np.float32, np.float64, np.int32, np.int64]:
        np_sc = np_dtype(2.0)
        t = torch.ones(2, requires_grad=True)
        r1 = t * np_sc
From SsnL's review. Thank you!
@ssnl I added a loop over float/double as input tensor and assert that output is the same dtype. Thanks for the suggestion to check result dtype.
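A sketch of what that dtype check might look like (a hypothetical reconstruction from the comment above; the exact test in the PR may differ):

    import numpy as np
    import torch

    for tensor_dtype in [torch.float32, torch.float64]:
        t = torch.ones(2, dtype=tensor_dtype, requires_grad=True)
        for np_dtype in [np.float32, np.float64, np.int32, np.int64]:
            r = t * np_dtype(2.0)
            # The numpy scalar should act like a Python number: the result
            # keeps the input tensor's dtype and its grad tracking.
            assert r.dtype == tensor_dtype
            assert r.requires_grad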
facebook-github-bot left a comment:
SsnL has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@t-vi We probably need to rebase this PR onto current master.
Will do next week.
facebook-github-bot left a comment:
weiyangfb has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
LGTM. @gchanan any other thoughts?
test/test_torch.py (Outdated)

    r2 = t * np_sc
    self.assertIsInstance(r2, torch.Tensor)
    self.assertTrue(r2.requires_grad)
    for np_dtype in [np.float32, np.float64, np.int32, np.int64]:
torch/csrc/utils/python_numbers.h (Outdated)

    inline bool THPUtils_checkDouble(PyObject* obj) {
    #if PY_MAJOR_VERSION == 2
    -  return PyFloat_Check(obj) || PyLong_Check(obj) || PyInt_Check(obj);
    +  return PyFloat_Check(obj) || PyLong_Check(obj) || PyInt_Check(obj) || torch::utils::is_numpy_scalar(obj);
torch/csrc/utils/python_numbers.h (Outdated)

       return PyFloat_Check(obj) || PyLong_Check(obj) || PyInt_Check(obj) || torch::utils::is_numpy_scalar(obj);
    #else
    -  return PyFloat_Check(obj) || PyLong_Check(obj);
    +  return PyFloat_Check(obj) || PyLong_Check(obj) || torch::utils::is_numpy_scalar(obj);
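For illustration, a rough Python-level analogue of what the patched predicate accepts (a hypothetical helper for exposition, not part of the codebase):

    import numpy as np

    def accepts_as_double(obj):
        # Mirrors the C++ check above: Python floats and ints pass
        # (a separate PyInt check exists only on Python 2), and after
        # this PR numpy scalars such as np.float32(1.0) pass as well.
        return isinstance(obj, (float, int)) or isinstance(obj, np.number)

    assert accepts_as_double(2.0)
    assert accepts_as_double(3)
    assert accepts_as_double(np.float32(2.0))
    assert not accepts_as_double("2.0")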
And expand the range of tested numpy types. Thank you, gchanan, for the suggestions!
Is the
@pytorchbot retest this please.
I don't think the test failures are relevant; other PRs failed with similar errors around the same time.
facebook-github-bot left a comment:
weiyangfb has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Looks like the comments are addressed; should we merge this? @gchanan
resolve conflict in data parallel model

* master: (201 commits)
  Add cost inference to ConvGradient and WeightedSum operators (pytorch#10744)
  Move collapse dims into a single place (pytorch#11272)
  Fix some more warnings (pytorch#11257)
  Fix the batchnorm onnx exporting when affine=False
  Improve error message to include return types too (pytorch#11245)
  Check doxygen output in travis (pytorch#11124)
  Accept more numpy scalars as doubles (pytorch#9659)
  Fixed log message (pytorch#10874)
  Fix to distribution.__repr__ with lazy attributes (pytorch#11263)
  Add import export step to end to end tests
  Add complex hooks for out of tree complex implementation. (pytorch#11216)
  Unify opt flag for cmake codegen (pytorch#11227)
  nomnigraph - fix memory error in NN subgraph matchOp (pytorch#11127)
  Port PackedSequences functions to C++ (pytorch#11224)
  Treat numerical differences as warnings instead of errors when tracing (pytorch#11246)
  add a Float16UniformFill (pytorch#11123)
  Implement torch.tensordot (pytorch#10025)
  keep net type info when generating model complete net (pytorch#11032)
  Get rid of some uses of type() (pytorch#11215)
  Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (pytorch#11205)
  ...
Summary: Allows multiplication of e.g. numpy.float32 with tensors. This came up with pytorch#9468. If you want this, I'll add tests after the other patch is done (doing it now would create merge conflicts, so I prefer to wait).

Pull Request resolved: pytorch#9659

Differential Revision: D8948078

Pulled By: weiyangfb

fbshipit-source-id: c7dcc57b63e2f100df837f70e1299395692f1a1b