Add numpy.array-like type inference to torch.tensor. #5997
gchanan merged 13 commits into pytorch:master
Conversation
open questions:
@pytorchbot retest this please.
Latest commit treats Python floats as the default (scalar) dtype instead of doubles.
For 2. above, we now have tests that exercise CUDA type inference. Also, I changed the promoteTypes logic to favor float32 over float64 for the combinations (float32, int32) and (float32, int64); I believe @fmassa and @apaszke discussed this before. The particular reason in this case is so that:
@gchanan I think it's better like this. Plus, current
    new_with_type_conversion(type, var, device);
}
const auto& type_to_use = type_inference ? var.type() : type;
return copy_variables ? new_with_tensor_copy(type_to_use, var, device) :
  /* i8 */ { i8, i8, i8, i8, i8, f8, f4, f8, ud },
  /* f2 */ { f2, f2, f4, f8, f8, f2, f4, f8, ud },
- /* f4 */ { f4, f4, f4, f8, f8, f4, f4, f8, ud },
+ /* f4 */ { f4, f4, f4, f4, f4, f4, f4, f8, ud },
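The changed table row above means float32 combined with any integer type now promotes to float32 rather than float64. In a current PyTorch build this can be checked through `torch.promote_types`, the public API over this table (a sketch of the resulting behavior, not the PR's test code):

```python
import torch

# Mixed float32/integer combinations stay float32 instead of widening
# to float64 (NumPy gives float64 for float32 + int64, for example).
print(torch.promote_types(torch.float32, torch.int32))   # torch.float32
print(torch.promote_types(torch.float32, torch.int64))   # torch.float32
# Two floating-point types still widen as before:
print(torch.promote_types(torch.float32, torch.float64)) # torch.float64
```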
}
#endif
if (PySequence_Check(obj)) {
  ScalarType scalarType = ScalarType::NumOptions;
// match NumPy semantics, except use default tensor type instead of double.
if (length == 0) return torch::tensor::get_default_tensor_type().scalarType();
for (int i = 0; i < length; ++i) {
  THPObjectPtr handle(PySequence_GetItem(obj, i));
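The loop above walks the sequence to infer a scalar type, with 0-length sequences falling back to the default tensor type as the comment notes. In a recent PyTorch build (where the default dtype is float32), the user-visible behavior looks like this sketch:

```python
import torch

# Integer sequences infer int64; float sequences infer the default
# (scalar) dtype rather than float64 as NumPy would.
print(torch.tensor([1, 2, 3]).dtype)   # torch.int64
print(torch.tensor([1.0, 2.0]).dtype)  # torch.float32
# A 0-length sequence also gets the default scalar type:
print(torch.tensor([]).dtype)          # torch.float32
```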
IIUC, I think you'll also want to change a couple of the distributions.constraints.* to floats. In https://github.com/pytorch/pytorch/blob/master/torch/distributions/constraints.py#L240:

- positive = _GreaterThan(0)
+ positive = _GreaterThan(0.)
...
- unit_interval = _Interval(0, 1)
+ unit_interval = _Interval(0., 1.)

Note that we have separate constraints for integers.
* Add numpy.array-like type inference to torch.tensor.
* Temporary fix for int/double types.
* Treat python floats as the default (scalar) dtype.
* Also make 0-length sequences the default scalar type and add more tests.
* Add type inference to sparse_coo_tensor.
* Fix sparse test.
* Remove allow_variables.
* Check numpy platform bits.
* Address review comments.
* Make suggested changes to constraints.
* More checking windows builds.
* Fix test for windows.
i.e. torch.tensor(0) will create a tensor with scalar type int64, torch.tensor(0.) will create a tensor with scalar type float64.
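Note that the float64 result described here predates the later commit mapping Python floats to the default (scalar) dtype; with that change, in a current PyTorch build where the default dtype is float32, the scalar cases come out as:

```python
import torch

# Python int -> int64; Python float -> the default (scalar) dtype.
print(torch.tensor(0).dtype)   # torch.int64
print(torch.tensor(0.).dtype)  # torch.float32 (default dtype, not float64)
```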