float NewType infers int | float when doing an operation with Unknown #2914

@Jeremiah-England

Description

Summary

A float-based NewType differs from plain float in that binary operations with an Unknown operand yield int | float instead of Unknown. In most cases this is harmless, but it produces incorrect types when mixing floats with NumPy arrays.

from typing import NewType, reveal_type

T = NewType("T", float)


d = {} | {"one": 1}  # Creates a dict[Unknown, Unknown], breaking type inference on the "one" key.

reveal_type(d)  # dict[Unknown, Unknown]
reveal_type(d["one"])  # Unknown
reveal_type(1.0 * d["one"])  # Unknown (good)
reveal_type(T(1.0) * d["one"])  # int | float (Err!)


# In most cases, the inference above is harmless.
# However, when working with Numpy arrays, it leads to incorrect types.

import numpy as np

arr = np.array([1, 2, 3, 4, 5])
reveal_type(T(1.0) + arr)  # Unknown
reveal_type(arr * T(1.0))  # Unknown
reveal_type(T(1.0) + arr * T(1.0))  # int | float (incorrect!)

reveal_type(arr.__mul__(T(1.0)))  # Unknown
reveal_type(T(1.0) + arr.__mul__(T(1.0)))  # int | float (incorrect!)
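For context, the static distinction has no runtime counterpart: a NewType is an identity callable at runtime, so T(1.0) is literally the float 1.0 and arithmetic dispatches exactly as it does for float. A minimal runtime check (reusing the T defined above):

```python
from typing import NewType

T = NewType("T", float)

# At runtime, NewType is an identity function: T(1.0) IS the float 1.0,
# so every binary operation dispatches through float.__mul__ / float.__add__.
x = T(1.0)
print(type(x) is float)  # True
print(x * 2)             # 2.0
```

This is why one would expect the checker to fall back to float's behavior (propagating Unknown) rather than producing int | float.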

This may be a result of the custom handling added for #2077.

Version

ty 0.0.19
