MPS ComplexDouble (complex128) Support Inconsistency #176981

@psiwho

Description

🐛 Describe the bug

torch.cdouble (complex128) tensors can be moved to the mps device via .to(device) without error, but any subsequent operation on them fails with a RuntimeError. In contrast, converting a tensor to complex128 while it is already on the mps device is correctly blocked with a TypeError, so the two paths behave inconsistently.

Reproduction

import torch
device = torch.device("mps")

# 1. Incorrectly allows move to MPS
x = torch.randn(2, 2, dtype=torch.cdouble, device="cpu").to(device) 
print(x.dtype) # torch.complex128 on mps

# 2. Crash on operation
# RuntimeError: Undefined type ComplexDouble
y = x * x 

# 3. Correctly blocks conversion (Inconsistent with above)
# TypeError: Trying to convert ComplexDouble to the MPS backend...
z = torch.randn(2, 2, device=device).to(torch.cdouble)

Expected Behavior

.to(device) should raise a TypeError when attempting to move a complex128 tensor to the MPS backend, as it is unsupported. Error messages should be consistent across conversion methods.
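Until the move path is fixed, callers can enforce the check themselves before transferring. Below is a minimal sketch of such a guard; the helper name `to_device_checked` and the `downcast` option are hypothetical, not PyTorch API. It raises a TypeError mirroring the one `.to(torch.cdouble)` already produces on MPS, and can optionally fall back to complex64/float32, which the MPS backend does support:

```python
import torch

# 64-bit dtypes the MPS backend cannot represent, with their 32-bit fallbacks.
_MPS_DOWNCAST = {torch.float64: torch.float32, torch.complex128: torch.complex64}

def to_device_checked(t: torch.Tensor, device, downcast: bool = False) -> torch.Tensor:
    """Move `t` to `device`, rejecting (or downcasting) dtypes MPS cannot hold."""
    device = torch.device(device)
    if device.type == "mps" and t.dtype in _MPS_DOWNCAST:
        if not downcast:
            # Mirror the TypeError that on-device conversion already raises.
            raise TypeError(
                f"Trying to convert {t.dtype} to the MPS backend; "
                f"pass downcast=True to fall back to {_MPS_DOWNCAST[t.dtype]}."
            )
        # Downcast on the source device before the transfer.
        t = t.to(_MPS_DOWNCAST[t.dtype])
    return t.to(device)
```

The dtype check runs before any MPS call, so the TypeError path works even on machines without an MPS device.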

Versions

PyTorch: 2.10.0
OS: macOS (Apple Silicon, MPS enabled)
Device: MPS

cc @malfet @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames @kulinseth @DenisVieriu97 @jhavukainen @aditvenk

Metadata

    Labels

    actionable
    bot-triaged — This is a label only to be used by the auto triage bot
    low priority — We're unlikely to get around to doing this in the near future
    module: complex — Related to complex number support in PyTorch
    module: error checking — Bugs related to incorrect/lacking error checking
    module: mps — Related to Apple Metal Performance Shaders framework
    triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
