Audit use of matmuls in backward formulas of linear algebra operations #68020

@ngimel

Description

By default, matmul on Ampere GPUs uses TF32, which leads to unacceptable accuracy loss for linalg ops. When linalg ops go through MAGMA we disable TF32, but for the other code paths (e.g. backward formulas that call matmul directly) there is no systematic approach.
See #67948
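For context on why the accuracy loss occurs: TF32 keeps float32's 8-bit exponent but only 10 mantissa bits instead of 23. The snippet below is an illustrative sketch (not PyTorch code) that simulates the mantissa truncation in pure Python; real Tensor Cores round rather than truncate, but the magnitude of the error is the same order.

```python
import struct

def round_to_tf32(x: float) -> float:
    """Simulate TF32 by dropping the low 13 mantissa bits of a float32.

    float32 has 23 mantissa bits; TF32 keeps only 10, so values whose
    mantissa uses the low 13 bits lose up to ~2**-10 relative accuracy.
    """
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    bits &= ~((1 << 13) - 1)  # zero out the 13 low mantissa bits
    return struct.unpack('<f', struct.pack('<I', bits))[0]

exact = 1.0009765625        # 1 + 2**-10: representable in TF32, survives intact
lossy = 1.0001              # needs more than 10 mantissa bits
print(round_to_tf32(exact) - exact)   # no error
print(round_to_tf32(lossy) - lossy)   # ~1e-4 relative error from one rounding
```

An error of ~1e-4 per multiply is far above float32 roundoff (~1e-7), and it compounds in iterative or ill-conditioned linalg routines. Callers who need full precision can disable TF32 for matmul globally with `torch.backends.cuda.matmul.allow_tf32 = False`, but the point of this issue is that the library's own backward formulas should not silently depend on that user-facing setting.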

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @lezcano @zasdfgbnm @ptrblck

Metadata


    Labels

    - module: linear algebra — Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply (matmul)
    - module: tf32 — Related to tf32 data format
    - triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
