
Make mv and addmv support torch.float16#75220

Closed
lezcano wants to merge 5 commits intogh/Lezcano/61/basefrom
gh/Lezcano/61/head

Conversation

@lezcano
Collaborator

@lezcano lezcano commented Apr 4, 2022

In the next PR of this stack we replace a use of at::mm with at::mv. To be able to do this, at::mv needs to support the same input dtypes as at::mm.
@facebook-github-bot
Contributor

facebook-github-bot commented Apr 4, 2022

💊 CI failures summary and remediations

As of commit 60dde5d (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.
@lezcano lezcano requested a review from ngimel April 4, 2022 20:29
@lezcano lezcano changed the title from "Make mv and related functions support torch.float16" to "Make mv and addmv support torch.float16" Apr 4, 2022
@lezcano lezcano added the module: linear algebra and topic: not user facing labels Apr 4, 2022
lezcano added 2 commits April 4, 2022 22:01
lezcano added 2 commits April 6, 2022 14:22
@ngimel
Collaborator

ngimel commented Apr 6, 2022

Should we instead disable (non-working) fp16 support for mm? #69969.
cc @mruberry, would that require going through a deprecation process?
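The accuracy concern behind disabling fp16 mm is that accumulating in half precision loses low-order bits quickly. A minimal sketch simulating fp16 rounding with Python's struct 'e' (half-precision) format code; this is illustrative only and not what the actual kernels do (real GEMM kernels often accumulate in fp32):

```python
import struct

def fp16(x):
    # Round a Python float to the nearest IEEE 754 half-precision value
    # (struct's 'e' format), then convert back to a Python float.
    return struct.unpack('e', struct.pack('e', x))[0]

# fp16 has an 11-bit significand, so integers above 2048 are no longer
# exactly representable; a running fp16 sum of ones stalls at 2048.
acc = 0.0
for _ in range(4096):
    acc = fp16(acc + 1.0)

print(acc)  # 2048.0, while the exact sum is 4096.0
```

This is why silently computing mm/mv with a pure-fp16 accumulator can return badly wrong results, which is the trade-off being weighed against a hard break here.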

@lezcano
Collaborator Author

lezcano commented Apr 6, 2022

I'm happy with the hard break, as it's better to break hard than to silently give wrong results, but let's see what Mike has to say.

@ngimel ngimel closed this May 14, 2022
@facebook-github-bot facebook-github-bot deleted the gh/Lezcano/61/head branch June 13, 2022 14:17