Fix implementation of float64 in torch_utils #16058
viking1304 wants to merge 2 commits into AUTOMATIC1111:dev from
Conversation
TBH, I am not sure. One part of me thinks the new PR is totally reasonable, since keeping 3.9 support is nice, but another part of me thinks that we should drop 3.9 compatibility.
TLDR: as I am not the one making the decisions, I don't need to use my brain to
I didn't notice any problems with 3.11 on Mac, but I still use 3.10 for my main installation and when testing code before submitting PRs. I also managed to use 3.12 with just a few minor tweaks and issues.
https://github.com/pytorch/pytorch/releases/tag/v2.3.0
Already merged the other one.
Description
The implementation of float64 in torch_utils added in #15815 does not work as intended.
I was getting errors while testing PLMS, DDIM, and DDIM CFG++ on my M3 Pro Mac:
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
This is a fixed implementation of float64, which now works as intended.
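As a rough sketch of the kind of helper involved (the exact function name, signature, and device list here are assumptions, not the repository's actual code), the dtype selection needs to fall back to float32 on backends such as MPS that do not support float64:

```python
import torch

# Hypothetical sketch (not the repository's exact code): pick the widest
# float dtype a tensor's backend actually supports. MPS has no float64
# support, so fall back to float32 there instead of letting a later cast
# raise the TypeError shown above.
def float64(t: torch.Tensor) -> torch.dtype:
    if t.device.type in ('mps', 'xpu'):  # assumed backend list
        return torch.float32
    return torch.float64
```

A plain `in` membership test on `t.device.type` also works on Python 3.9, which may be what the 3.9-compatibility discussion above refers to, since structural `match` statements require Python 3.10 or later.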