Fix xpu to cpu #1570

Merged
Titus-von-Koeller merged 2 commits into bitsandbytes-foundation:multi-backend-refactor from jiqing-feng:xpu
Mar 24, 2025

Conversation

@jiqing-feng
Contributor

This PR fixes moving a model from xpu to cpu.
The error can be reproduced with the following code:

import torch
from transformers import AutoModelForCausalLM
from peft import get_peft_model, LoraConfig

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", load_in_8bit=True)
model.cpu()
# Before this fix, some parameters remained on xpu after model.cpu():
weights_not_cpu = [name for name, p in model.named_parameters() if p.device != torch.device("cpu")]
print(weights_not_cpu)
lora_config = LoraConfig(use_dora=True)
peft_model = get_peft_model(model, lora_config)
print(peft_model)

Trace log:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, xpu:0 and cpu!
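The failure mode above can be illustrated with a minimal sketch (hypothetical code, not the real bitsandbytes or torch API): a quantized-parameter class whose `to()` only handled moves onto an accelerator, so a move back to cpu silently kept the data on xpu, leaving the model's tensors split across two devices.

```python
# Hypothetical sketch of the bug class this PR addresses. Int8ParamSketch and
# its device strings are illustrative stand-ins, not bitsandbytes identifiers.

class Int8ParamSketch:
    """Toy stand-in for a quantized parameter that tracks its device."""

    def __init__(self, device="xpu:0"):
        self.device = device

    def to(self, device):
        # Buggy behavior: only migrate when the target is an accelerator,
        # so .to("cpu") is a no-op and the data stays on xpu.
        # Fixed behavior: also migrate when the target is "cpu".
        if device.startswith("xpu") or device == "cpu":
            self.device = device
        return self

p = Int8ParamSketch()
p.to("cpu")
print(p.device)  # cpu
```

With the buggy branch (no `cpu` case), `p.device` would still read `xpu:0` after the move, which is exactly the mixed-device state that triggers the RuntimeError in the trace log.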

@jiqing-feng jiqing-feng marked this pull request as ready for review March 24, 2025 05:24
@Titus-von-Koeller
Collaborator

LGTM, thanks!

@Titus-von-Koeller Titus-von-Koeller merged commit d3658c5 into bitsandbytes-foundation:multi-backend-refactor Mar 24, 2025
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
@jiqing-feng jiqing-feng deleted the xpu branch March 19, 2026 04:06
2 participants