[bug] fix using wrong model caused by alias #14655
Merged
AUTOMATIC1111 merged 1 commit into AUTOMATIC1111:dev on Jan 20, 2024
Conversation
AUTOMATIC1111 approved these changes on Jan 20, 2024
Description
I'm training LoRA models and have two models with the same 'ss_output_name' but different filenames (I renamed one of them after training). When I ran inference in the webui, the result didn't match the model name I wrote in the prompt. After adding code to display the model path resolved for each name in the prompt, I found it was loading the wrong model.
This bug appears to be caused by the following:
For example, suppose there is a model A.safetensors whose 'ss_output_name' in its metadata is 'A', and another model B.safetensors whose 'ss_output_name' is also 'A'. While the available_network_aliases dict is being built, A.safetensors is processed before B.safetensors, so once B.safetensors has been processed, the key 'A' in available_network_aliases maps to B.safetensors. Even though this alias is recorded in forbidden_network_aliases, the network_on_disk object is still looked up in available_network_aliases, so using '<lora:A:1>' in the prompt loads B.safetensors instead of A.safetensors.
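The collision can be illustrated with a minimal, self-contained sketch. The dict names mirror available_networks, available_network_aliases, and forbidden_network_aliases in extensions-builtin/Lora/networks.py, but the register/resolve helpers below are simplified stand-ins for illustration, not the repository's exact code:

```python
available_networks = {}           # filename stem -> path
available_network_aliases = {}    # ss_output_name alias -> path
forbidden_network_aliases = {}    # aliases claimed by more than one file

def register(path, ss_output_name):
    """Mimic alias registration: later files overwrite earlier aliases."""
    name = path.rsplit("/", 1)[-1].removesuffix(".safetensors")
    available_networks[name] = path
    if ss_output_name in available_network_aliases:
        # Alias collision: the alias is remembered as ambiguous...
        forbidden_network_aliases[ss_output_name.lower()] = 1
    # ...but the alias entry is still overwritten by the later file.
    available_network_aliases[ss_output_name] = path

register("models/Lora/A.safetensors", "A")   # processed first
register("models/Lora/B.safetensors", "A")   # same ss_output_name, processed second

# Buggy lookup: always goes through the alias dict, so '<lora:A:1>' hits B.
print(available_network_aliases.get("A"))    # models/Lora/B.safetensors

# The idea behind this fix: when an alias is known to be ambiguous,
# fall back to the exact filename instead of the alias.
def resolve(name):
    if name.lower() in forbidden_network_aliases:
        return available_networks.get(name)
    return available_network_aliases.get(name)

print(resolve("A"))                           # models/Lora/A.safetensors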
Screenshots/videos:
To reproduce the bug, I used two LoRA models from Civitai:
l0ngma1d.safetensors
Yatogami Tohka(dal).safetensors
Originally, each model's filename and 'ss_output_name' match. To reproduce the bug, I changed the 'ss_output_name' in Yatogami Tohka(dal).safetensors to 'l0ngma1d' (a sketch of how the metadata can be edited follows the link below). Here are the models I used:
https://drive.google.com/drive/folders/1vRg9h2P3H28zTaz-9QZOS9Pw_3cI-bl9?usp=drive_link
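For reference, here is a minimal sketch of rewriting the 'ss_output_name' metadata for this reproduction, assuming the safetensors Python package is installed; the file path matches the second model above:

```python
from safetensors import safe_open
from safetensors.torch import save_file

src = "Yatogami Tohka(dal).safetensors"

# Read the existing metadata and all tensors from the file.
with safe_open(src, framework="pt") as f:
    metadata = dict(f.metadata() or {})
    tensors = {key: f.get_tensor(key) for key in f.keys()}

# Overwrite the output name so it collides with l0ngma1d.safetensors,
# then save the file back with the modified metadata.
metadata["ss_output_name"] = "l0ngma1d"
save_file(tensors, src, metadata=metadata)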
Generation parameters: the prompt references both l0ngma1d and Yatogami Tohka(dal).
[screenshot] Inference with the original model
[screenshot] Inference with the modified model
[screenshot] Inference with the modified model (my code)
Checklist: