feat: Support LoKr/LoHa for SDXL and Anima #2275
Merged
- networks/network_base.py: shared AdditionalNetwork base class with architecture auto-detection (SDXL/Anima) and generic module injection
- networks/loha.py: LoHa (Low-rank Hadamard Product) module with HadaWeight custom autograd, training/inference classes, and factory functions
- networks/lokr.py: LoKr (Low-rank Kronecker Product) module with factorization, training/inference classes, and factory functions
- library/lora_utils.py: extend weight merge hook to detect and merge LoHa/LoKr weights alongside standard LoRA

Linear and Conv2d 1x1 layers only; Conv2d 3x3 (Tucker decomposition) support will be added separately.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Added Tucker decomposition functionality to LoHa and LoKr modules.
- Implemented new methods for weight rebuilding using Tucker decomposition.
- Updated initialization and weight handling for Conv2d 3x3+ layers.
- Modified get_diff_weight methods to accommodate Tucker and non-Tucker modes.
- Enhanced network base to include unet_conv_target_modules for architecture detection.
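To illustrate the core LoHa idea described above (ignoring the Tucker/Conv2d 3x3 path), a minimal sketch of rebuilding the delta weight for a Linear layer follows. This is not the PR's actual code; the function and variable names here are invented for illustration. LoHa represents the weight delta as the element-wise (Hadamard) product of two low-rank factorizations:

```python
import torch

def loha_diff_weight(w1_a, w1_b, w2_a, w2_b, scale=1.0):
    """Rebuild a LoHa delta weight: (w1_a @ w1_b) * (w2_a @ w2_b) * scale.

    w1_a, w2_a have shape (out_dim, rank); w1_b, w2_b have shape
    (rank, in_dim). The Hadamard product of the two rank-r products
    can express deltas of rank up to r**2, which is the appeal of
    LoHa over plain low-rank LoRA at the same parameter count.
    """
    return (w1_a @ w1_b) * (w2_a @ w2_b) * scale

# Tiny example: rank-2 factors for a 4x3 Linear weight.
out_dim, in_dim, rank = 4, 3, 2
w1_a, w2_a = torch.randn(out_dim, rank), torch.randn(out_dim, rank)
w1_b, w2_b = torch.randn(rank, in_dim), torch.randn(rank, in_dim)
delta = loha_diff_weight(w1_a, w1_b, w2_a, w2_b)
assert delta.shape == (out_dim, in_dim)
```

The custom HadaWeight autograd mentioned in the commit exists to make this reconstruction memory-efficient during training; the sketch above relies on PyTorch's default autograd instead.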
…, see #2272 for details
…onModel for gen_img.py
…uanImage-2.1, and Anima-Preview
Pull request overview
This PR expands the codebase’s LoRA ecosystem by adding LyCORIS-family variants (LoHa/LoKr), introducing a shared AdditionalNetwork base with architecture auto-detection, and extending safetensors loading to merge LoHa/LoKr weights in addition to standard LoRA.
Changes:
- Added a shared networks/network_base.py with architecture detection and a generic AdditionalNetwork used by LyCORIS-style modules.
- Introduced new networks/loha.py and networks/lokr.py implementations (train + inference + direct weight merge helpers).
- Updated library/lora_utils.py to merge LoHa/LoKr weights during safetensors loading; added documentation for LoHa/LoKr usage.
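For context on the networks/lokr.py addition, a minimal sketch of the LoKr weight reconstruction is shown below (illustrative only; the names and the exact factorization handling in the PR may differ). LoKr represents the delta as a Kronecker product of a small full matrix with a low-rank factor, so a large weight can be covered with very few parameters:

```python
import torch

def lokr_diff_weight(w1, w2_a, w2_b, scale=1.0):
    """Rebuild a LoKr delta weight as a Kronecker product.

    w1 is a small full factor of shape (a, b); w2 = w2_a @ w2_b is a
    low-rank factor of shape (c, d). torch.kron then yields a delta
    of shape (a*c, b*d), matching the target layer's weight.
    """
    return torch.kron(w1, w2_a @ w2_b) * scale

# Example: factor an 8x6 weight as kron of (2,2) and (4,3) pieces,
# with the (4,3) piece itself rank-2 factorized.
w1 = torch.randn(2, 2)
w2_a, w2_b = torch.randn(4, 2), torch.randn(2, 3)
delta = lokr_diff_weight(w1, w2_a, w2_b)
assert delta.shape == (8, 6)
```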
Reviewed changes
Copilot reviewed 7 out of 8 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| networks/network_base.py | New shared base (AdditionalNetwork) + architecture detection + optimizer param grouping support for LoHa/LoKr-like modules. |
| networks/lora_anima.py | Reintroduces self-contained LoRAModule/LoRAInfModule in the Anima LoRA implementation (removes dependency on lora_flux). |
| networks/lokr.py | Adds LoKr module implementation (train/inference) and direct tensor-merge helper. |
| networks/loha.py | Adds LoHa module implementation (train/inference) and direct tensor-merge helper. |
| library/lora_utils.py | Extends safetensors load-time merge hook to support LoHa/LoKr in addition to standard LoRA. |
| docs/loha_lokr.md | New documentation describing LoHa/LoKr support, CLI usage, and options. |
| .gitignore | Ignores a references directory. |
| .ai/context/01-overview.md | Updates internal context docs to list additional supported model families. |
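The architecture auto-detection mentioned for networks/network_base.py could plausibly work by inspecting weight key names, along these lines. This is a guess at the mechanism, not the PR's code; the marker strings below are hypothetical stand-ins (input_blocks is a real SDXL UNet prefix, but the Anima marker is invented):

```python
def detect_architecture(state_dict_keys):
    """Heuristically guess the target architecture from key names.

    Returns "sdxl", "anima", or "unknown". The real detection in
    network_base.py may rely on different markers entirely.
    """
    keys = list(state_dict_keys)
    if any("double_blocks" in k for k in keys):  # hypothetical Anima marker
        return "anima"
    if any("input_blocks" in k for k in keys):   # SDXL UNet block prefix
        return "sdxl"
    return "unknown"
```

Keying detection off the checkpoint itself, rather than a CLI flag, lets one loader handle both model families without user intervention.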
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…nsors_with_lora_and_fp8 function
…sd-scripts into feat-support-lokr-loha
This pull request introduces significant improvements to LoRA (Low-Rank Adaptation) support in the codebase, particularly enhancing the merging logic to better handle different LoRA variants, and reintroducing the LoRAModule and LoRAInfModule classes in lora_anima.py. The main focus is on increasing compatibility with the LoHa and LoKr LoRA types, and on providing a full, self-contained implementation of LoRA modules for both training and inference.

LoRA merging and compatibility improvements:
- Updated weight_hook_func in lora_utils.py to check for and merge LoHa and LoKr weights in addition to standard LoRA, increasing compatibility with more LoRA variants. [1] [2]

LoRA module implementation:
- Reintroduced the LoRAModule and LoRAInfModule classes in lora_anima.py, providing a self-contained LoRA module for both training and inference, including dropout, rank dropout, module dropout, and merging logic for various layer types (Linear and Conv2d).

These changes make the codebase more flexible and robust when working with different LoRA networks, and provide a clear, maintainable implementation of LoRA modules.
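The variant-aware merge described above can be sketched as follows. The key names follow the common LyCORIS/sd-scripts state-dict conventions (hada_w1_a..., lokr_w1/lokr_w2, lora_down/lora_up); the PR's actual hook may differ in details such as alpha scaling, the factorized-w2 LoKr case, and Conv2d reshaping:

```python
import torch

def merge_additional_weight(base_weight, lora_sd, prefix, multiplier=1.0):
    """Merge one module's LoHa/LoKr/LoRA delta into a base weight.

    Dispatches on which keys are present for `prefix` in the loaded
    state dict, mirroring the check-then-merge flow of a load-time
    weight hook. Returns the (new) merged weight tensor.
    """
    if f"{prefix}.hada_w1_a" in lora_sd:  # LoHa: Hadamard of two low-rank products
        d = (lora_sd[f"{prefix}.hada_w1_a"] @ lora_sd[f"{prefix}.hada_w1_b"]) * (
            lora_sd[f"{prefix}.hada_w2_a"] @ lora_sd[f"{prefix}.hada_w2_b"]
        )
    elif f"{prefix}.lokr_w1" in lora_sd:  # LoKr: Kronecker product (non-factorized w2)
        d = torch.kron(lora_sd[f"{prefix}.lokr_w1"], lora_sd[f"{prefix}.lokr_w2"])
    elif f"{prefix}.lora_down.weight" in lora_sd:  # standard LoRA: up @ down
        d = lora_sd[f"{prefix}.lora_up.weight"] @ lora_sd[f"{prefix}.lora_down.weight"]
    else:
        return base_weight  # no weights for this module; leave untouched
    return base_weight + multiplier * d.reshape(base_weight.shape)
```

Dispatching on key presence keeps a single hook compatible with all three formats, which is why the same safetensors loading path can serve plain LoRA and the LyCORIS variants.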