Add support for Style & Prompt node in custom workflows #2276
Conversation
Acly
left a comment
This looks nice to have! Just had a few change requests.
ai_diffusion/custom_workflow.py (Outdated)

```python
case ("ETN_KritaPromptStyle", _):
    name = node.input("name", "Style")
    yield CustomParam(ParamKind.style, name, node.input("sampler_preset", "auto"))
    yield CustomParam(
```
It's probably enough to just use a single `yield CustomParam(ParamKind.synced_prompt, name)`.
It's only used to detect whether to show the prompt widget, right? The default also makes no sense to me, since the node doesn't have these inputs.
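A minimal runnable sketch of the suggested simplification, using hypothetical stand-ins for the plugin's `ParamKind` and `CustomParam` types (the real ones live in ai_diffusion):

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical stand-ins for the plugin's ParamKind/CustomParam types.
class ParamKind(Enum):
    synced_prompt = auto()

@dataclass
class CustomParam:
    kind: ParamKind
    name: str

def node_params(node_type: str, name: str):
    # A single synced_prompt entry is enough to signal that the
    # prompt widget should be shown for this node.
    if node_type == "ETN_KritaPromptStyle":
        yield CustomParam(ParamKind.synced_prompt, name)

params = list(node_params("ETN_KritaPromptStyle", "Style"))
```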
ai_diffusion/custom_workflow.py (Outdated)

```python
prompt_style_count = sum(1 for _ in wf.find(type="ETN_KritaPromptStyle"))
if prompt_style_count > 1:
    self.validation_error = _(
        "Workflow contains multiple Krita Prompt Style nodes. "
```
I'm not 100% sure, but I think you need to put the string on one line so it's picked up by the translation script (it's just a crude regex that scans the code for strings to translate).
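To illustrate the failure mode: a hedged sketch with a hypothetical single-pattern regex in the spirit of such a script. A string split across adjacent literals (implicit concatenation) produces no match, while the one-line form is found:

```python
import re

# Hypothetical extraction regex: matches _("...") with exactly one
# quoted string inside the call. Implicit string concatenation across
# lines breaks the pattern, so the string is silently skipped.
TRANSLATABLE = re.compile(r'_\(\s*"((?:[^"\\]|\\.)*)"\s*\)')

one_line = 'self.validation_error = _("Workflow contains multiple Krita Prompt Style nodes.")'
multi_line = 'self.validation_error = _(\n    "Workflow contains "\n    "multiple nodes."\n)'

found_one = TRANSLATABLE.findall(one_line)    # picked up
found_multi = TRANSLATABLE.findall(multi_line)  # missed
```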
ai_diffusion/custom_workflow.py (Outdated)

```diff
     return self.workflow_id or "Custom Workflow"

-    def collect_parameters(self, layers: "LayerManager", bounds: Bounds, animation=False):
+    def collect_parameters(self, layers: "LayerManager", bounds: Bounds, animation=False, model=None):
```
I don't like this since it's kind of cyclical.
From what I see, writing the prompt to custom params isn't useful: the prompt comes from the model, which calls this function to write it into params, and later reads it back out of params to prepare/evaluate it. That round-trip seems pointless; the model can just read the prompts directly when it prepares them.
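A minimal sketch of the suggested direct read, using hypothetical `Model`/`Regions` stand-ins rather than the real plugin classes:

```python
# Hypothetical sketch: the model prepares its prompts from its own
# state instead of round-tripping them through the params dict.
class Regions:
    def __init__(self, positive: str, negative: str):
        self.positive = positive
        self.negative = negative

class Model:
    def __init__(self, positive: str, negative: str):
        self.regions = Regions(positive, negative)

    def prepare_prompts(self):
        # No params["<style>/positive_prompt"] round-trip needed.
        return self.regions.positive, self.regions.negative

model = Model("a castle on a hill", "blurry, low quality")
```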
ai_diffusion/model.py (Outdated)

```python
if style is None:
    return {}

positive = params.get(f"{style_name}/positive_prompt", "")
```
As noted above, you can just read `self.regions.positive` directly here.
ai_diffusion/model.py (Outdated)

```python
arch = resolve_arch(style, self._connection.client_if_connected)
prepared = workflow.prepare_prompts(cond, style, seed, arch, FileLibrary.instance())

params[f"{style_name}/_prepared"] = {
```
Rather than putting this fake entry into params, I'd prefer to keep it separate. Extend api.py:

```python
@dataclass
class CustomWorkflowInput:
    workflow: dict
    params: dict[str, Any]
    positive_evaluated: str = ""
    negative_evaluated: str = ""
    loras: list[LoraInput] = field(default_factory=list)
```

That way it's more explicit and typed.
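A self-contained version of the proposed dataclass for illustration, with a hypothetical `LoraInput` stand-in (the real one lives in ai_diffusion/api.py):

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical LoraInput stand-in for this sketch.
@dataclass
class LoraInput:
    name: str
    strength: float = 1.0

@dataclass
class CustomWorkflowInput:
    workflow: dict
    params: dict[str, Any]
    positive_evaluated: str = ""
    negative_evaluated: str = ""
    loras: list[LoraInput] = field(default_factory=list)

# Evaluated prompts and extracted LoRAs travel as explicit, typed fields
# instead of fake "<style>/_prepared" entries hidden inside params.
inp = CustomWorkflowInput(workflow={}, params={"seed": 42})
inp.loras.append(LoraInput("detail-tweaker", 0.8))
```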
ai_diffusion/api.py (Outdated)

```python
positive_evaluated: str = ""
negative_evaluated: str = ""
loras: list[LoraInput] = field(default_factory=list)
style: Style | None = None
```
There are some test failures because we can't have Style as part of api structures.
One way to resolve this would be to use the existing fields WorkflowInput.models and WorkflowInput.sampling to pass in the style.get_models() and sampler stuff instead.
Something like this?

This should work. There seems to be a conflict now, please rebase onto main so the tests can run. If you still need to deal with the formatting issues, you can just run
Force-pushed from b64cdfc to 29f49c3
Features:
- Prompt and Style widgets sync across Generate/Live/Animation and now Graph workspaces
- Full prompt evaluation: wildcards, comments, layer extraction, style merging, LoRA extraction
- LoRAs applied to model output
- Error shown if workflow contains multiple KritaPromptStyle nodes (only one allowed since prompts sync to shared fields)
Force-pushed from ea16fa2 to 32b4040
Rebased, formatted, squashed the commits

Acly/comfyui-tooling-nodes#57
Closes #1380