Summary of Community Workflow Variants (Under Test), Tutorials, and Tips #51

@wymanCV

Description

Dear all,

Huge thanks to @kijai and the whole community for their warmth, love and support!

You can find the latest updates on the native workflow in the WanVideoWrapper Issue, SVI Section: json link. If you don’t want to wait, you can try out this version of the workflow. Everything is now almost working as expected! After it has been thoroughly tested and further optimized, we will add the final version to this repo. :)

According to user feedback, we’ve found that SVI 2.0 Pro not only supports single long-shot videos, but can also produce multi-shot videos with a consistent character ID. You can find some demo videos generated by users with this workflow on our GitHub repo's main page.

More interesting aspects of SVI 2.0 Pro are still under exploration. If you have anything cool, such as videos or workflows, please feel free to share it with us. However, since we are not ComfyUI workflow experts, if you have feature requests, please leave them in the WanVideoWrapper Issue, SVI Section. Thanks!

Community Deployment

  • ❤️ Big thanks to @empiriolabsai for their support in bringing SVI-2.0 Pro to the Poe platform. Explore it here. You can interact with it via the Poe chat interface or integrate it through their API.

Some Tutorials (some in Chinese, sorry)

  • ❤️ Big thanks to the amazing Youtuber @AI Search for his fantastic SVI tutorial [Link]

  • ❤️ Big thanks to the amazing Youtuber @ComfyUI Workflow Blog for their tutorial on generating 40-second, highly dynamic videos without any color degradation. [Link]

  • ❤️ Big thanks to the amazing Bilibili creator @AI Aiwood for his three amazing SVI tutorials for long-shot videos ([Link]), multi-shot videos ([Link]), and video extension ([Link])!

  • ❤️ Big thanks to the amazing Bilibili creators @AI 与AI同行1996 for his 1-min stress test of SVI without color drift, @AI绘视玩家 for his stress test of storytelling long videos, and @三当家AI for testing different Wan base model variants, as well as the videos from the amazing Youtuber @Jaevlon.

Community Tips for Easier, Better Video Generation

Based on community feedback, the workflow version of SVI can be difficult to get right on the first try. However, once you become familiar with its characteristics and best practices, it can produce truly amazing videos; the SVI workflow just needs some patience. We’ve therefore collected and summarized a few tips and important notes below for reference.

PS: When configured properly, SVI can generate 1-minute videos with no noticeable color degradation, based on the blogger’s and our stress tests [Link]. In our 1-minute demos there is no color degradation or significant slow motion. If you see any odd color shift within the first 20 seconds, it is most likely a workflow-setting problem.

  • 💡 Slow Motion: Based on community feedback, slow motion can often be mitigated by:

    • Use the optimal resolution (480p / 480×832 horizontal) (key factor; consistent with the training data)

    • Reduce or disable LightX2V (key factor)

    • Generate ultra-long videos and save with high fps (e.g., 24 fps with 10 clips works well)

    • Increase sampling steps (key factor; works well)

    • Use FP16 models

    • Use the smooth version of WAN

    • Try wallen0322’s workflow (or other community workflows)

    • Strengthen/refine prompts with an LLM (make each segment’s motion more explicit and continuous)

    • From Reddit about prompt enhancement:

      "I initially had issues with slow motion, but you just need to describe enough consecutive motion per segment. 'dolphin creature walking through the field' = slow motion, but 'dolphin creature sniffs the dandelion for a moment then flaps its wings menacingly as it backs up as if suspicious'..."

    • From our GitHub issue about fp16+more sampling:

      "I've been able to create some great things already and I do not have the slow motion issue; using WAN 2.2 high fp16 with a decent amount of steps and only a small portion with LightX2V."

  • 💡 🚨🚨🚨[12-30-2025] Random Seed: Please ensure the random seeds for different clips are different! See here!

  • 💡[12-31-2025] LightX2V LoRA: Using Wan 2.2 lightx2v or Wan 2.1 480p lightx2v can generate 1-min 480p videos without noticeable color drift. But using Wan 2.1 480p lightx2v for 720p videos may lead to drift. Check this out.

  • 💡🚨[01-01-2026] Video Resolution: For vertical videos, try this workflow, which works much better. 480p is much more stable.

  • 💡🚨[01-01-2026] Model Quantization: If possible, please use the FP16 model. Otherwise, we cannot guarantee long-term quality, since our LoRA was trained with FP16.

  • 💡🚨🚨🚨[01-02-2026] Frame Number: Please use 81 frames per clip. The current model does not support 121 frames, which will lead to color degradation!

  • 💡[01-03-2026] Text Following: It’s recommended to describe each clip in detail (like a caption) rather than only specifying the motion. For example, you can use a state + motion format, where the state matches the final frame/state of the previous clip. Using an LLM to strongly enhance prompts is recommended.
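To make the tips above concrete, here is a minimal sketch of how the per-clip settings fit together: distinct seeds per clip, 81 frames, 480×832 resolution, and "state + motion" prompts. This is illustrative Python only; `build_clip_configs` and all field names are hypothetical, not a real ComfyUI or SVI API.

```python
# Illustrative sketch only: how the per-clip community tips fit together.
# None of these names come from a real ComfyUI/SVI API.

def build_clip_configs(states, motions, base_seed=12345):
    """One config per clip, following the community tips:
    - 480x832 resolution (matches the training data)
    - 81 frames per clip (121 frames causes color degradation)
    - a DIFFERENT random seed for every clip
    - "state + motion" prompts, where each state continues the
      final frame/state of the previous clip
    """
    configs = []
    for i, (state, motion) in enumerate(zip(states, motions)):
        configs.append({
            "width": 480,
            "height": 832,
            "num_frames": 81,               # per-clip frame count
            "seed": base_seed + i,          # distinct seed per clip
            "prompt": f"{state} {motion}",  # state first, then explicit motion
        })
    return configs

# Example: two consecutive clips, each state picking up where the last clip ended.
states = [
    "A dolphin-like creature stands in a sunlit field.",
    "The creature is beside a dandelion, wings half-raised.",
]
motions = [
    "It walks forward and sniffs a dandelion for a moment.",
    "It flaps its wings menacingly and backs up as if suspicious.",
]
clips = build_clip_configs(states, motions)
```

The key point is that the seed varies per clip while the frame count and resolution stay fixed, and each prompt describes both the inherited state and explicit, continuous motion.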

Reddit Post

  • 🗪 40-second video without color shift Link
  • 🗪 Avatar 4 test and 1+ minute test without color shift (first comment) Link

Community Workflows Under Test

Here we list, and will keep updating, a summary of different workflow variants from community contributors (not fully tested; for development use). You can always come back to this issue to check for new updates!

P.S. Since SVI is maintained by a small research team and we don’t have much experience with ComfyUI workflows, it’s difficult for us to optimize, propose, and thoroughly test every workflow ourselves. Sorry about that. But we will try our best to assist.

  • 🛠️✅[12-27-2025] Native workflow: Big thanks to @kijai! [Link]. This needs the updated KJNode.

  • 🛠️ [12-28-2025] Supports last-frame control: Big thanks to siraxe! [Link]. This cannot handle large differences between the first and last frames. We will make SVI support last-frame control in the next update.

  • 🛠️ [12-29-2025] Supports smooth multi-shot videos with more reference frames! Big thanks to TTT-ux-max! [Runninghub Link]

  • 🛠️ ✅[12-29-2025] Supports for-loop generation! Big thanks to RuneGjerde! [Link]

  • 🛠️ [12-29-2025] Improves video dynamics! Big thanks to wallen0322! [Link], node, and [PR]. Reddit Post. This might affect consistency.

  • 🛠️[12-29-2025] Supports video extension! Big thanks to aiwood! [Runninghub Link] PS: We found that the implementation potentially has a minor issue (it should use the last latent of VAE(T2V video) instead of VAE(the last five frames of the T2V video)), leading to some color shift in the first clip.

  • 🛠️[12-30-2025] Big thanks to Jaevlon! [Link]

  • 🛠️✅[01-01-2026] We received an optimized workflow (especially for vertical videos) from an amazing contributor who prefers to remain anonymous (mmmmmn), and they asked us to share/upload it to help the community users. You can find the workflow link and demo video here: link. Big thanks to this warm-hearted anonymous contributor!

  • 🛠️🔥[01-03-2026] Continuous generation with SVI native GGUF: Link-v0.9, Link-v1.0

  • 🛠️🔥[01-04-2026] Workflow from this tutorial on generating 40-second, highly dynamic videos without any color degradation or slow motion. [Link]

  • 🛠️🔥[01-12-2026] Big thanks to darksidewalker for a deeply optimized native workflow! [Link]

There are still lots of amazing workflows; I'll come back soon...

Finally, we hope SVI 2.0 Pro can bring some happiness to your New Year holiday!

Best,
WymanCV
