continue-revolution/sd-webui-animatediff

AnimateDiff for Stable Diffusion WebUI

I have recently added a non-commercial license to this extension. If you want to use this extension for commercial purposes, please contact me via email.

This extension integrates AnimateDiff, with CLI support, into AUTOMATIC1111 Stable Diffusion WebUI alongside ControlNet, forming an easy-to-use AI video toolkit. Once the extension is enabled, you can generate GIFs in exactly the same way as you generate images.

This extension implements AnimateDiff in a different way: it inserts motion modules into the UNet at runtime, so you do not need to reload your model weights if you don't want to.
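The runtime-injection idea can be sketched in plain Python. This is a toy illustration only, not the extension's actual code: `MotionModule` and `Block` are hypothetical stand-ins for a temporal layer and a UNet block. The point is that motion modules are spliced into a block's layer list at runtime and can be removed again, while the base layers (the loaded model weights) are never touched.

```python
class MotionModule:
    """Toy stand-in for an AnimateDiff temporal layer (identity here)."""
    def __call__(self, x):
        return x

class Block:
    """Minimal stand-in for a UNet block: a pipeline of layer callables."""
    def __init__(self, layers):
        self.layers = list(layers)

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

def inject(block):
    """Splice a motion module into the block at runtime."""
    block.layers.append(MotionModule())

def eject(block):
    """Remove motion modules in place; the base layers are never reloaded."""
    block.layers = [l for l in block.layers if not isinstance(l, MotionModule)]
```

Because injection only edits the layer list, ejecting the motion modules restores the original block exactly, which is why switching animation on and off does not require reloading the checkpoint.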

You might also be interested in another extension I created: Segment Anything for Stable Diffusion WebUI, which could be quite useful for inpainting.

Forge users should either check out the forge/master branch of this repository or use sd-forge-animatediff. The two will be kept in sync.

Table of Contents

Update | Future Plan | Model Zoo | Documentation | Tutorial | Thanks | Star History | Sponsor

Update

  • v2.0.0-a in 03/02/2024: The whole extension has been reworked to make it easier to maintain.
    • Prerequisite: WebUI >= 1.8.0 & ControlNet >=1.1.441 & PyTorch >= 2.0.0
    • New feature:
      • ControlNet inpaint / IP-Adapter prompt travel / SparseCtrl / ControlNet keyframe, see ControlNet V2V
      • FreeInit, see FreeInit
    • Minor: mm filter based on sd version (click refresh button if you switch between SD1.5 and SDXL) / display extension version in infotext
    • Breaking change: You must use the Motion LoRA, Hotshot-XL, and AnimateDiff V3 Motion Adapter models from my HuggingFace repo.
  • v2.0.1-a in 07/12/2024: Support AnimateLCM from MMLab@CUHK. See here for instructions.
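The version prerequisites above are ordinary dotted-version comparisons. As a minimal sketch, the check could look like the following; the `REQUIREMENTS` mapping just restates the minimum versions quoted above, and the checker itself is hypothetical, not part of the extension:

```python
# Minimum versions from the v2.0.0-a prerequisites above.
REQUIREMENTS = {"webui": "1.8.0", "controlnet": "1.1.441", "torch": "2.0.0"}

def parse_version(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def meets_requirements(installed: dict) -> dict:
    """Map each component name to True if the installed version
    satisfies the minimum required version."""
    return {
        name: parse_version(installed[name]) >= parse_version(minimum)
        for name, minimum in REQUIREMENTS.items()
    }
```

For example, `meets_requirements({"webui": "1.9.0", "controlnet": "1.1.440", "torch": "2.1.2"})` would flag only `controlnet` as too old.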

Future Plan

Although OpenAI Sora is far better at following complex text prompts and generating complex scenes, we believe that OpenAI will NOT open-source Sora or any of the other products they have released recently. My current plan is to continue developing this extension until an open-source video model is released that can generate complex scenes, is easy to customize, and has a good ecosystem like SD1.5's.

We will try our best to bring interesting research into both WebUI and Forge for as long as we can, though not all of it will be implemented. You are welcome to submit a feature request if you find something interesting. We are also open to learning from other, equivalent software.

That said, due to the notorious difficulty of maintaining sd-webui-controlnet, we do NOT plan to implement ANY new research into WebUI if it touches "reference control", such as Magic Animate; such features will be Forge-only. Some advanced features in ControlNet Forge Integrated, such as ControlNet per-frame mask, will also be Forge-only. I really hope I could find the bandwidth to rework sd-webui-controlnet, but it would require a huge amount of time.

Model Zoo

I am maintaining a HuggingFace repo that provides all official models in fp16 & safetensors format. You are highly recommended to use my links, and you MUST use them to download the Motion LoRA, Hotshot-XL, and AnimateDiff V3 Motion Adapter models. For all other models, you may still use the old links if you want.

Documentation

Tutorial

There are a lot of wonderful video tutorials on YouTube and bilibili; you should check those out for now. A series of updates is on the way, and I don't want to record my own tutorial before I am satisfied with the available features. An official tutorial will come once I am.

Thanks

We thank all developers and community users who contribute to this repository in many ways.

Star History

Star History Chart

Sponsor

You can sponsor me via WeChat, AliPay or PayPal. You can also support me via ko-fi or afdian.
