ComfyUI-native nodes to run First Order Motion Model for Image Animation and its non-diffusion-based successors.
https://github.com/AliaksandrSiarohin/first-order-model
Now supports:
- Face Swapping using Motion Supervised co-part Segmentation
- Motion Representations for Articulated Animation
- Thin-Plate Spline Motion Model for Image Animation
More will come soon
- `relative_movement`: Relative keypoint displacement (inherits the object's proportions from the video)
- `relative_jacobian`: Only takes effect when `relative_movement` is on; must also be on to avoid heavy deformation of the face (in a freaky way)
- `adapt_movement_scale`: If disabled, will heavily distort the source face to match the driving face
- `find_best_frame`: Finds the driving frame that best matches the source. Splits the batch into two halves, with the first half reversed. Gives mixed results. Requires the `face-alignment` library.
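For intuition, `relative_movement` and `adapt_movement_scale` mirror the keypoint-normalization step in FOMM's demo code. The NumPy sketch below is simplified and illustrative only (the real code measures face size via a convex hull and also transforms Jacobians, which `relative_jacobian` controls; function names here are hypothetical):

```python
import numpy as np

def movement_scale(kp_source, kp_driving_initial):
    """Rough stand-in for FOMM's convex-hull area ratio (bounding box here)."""
    def area(kp):
        spans = kp.max(axis=0) - kp.min(axis=0)
        return float(spans[0] * spans[1])
    return np.sqrt(area(kp_source) / area(kp_driving_initial))

def normalize_kp(kp_source, kp_driving, kp_driving_initial,
                 relative_movement=True, adapt_movement_scale=True):
    """Apply the driving motion *relative to its first frame* to the source
    keypoints, optionally rescaled to the source face's size, instead of
    copying the driving keypoints' absolute positions."""
    scale = movement_scale(kp_source, kp_driving_initial) if adapt_movement_scale else 1.0
    if relative_movement:
        return kp_source + (kp_driving - kp_driving_initial) * scale
    return kp_driving
```

This is why disabling `adapt_movement_scale` distorts the source face: the raw driving displacement is applied at the driving face's scale, not the source's.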
- `blend_scale`: No idea; keeping it at the default of 1.0 seems to be fine
- `use_source_seg`: Whether to use the source's segmentation or the target's. May help if some of the target's segmentation regions are missing
- `hard_edges`: Whether to make the edges hard instead of feathered
- `use_face_parser`: For Seg-based models, may help with cleaning up residual background (should only use `15seg` with this). TODO: Additional cleanup of face_parser masks. Should definitely be used for FOMM models
- `viz_alpha`: Opacity of the segments in the visualization
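The `hard_edges` option amounts to binarizing the segmentation mask before compositing, rather than letting its fractional values feather the transition. A rough NumPy illustration (not the node's actual code; the threshold value is an assumption):

```python
import numpy as np

def composite(source, target, mask, hard_edges=False, threshold=0.5):
    """Blend a source region onto a target using a segmentation mask.

    mask: float array in [0, 1]. With hard_edges=True the mask is
    binarized first, giving crisp (possibly jagged) boundaries; otherwise
    intermediate mask values produce a feathered blend.
    """
    if hard_edges:
        mask = (mask >= threshold).astype(source.dtype)
    return mask * source + (1.0 - mask) * target
```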
Doesn't need any extra settings.
- `predict_mode`: Can be:
  - `relative`: Similar to FOMM's `relative_movement` and `adapt_movement_scale` set to True
  - `standard`: Similar to FOMM's `adapt_movement_scale` set to False
  - `avd`: Similar to `relative`, but may yield better, though more "jittery/jumpy", results
- `find_best_frame`: Same as FOMM
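For reference, `find_best_frame` follows the approach in FOMM's demo: extract facial landmarks for the source and every driving frame (in practice via the `face-alignment` library's detector, not shown here), normalize away translation and scale, and pick the frame with the smallest landmark distance to the source. A simplified sketch (FOMM normalizes by convex-hull area; plain vector-norm scaling is used here for brevity):

```python
import numpy as np

def normalize_landmarks(kp):
    """Remove translation and overall scale so only pose/expression differ."""
    kp = kp - kp.mean(axis=0, keepdims=True)
    return kp / np.linalg.norm(kp)

def find_best_frame(source_kp, driving_kps):
    """Index of the driving frame whose landmarks best match the source's."""
    src = normalize_landmarks(source_kp)
    dists = [np.sum((normalize_landmarks(kp) - src) ** 2) for kp in driving_kps]
    return int(np.argmin(dists))
```

Generation then runs forward and backward from that frame, which is why the node splits the batch into two halves with the first half reversed.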
- Clone the repo to `ComfyUI/custom_nodes/`:

```
git clone https://github.com/FuouM/ComfyUI-FirstOrderMM.git
```

- Install required dependencies:

```
pip install -r requirements.txt
```

Optional: Install `face-alignment` to use the `find_best_frame` feature:

```
pip install face-alignment
```
FOMM currently supports `vox` and `vox-adv`. The models must be manually downloaded from the original repository.
Part Swap currently supports Seg-based models and the FOMM models (`vox` and `vox-adv`):

- `vox-5segments`
- `vox-10segments`
- `vox-15segments`
- `vox-first-order` (partswap)

These models can be found in the original repository, Motion Supervised co-part Segmentation.
Place them in the `checkpoints` folder. It should look like this:

```
checkpoints/
├── place_checkpoints_here.txt
├── vox-adv-cpk.pth.tar
├── vox-cpk.pth.tar
├── vox-5segments.pth.tar
├── vox-10segments.pth.tar
├── vox-15segments.pth.tar
└── vox-first-order.pth.tar
```
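To sanity-check the folder before launching ComfyUI, a small helper like the following could report which expected checkpoints are still missing (this script is not part of the repo; it is purely illustrative):

```python
from pathlib import Path

EXPECTED = [
    "vox-adv-cpk.pth.tar",
    "vox-cpk.pth.tar",
    "vox-5segments.pth.tar",
    "vox-10segments.pth.tar",
    "vox-15segments.pth.tar",
    "vox-first-order.pth.tar",
]

def missing_checkpoints(checkpoint_dir="checkpoints", expected=EXPECTED):
    """Return the names of expected checkpoints not present in the folder."""
    present = {p.name for p in Path(checkpoint_dir).glob("*.pth.tar")}
    return [name for name in expected if name not in present]
```

Only the models you actually plan to use need to be present; the others will simply be reported as missing.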
For Part Swap, Face-Parsing is also supported (optional, and especially useful with the FOMM or `vox-first-order` models):

- `resnet18-5c106cde`: https://download.pytorch.org/models/resnet18-5c106cde.pth
- `79999_iter.pth`: https://github.com/zllrunning/face-makeup.PyTorch/tree/master/cp

Place them in the `face_parsing` folder:

```
face_parsing/
├── face_parsing_model.py
├── ...
├── resnet18-5c106cde.pth
└── 79999_iter.pth
```
For Articulate, download the model from the Pre-trained checkpoints section and place it at `articulate_module/models/vox256.pth`.

For Spline, download the model from the Pre-trained models section and place it at `spline_module/models/vox.pth.tar`.

To use `find_best_frame`, install `face-alignment`.