MVOC: a training-free multiple video object composition method with diffusion models


🌐 Homepage | 📖 arXiv

This repo is the official PyTorch implementation of the paper "MVOC: a training-free multiple video object composition method with diffusion models".

Introduction

MVOC is a training-free multiple video object composition framework aimed at achieving visually harmonious and temporally consistent results.

Given multiple video objects (e.g. Background, Object1, Object2), our method renders the interactions between the objects while maintaining the motion and identity consistency of each object in the composited video.

▶️ Quick Start for MVOC

Environment

Clone this repo and prepare the Conda environment with the following commands:

git clone https://github.com/SobeyMIL/MVOC

cd MVOC
conda env create -f environment.yml
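
After the environment is created, activate it before running any scripts. The environment name below is a placeholder (it is not stated in this README); use the name defined in environment.yml:

conda activate mvoc  # "mvoc" is assumed; check the "name" field in environment.yml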

Pretrained model

We use i2vgen-xl to invert the videos and compose them in a training-free manner. Download it from Hugging Face and place it under i2vgen-xl/checkpoints.
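
One way to fetch the checkpoint is with the huggingface-cli tool (shipped with the huggingface_hub package). The repo id below is our assumption of the upstream model location; adjust it if the project points elsewhere:

huggingface-cli download ali-vilab/i2vgen-xl --local-dir i2vgen-xl/checkpoints  # place the weights under i2vgen-xl/checkpoints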

Video Composition

We provide the videos used in the paper; you can find them in the demo folder.

First, obtain the latent representations of the source videos; the inversion config file is provided at i2vgen-xl/configs/group_inversion/group_config.json.

Then run the following commands:

cd i2vgen-xl/scripts
bash run_group_ddim_inversion.sh
bash run_group_composition.sh

Results

Some composition results from this repo are shown below.

[Results table: input collage vs. our composited result for the demos boat_surf, crane_seal, duck_crane, monkey_swan, rider_deer, robot_cat, and seal_bird.]
🖊️ Citation

Please cite our paper if you use our code, data, models, or results:

@inproceedings{wang2024mvoc,
        title     = {MVOC: a training-free multiple video object composition method with diffusion models},
        author    = {Wei Wang and Yaosen Chen and Yuegen Liu and Qi Yuan and Shubin Yang and Yanru Zhang},
        year      = {2024},
        booktitle = {arXiv}
}

🎫 License

This project is released under the MIT License.

💞 Acknowledgements

The code is built upon the repositories below; we thank all the contributors for open-sourcing their work.
