
LatentMan: Generating Consistent Animated Characters using Image Diffusion Models

CVPRW 2024

(Figure: method overview)

Environment

This code was tested on Python 3.8 with CUDA 12.1 and PyTorch 2.3.

  • To set up the environment, create a Conda environment by running:
conda env create -f environment.yml
  • Follow the instructions for parts 2 and 3 in MDM's README.md to download the files required by the Motion-Diffusion-Model code located in external/MDM.

  • You can download some examples of generated motions into the workspace directory by running:

gdown "1IdaCPpRWrmRX5AVXXUXymwFVHXt2CNnW&confirm=t"
unzip workspace.zip
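
As an optional sanity check (not part of the official instructions), you can confirm inside the activated environment that the installed versions roughly match the tested configuration (Python 3.8, CUDA 12.1, PyTorch 2.3). The snippet below uses only standard PyTorch calls:

import torch

# Print installed versions; they should roughly match the tested setup
# (PyTorch 2.3 built against CUDA 12.1).
print("PyTorch:", torch.__version__)
print("CUDA build:", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))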

Getting Started

Please refer to getting_started.ipynb.

Acknowledgements

Parts of this code base are adapted from MDM, Detectron2, and MvDeCor.

Citation

If you use this code or parts of it, please cite our paper:

@inproceedings{eldesokey2024latentman,
  title={LATENTMAN: Generating Consistent Animated Characters using Image Diffusion Models},
  author={Eldesokey, Abdelrahman and Wonka, Peter},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={7510--7519},
  year={2024}
}
