MoLA: Motion Generation and Editing with Latent Diffusion Enhanced by Adversarial Training

This repository is the official implementation of "MoLA: Motion Generation and Editing with Latent Diffusion Enhanced by Adversarial Training".

Setup

Environment

Install the packages in requirements.txt.

pip install -r requirements.txt

We tested our code with Python 3.8.10 and PyTorch 2.1.0.
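
As a quick sanity check of the environment, the minimal Python snippet below prints the interpreter and PyTorch versions so they can be compared against the tested versions above; matching them exactly is not necessarily required.

import sys
import torch

# Versions the code was tested with: Python 3.8.10, PyTorch 2.1.0.
print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())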

Dependencies

Run the following scripts to download the required dependency materials:

bash prepare/download_smpl_model.sh
bash prepare/prepare_clip.sh
bash prepare/download_t2m_evaluators.sh

Dataset

Please refer to HumanML3D for the text-to-motion dataset setup. Then copy the resulting dataset into this repository:

cp -r ../HumanML3D/HumanML3D ./datasets/humanml3d
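
The sketch below is an optional check that the copied dataset is in place. It assumes the standard processed HumanML3D layout (new_joint_vecs, texts, Mean.npy, Std.npy, and the split lists); adjust the names if your copy differs.

import os

# Assumed standard HumanML3D layout; adjust if your copy differs.
root = "./datasets/humanml3d"
for name in ["new_joint_vecs", "texts", "Mean.npy", "Std.npy", "train.txt", "val.txt", "test.txt"]:
    path = os.path.join(root, name)
    print(("OK      " if os.path.exists(path) else "MISSING ") + path)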

Training

The training setup can be adjusted in the config files (*.yaml) in /configs.

Training Stage 1 (VAE + SAN):

Run the following script:

python train.py --cfg ./configs/config_stage1.yaml --cfg_assets ./configs/assets.yaml --nodebug

Training Stage 2 (Conditional motion latent diffusion):

Run the following script:

python train.py --cfg ./configs/config_stage2.yaml --cfg_assets ./configs/assets.yaml --nodebug

Evaluation

Please first set the path of the trained model checkpoint in TEST.CHECKPOINT in config_stage1.yaml and config_stage2.yaml.
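
If you prefer to set the checkpoint path programmatically, the hedged sketch below patches it with PyYAML. The nested TEST -> CHECKPOINT layout is inferred from the key name above, and the checkpoint path is a hypothetical placeholder; note that yaml.safe_dump discards comments, so editing the file by hand may be preferable.

import yaml

cfg_path = "./configs/config_stage2.yaml"  # likewise for config_stage1.yaml
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

# Assumes a nested TEST: CHECKPOINT: entry; the checkpoint path below is a placeholder.
cfg["TEST"] = cfg.get("TEST") or {}
cfg["TEST"]["CHECKPOINT"] = "./path/to/your_trained_checkpoint.ckpt"

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)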

Stage 1:

To evaluate the reconstruction performance of the stage 1 model, run the following command:

python test.py --cfg ./configs/config_stage1.yaml --cfg_assets ./configs/assets.yaml 

Stage 2:

To evaluate the motion generation performance of the stage 2 model, run the following command:

python test.py --cfg ./configs/config_stage2.yaml --cfg_assets ./configs/assets.yaml 

Visualizing generated samples

We support a text file (for text-to-motion) and an npy file (for the control signal in motion editing) as input. The generated/edited motions are saved as npy files.

Text-to-Motion

python visualize_test.py --cfg ./configs/config_stage2.yaml --cfg_assets ./configs/assets.yaml --example ./demo/example.txt
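
The generated motions are written as npy files, so they can be inspected with NumPy as in the sketch below. The file name is a hypothetical placeholder; the actual output location depends on the folder settings in the configs.

import numpy as np

# Hypothetical output path; check the output folder configured in the yaml files.
motion = np.load("./results/example_generated_motion.npy")
print("generated motion array:", motion.shape, motion.dtype)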

Motion editing

python visualize_test.py --cfg ./configs/config_stage2.yaml --cfg_assets ./configs/assets.yaml --example ./demo/example.txt --editing --control ./demo/control_example_start_end.npy
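
The expected control-signal format is easiest to learn from the provided demo file. The sketch below simply loads ./demo/control_example_start_end.npy and prints its shape and dtype, which can then serve as a template for writing custom control signals with numpy.save.

import numpy as np

# Inspect the provided demo control signal to see the expected array layout.
control = np.load("./demo/control_example_start_end.npy")
print("control signal array:", control.shape, control.dtype)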

Citation

@article{uchida2024mola,
  title={MoLA: Motion Generation and Editing with Latent Diffusion Enhanced by Adversarial Training},
  author={Uchida, Kengo and Shibuya, Takashi and Takida, Yuhta and Murata, Naoki and Takahashi, Shusuke and Mitsufuji, Yuki},
  journal={arXiv preprint arXiv:2406.01867},
  year={2024}
}

Reference

Part of the code is borrowed from the following repositories. We would like to thank the authors of these repos for their excellent work: MLD, HumanML3D, MPGD, SAN, T2M-GPT.