Official Implementation of "PMP: Learning to Physically Interact with Environments using Part-wise Motion Priors" (SIGGRAPH 2023) (paper, video, talk)
- Assets
  - Deepmimic-MPL Humanoid
  - Objects for interaction
  - Retargeted motion data (check the license)
- Simulation Configuration Files
  - `.yaml` files for whole-body and hand-only gyms
  - Documentation about details
- Shell script to install all external dependencies
- Retargeting pipeline (Mixamo to Deepmimic-MPL Humanoid)
- Whole-body Gym: training a hand-equipped humanoid
  - Model (Train / Test)
  - Pretrained weights
  - Environments
- Hand-only Gym: training one hand to grab a bar
  - Model (Train / Test)
  - Pretrained weight
  - Expert trajectories
  - Environment
Note) I'm currently focusing mainly on other projects, so this repo will be updated slowly. If you need early access to the full implementation, please contact me through my personal website.
This code is based on Isaac Gym Preview 4.
Please run the installation code and create a conda environment following the instructions in Isaac Gym Preview 4. We assume the conda environment is named `pmp_env`.
Then, run the following commands:

```bash
conda activate pmp_env
cd pmp
pip install -e .
```
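If the setup succeeded, Isaac Gym should be importable inside `pmp_env`. As a quick, repo-independent sanity check (our suggestion, not a script shipped with this repo), you can try something like:

```python
# Minimal sanity check that Isaac Gym Preview 4 is installed correctly.
# This only acquires the gym interface and builds default sim parameters;
# it does not create a simulation and does not depend on any PMP code.
from isaacgym import gymapi

gym = gymapi.acquire_gym()
sim_params = gymapi.SimParams()
print("Isaac Gym acquired:", gym is not None)
```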
This code is based on the official release of IsaacGymEnvs.
In particular, it largely borrows the AMP implementation from the original codebase (paper, code).
Our whole-body agent is modified from the humanoid in Deepmimic. We replace the sphere-shaped hands of the original humanoid with the hand model from the Modular Prosthetic Limb (MPL).
We use Mixamo animation data to train the part-wise motion priors. We retarget the Mixamo animation data to our whole-body humanoid using a process similar to the one used in the original codebase.
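For readers unfamiliar with retargeting, the core idea can be illustrated with a toy sketch: map source (Mixamo) joint names to target (Deepmimic-MPL) joint names and transfer per-joint rotations frame by frame. This is not the pipeline released with this repo (which also has to handle bind-pose and body-proportion differences); the joint names and mapping below are hypothetical placeholders.

```python
# Illustrative sketch only -- NOT the repo's retargeting pipeline.
# Maps Mixamo joint names to (hypothetical) Deepmimic-MPL joint names and
# copies per-joint local rotations (quaternions) for a single frame.
from typing import Dict, List

# Hypothetical joint-name mapping; real skeletons have many more joints.
MIXAMO_TO_HUMANOID: Dict[str, str] = {
    "mixamorig:Hips": "pelvis",
    "mixamorig:Spine": "torso",
    "mixamorig:RightArm": "right_upper_arm",
    # ... remaining joint pairs would be filled in from the actual skeletons
}

def retarget_frame(src_rotations: Dict[str, List[float]]) -> Dict[str, List[float]]:
    """Copy local joint rotations from source joints to mapped target joints."""
    tgt_rotations: Dict[str, List[float]] = {}
    for src_joint, quat in src_rotations.items():
        tgt_joint = MIXAMO_TO_HUMANOID.get(src_joint)
        if tgt_joint is not None:
            # A real pipeline would also correct for bind-pose offsets here.
            tgt_rotations[tgt_joint] = quat
    return tgt_rotations
```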