tucker666/Direct3DKinematicEstimation

Towards single camera human 3D-kinematics

This repository is currently under construction.

Installation

  1. Requirements

     Python 3.8.0 
     PyTorch 1.11.0
     OpenSim 4.3+        
    
  2. Python package

    Clone this repo and run the following:

     conda env create -f environment_setup.yml
    

    Activate the environment using

     conda activate d3ke
    
  3. OpenSim 4.3

    1. Download and Install OpenSim

    2. (On Windows) Install the Python API

      • In installation_folder/OpenSim 4.x/sdk/Python, run

          python setup_win_python38.py
        
          python -m pip install .
        
    3. (On other operating systems) Follow the instructions in the OpenSim documentation to set up the OpenSim scripting environment

    4. Copy all *.obj files from resources/opensim/geometry to <installation_folder>/OpenSim 4.x/Geometry

    Note: Scripts that import OpenSim have only been verified on Windows.
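Step 4 above can be scripted. Below is a minimal sketch using only the Python standard library; the source and destination paths in the usage comment are placeholders for your local checkout and OpenSim install, not paths the repository prescribes.

```python
# Sketch of step 4: copy the bundled geometry meshes (*.obj) into the
# OpenSim installation's Geometry folder. Paths are placeholders.
import shutil
from pathlib import Path

def copy_geometry(src_dir, dst_dir):
    """Copy every *.obj mesh from src_dir into dst_dir, creating dst_dir if needed."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for obj in sorted(Path(src_dir).glob("*.obj")):
        shutil.copy2(obj, dst / obj.name)  # copy2 preserves file metadata
        copied.append(obj.name)
    return copied

# Example (adjust both paths to your machine):
# copy_geometry("resources/opensim/geometry", "C:/OpenSim 4.3/Geometry")
```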

Dataset and SMPL+H models

  1. BMLmovi
    • Register to get access to the downloads section.
    • Download .avi videos of PG1 and PG2 cameras from the F round (F_PGX_Subject_X_L.avi).
    • Download Camera Parameters.tar.
    • Download v3d files (F_Subjects_1_45.tar).
  2. AMASS
    • Download SMPL+H body data of BMLmovi.
  3. SMPL+H Models
    • Register to get access to the downloads section.
    • Download the extended SMPL+H model (used in AMASS project).
  4. DMPLs
    • Register to get access to the downloads section.
    • Download DMPLs for AMASS.
  5. PASCAL Visual Object Classes (ONLY NECESSARY FOR TRAINING)
    • Download the training/validation data

Unpacking resources

  1. Unpack the downloaded SMPL and DMPL archives into ms_model_estimation/resources

  2. Unpack the downloaded AMASS data into the top-level folder resources/amass

  3. Unpack the F_Subjects_1_45 archive, then unpack the contents of all its subfolders into resources/V3D/F
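The unpacking steps above can be done with a short helper; this is a standard-library sketch, and the archive name in the usage comment is one of the downloads listed earlier, with the destination taken from step 3.

```python
# Sketch of the unpacking steps: extract a .tar archive into a target
# folder, creating the folder if it does not exist yet.
import tarfile
from pathlib import Path

def unpack(archive, dest):
    """Extract a tar archive into dest."""
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(dest)

# Example, matching step 3 above:
# unpack("F_Subjects_1_45.tar", "resources/V3D/F")
```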

OpenSim GT Generation

Run the generate_opensim_gt script:

python generate_opensim_gt.py

This process might take several hours!

Once generation is complete, the scaled OpenSim model and motion files can be found in resources/opensim/BMLmovi/BMLmovi.

Dataset Preparation

After the ground truth has been generated, the dataset needs to be prepared.

Run the prepare_dataset script and provide the location where the BMLMovi videos are stored:

python prepare_dataset.py --BMLMoviDir path/to/bmlmovi/videos

NOTE: to generate data for training, you should also provide the path to the Pascal VOC dataset:

python prepare_dataset.py --BMLMoviDir path/to/bmlmovi/videos --PascalDir path/to/pascal_voc/data

This process might again take several hours!
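For reference, the command-line interface documented above can be mirrored with a minimal argparse sketch. This only illustrates the two flags shown in this README; it is not the actual implementation of prepare_dataset.py.

```python
# Minimal argparse sketch of the documented prepare_dataset CLI.
# Only --BMLMoviDir and --PascalDir appear in the README; anything
# else the real script accepts is not modeled here.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Prepare the BMLmovi dataset.")
    parser.add_argument("--BMLMoviDir", required=True,
                        help="Folder containing the BMLmovi .avi videos")
    parser.add_argument("--PascalDir", default=None,
                        help="Pascal VOC folder (only needed when generating training data)")
    return parser

# Example:
# args = build_parser().parse_args(["--BMLMoviDir", "path/to/bmlmovi/videos"])
```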

Evaluation

Download models

Run inference

Run the run_inference script:

python run_inference.py

This will use D3KE to run predictions on the subset of BMLMovi used for testing.

Training
