OmniTrax - Tutorial : Pose-Estimation


OmniTrax is a deep learning-driven multi-animal tracking and pose-estimation tool. It combines detection-based buffer-and-recover tracking with Blender's internal Motion Tracking pipeline to streamline the annotation and analysis of large video files with hundreds of individuals. Integrating DeepLabCut-Live into this pipeline additionally makes it possible to run marker-less pose-estimation on virtually arbitrary numbers of animals, leveraging the existing DLC Model Zoo as well as our own custom-trained networks.

Alongside OmniTrax, we offer a selection of example video footage and trained networks. To curate your own datasets, as well as train further custom networks, refer to the official YOLO as well as DeepLabCut documentation.

Tutorial : Pose-Estimation

This tutorial assumes that you have successfully installed OmniTrax and have enabled the OmniTrax Add-on.

This tutorial is divided into two separate sections : Single-animal pose-estimation & Multi-animal pose-estimation. OmniTrax is primarily tailored to performing analysis of large groups of animals in discontinuous settings, utilising a two-step approach of (1) tracking all individuals and (2) performing pose-estimation on each returned ROI (region of interest).

To benefit maximally from OmniTrax, both a trained detector and a trained pose-estimator are required. For the provided networks, we aim to minimise manual labelling effort by training them on synthetically generated data.

Pose-estimation and skeleton overlay example (trained on synthetic data)

Preparing a trained DeepLabCut model

When using your own DeepLabCut model, all you need to do is export the trained model and enter the resulting folder path as the DLC network path in the OmniTrax Panel.

Ensure the model has been trained with TensorFlow version 2.0 or higher to be compatible with the respective version of deeplabcut-live.

Open an instance of Python and run the following commands (with your DeepLabCut environment activated):

```python
import deeplabcut

# export the trained network so it can be loaded by deeplabcut-live
cfg_path = "path/to/your/config.yaml"
deeplabcut.export_model(cfg_path, iteration=None, shuffle=1,
                        trainingsetindex=0, snapshotindex=None)
```
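If the export fails or the model later refuses to load, a quick sanity check is to confirm the TensorFlow version of the active environment:

```python
# confirm the active environment runs TensorFlow 2.x,
# as required by deeplabcut-live
import tensorflow as tf

print(tf.__version__)  # should print 2.x
```

By default, deeplabcut.export_model writes the frozen network to an exported-models subdirectory inside your DeepLabCut project folder; the model-specific folder in there (containing pose_cfg.yaml and the snapshot files) is the path you will enter below.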

Enter the folder path to the exported model as the DLC network path in the OmniTrax/Pose-Estimation (DLC) panel.

NOTE : Curating the required datasets and training a pose-estimator is beyond the scope of this introduction. For further information, refer to the official documentation of DeepLabCut.

System Console

We recommend keeping the Blender System Console open while using OmniTrax to monitor the tracking progression and make spotting potential issues easier.

Simply click on Window > Toggle System Console to open it in a separate window (repeat the process to close it again).

Single-animal pose-estimation

Requirements

  • trained DeepLabCut model for full-frame pose-estimation
  • [optional]: hand-annotated (or auto-tracked) ROI to run inference on a subsection of the video

Used in this example

Setup and Inference

To run pose-estimation inference on individual animals across the full frame, you only need to specify the path to your exported trained model, as you will not need to supply sub-ROIs. You can therefore disregard the options for constant (input) detection sizes as well as the Pose (input) frame size (px), which will default to the original dimensions of the loaded video.

Keep in mind that, given OmniTrax's focus on multi-animal applications, there are limited benefits (apart from ease of installation) to running full-frame pose-estimation on single animals within OmniTrax rather than in a regular DeepLabCut build.
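For context, OmniTrax delegates the actual inference to deeplabcut-live. A minimal sketch of what full-frame inference looks like outside of Blender (the paths are placeholders; OpenCV is assumed for video reading):

```python
# full-frame pose-estimation with deeplabcut-live; the model directory
# is the same exported-model folder used as the DLC network path
import cv2
from dlclive import DLCLive, Processor

dlc_live = DLCLive("path/to/exported-model", processor=Processor())

cap = cv2.VideoCapture("path/to/video.mp4")
ret, frame = cap.read()
dlc_live.init_inference(frame)  # first frame initialises the network

while ret:
    pose = dlc_live.get_pose(frame)  # (n_keypoints, 3): x, y, confidence
    ret, frame = cap.read()

cap.release()
```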

  • DLC network path : Path to the DIRECTORY of your trained and exported DLC network, where your pose_cfg.yaml and snapshot files are stored. To enable plotting the defined skeleton as an overlay, simply include (a copy of) your original config.yaml file in the same folder. OmniTrax reads the skeleton configuration from the config.yaml file directly, so ensure that the naming conventions in config.yaml match those in pose_cfg.yaml.
  • Constant (input) detection sizes : If enabled, enforces constant input ROIs, as defined by the Pose (input) frame size (px) setting. If not enabled, the tracking marker bounding box will determine the input ROI. NOTE : this option has no effect in [full frame] mode.
  • Pose (input) frame size (px) : Constant detection size in pixels. All ROIs will be rescaled and padded, if necessary. NOTE : this option will have no effect in [full frame] mode.
  • pcutoff (minimum key point confidence) : Predicted key points with a confidence below this threshold will be discarded during pose-estimation (see the sketch at the end of this section).
  • Visualisation :
    • Plot skeleton : Plot the skeleton defined in the config.yaml file based on the detected landmarks. To enable plotting the defined skeleton as an overlay, simply include your original config.yaml file in the same folder. OmniTrax reads the skeleton configuration from the config.yaml file directly, so ensure that the naming conventions in config.yaml match those in pose_cfg.yaml.
    • Keypoint marker size : Size of marker points (in pixels) displayed in pose-estimation preview
    • Skeleton line thickness : Line width of skeleton bones (in pixels) displayed in pose-estimation preview
    • Display label names : Display label names as an overlay in pose-estimation preview
  • Run Pose-Estimation:
    • Export pose-estimation video : Save the video with tracked overlay to the location of the input video.
    • Export pose-estimation data : Write the estimated pose data, i.e. landmark locations in (relative) pixel space, to disk.

When you have completed configuring the pose-estimation process, click on ESTIMATE POSES [full frame].
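The pcutoff threshold amounts to masking landmarks whose predicted confidence is too low. A short illustrative sketch (the array values are made up, and OmniTrax applies this filtering internally):

```python
# discard keypoints whose confidence falls below pcutoff;
# pose has shape (n_keypoints, 3) with columns x, y, confidence
import numpy as np

pcutoff = 0.5  # minimum key point confidence

pose = np.array([[120.0, 85.0, 0.93],
                 [131.5, 97.2, 0.21],   # low confidence -> discarded
                 [110.8, 60.4, 0.77]])

pose[pose[:, 2] < pcutoff, :2] = np.nan  # mask x/y of uncertain keypoints
```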

Multi-animal pose-estimation

Requirements

  • trained YOLO model for automated buffer-and-recover tracking and producing ROIs
    • alternatively: hand-annotated (or blender-tracked) ROI(s) to run inference on subsection(s) of the video
  • trained DeepLabCut model for (cropped frame) pose-estimation

Used in this example

Setup and Inference

To run pose-estimation inference on multiple animals, you will first need to track the footage to provide ROIs for all relevant individuals. Refer to our tracking tutorial for an in-depth guide on how to set up the automated buffer-and-recover tracker.

When using Constant detection sizes, make sure they are sufficiently large for the resulting ROIs to include all body parts you wish to consider in the pose-estimation step. Once you have finished tracking the video footage, continue in the Pose Estimation (DLC) panel:

  • DLC network path : Path to the DIRECTORY of your trained and exported DLC network, where your pose_cfg.yaml and snapshot files are stored. To enable plotting the defined skeleton as an overlay, simply include (a copy of) your original config.yaml file in the same folder. OmniTrax reads the skeleton configuration from the config.yaml file directly, so ensure that the naming conventions in config.yaml match those in pose_cfg.yaml.
  • Constant (input) detection sizes : If enabled, enforces constant input ROIs, as defined by the Pose (input) frame size (px) setting. If not enabled, the tracking marker bounding box will determine the input ROI. NOTE : this option has no effect in [full frame] mode.
  • Pose (input) frame size (px) : Constant detection size in pixels. All ROIs will be rescaled and padded, if necessary.
  • pcutoff (minimum key point confidence) : Predicted key points with a confidence below this threshold will be discarded during pose-estimation.
  • Visualisation :
    • Plot skeleton : Plot the skeleton defined in the config.yaml file based on the detected landmarks. To enable plotting the defined skeleton as an overlay, simply include your original config.yaml file in the same folder. OmniTrax reads the skeleton configuration from the config.yaml file directly, so ensure that the naming conventions in config.yaml match those in pose_cfg.yaml.
    • Keypoint marker size : Size of marker points (in pixels) displayed in pose-estimation preview
    • Skeleton line thickness : Line width of skeleton bones (in pixels) displayed in pose-estimation preview
    • Display label names : Display label names as an overlay in pose-estimation preview
  • Run Pose-Estimation:
    • Export pose-estimation video : Save the (cropped) video(s) with tracked overlay to the location of the input video.
    • Export pose-estimation data : Write the estimated pose data, i.e. landmark locations in (relative) pixel space, to disk.

When you have completed configuring the pose-estimation process, click on ESTIMATE POSES, and OmniTrax will run inference on each extracted ROI defined in the tracking step (see the sketch below).
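Conceptually, this step crops a (constant-size) window around each tracked marker, runs pose-estimation on the crop, and maps the resulting keypoints back into full-frame coordinates. A simplified sketch of that per-ROI logic; the function and variable names are illustrative, not OmniTrax's actual implementation:

```python
# per-ROI pose-estimation sketch: crop a constant-size window around a
# tracked centre, estimate the pose, map keypoints back to the frame;
# names (centre_x, centre_y, roi_size, dlc_live) are illustrative only
import numpy as np

def pose_for_roi(frame, centre_x, centre_y, roi_size, dlc_live):
    h, w = frame.shape[:2]
    half = roi_size // 2

    # clamp the crop window to the frame boundaries
    x0, y0 = max(centre_x - half, 0), max(centre_y - half, 0)
    x1, y1 = min(centre_x + half, w), min(centre_y + half, h)
    crop = frame[y0:y1, x0:x1]

    # pad to the constant detection size if the crop hit a frame edge
    padded = np.zeros((roi_size, roi_size, 3), dtype=frame.dtype)
    padded[: crop.shape[0], : crop.shape[1]] = crop

    pose = dlc_live.get_pose(padded)  # (n_keypoints, 3): x, y, confidence
    pose[:, 0] += x0  # shift keypoints back into full-frame coordinates
    pose[:, 1] += y0
    return pose
```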


Other examples

Of course, as long as you have a suitable trained model, whether one of your own, one from our list of trained networks, or one from the emerging model zoos, you can run pose-estimation on just about any footage.

[Full frame] pose-estimation, using the converted full_human model, which you can download from our list of trained networks.

License

© Fabian Plum, 2021. MIT License