
Installation & Testing


Hardware requirements

Your hardware should comply with the DeepLabCut/SLEAP/DPK requirements to be able to run DeepLabStream. However, DeepLabStream requires more processing power than DeepLabCut/SLEAP/DPK to be easy and convenient to use. We really do not recommend using this software without a GPU, even though DeepLabCut supports it.

In general, you need:

  • CUDA-compatible Nvidia video card (Nvidia GTX 1080 or better is recommended);
  • CPU with at least 4 cores to properly utilize parallelization;
  • A decent amount of RAM, at least 16 GB.

We tested DeepLabStream with different setups and would say that a minimum reasonable configuration would be:

CPU: Intel Core i7-7700K CPU @ 4.20GHz
RAM: 32GB DDR4
GPU: Nvidia GeForce GTX 1050 (3GB)

However, with our recommended setup we achieved a constant 30 FPS with a two-camera setup at 848x480 resolution:

CPU: Intel Core i7-9700K @ 3.60GHz
RAM: 64 GB DDR4
GPU: Nvidia GeForce RTX 2080 (12GB) 

Preparations

In short, you need to be able to run DeepLabCut/DeepPoseKit/SLEAP on your system before installing and running DeepLabStream.

DeepLabStream was originally designed with DeepLabCut v1.11 in mind and also works with newer versions (e.g. maDLC). However, for ease of installation and compatibility, we recommend DLC-LIVE if you want a simple setup using DLC-based networks, and SLEAP if you are interested in multi-animal experiments. We also offer an experimental integration of DeepPoseKit networks (StackedDenseNet, StackedHourGlass) and LEAP.

All versions and networks trained with them worked fine in our tests (12/2020). For installation of your pose estimation package of choice, please also refer to the main GitHub page of the original package; links are given in each section below.

WE RECOMMEND USING DIFFERENT ENVIRONMENTS FOR EACH POSE ESTIMATION NETWORK PROVIDER!

Using SLEAP:

Due to SLEAP's dependencies, we recommend installing DLStream on top of SLEAP rather than the other way around. We have had reports of conflicting OpenCV versions that are resolved by this.

pip install sleap

You will need at least version 1.13 for DLStream; however, we recommend using the latest stable version.

If you want to use DLC-Live based networks (meaning networks exported to work with DLC-Live) please install the dlclive package along with tensorflow:

pip install deeplabcut-live

If you want to use DeepPoseKit derived networks (meaning networks trained and exported by DeepPoseKit: StackedDenseNet, StackedHourGlass or LEAP) please install the deepposekit package along with tensorflow:

pip install deepposekit

Installing Tensorflow (necessary for DLC, DPK, DLC-Live):

General tip: if you have not yet gone through the steps of installing tensorflow, a simple solution (if you are on Windows and use Anaconda environments) is to install the tensorflow version of your choice by simply using:

conda install tensorflow-gpu==VERSION.NUMBER

This will install tensorflow along with CUDA and cuDNN (which are necessary to use tensorflow with your GPU) inside the environment. Be aware that not all pose estimation network providers have the same tensorflow requirements, so installing them in separate environments will save you a lot of headaches.
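For example, to match the tensorflow 1.12 requirement used for DLC in the checklist below, this could look like:

conda install tensorflow-gpu==1.12.0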

Using the original DeepLabCut:

Here is the full instruction by DeepLabCut, but we provide a short version/checklist below.

  1. Make sure that you have the proper Nvidia drivers installed;
  2. Install CUDA by Nvidia. Please refer to this table to ensure you have the correct driver/CUDA combination;
  3. Verify your CUDA installation on Windows or Linux;
  4. Create an environment. We strongly recommend using the environments provided by DeepLabCut;
  5. If you are not using the DeepLabCut-provided environments for step 4, install cuDNN. Otherwise, skip this step;
  6. Make sure that Tensorflow is installed in your environment. Manual installation goes as follows:
    pip install tensorflow-gpu==1.12
    Note that various problems can arise here, depending on your software and hardware setup;
  7. Verify that your TensorFlow is working correctly by using this (Linux) or this (Windows) manual. The latter also provides a great overview of the whole process, including the previous six steps. A short check is also sketched below this list.
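If you prefer a quick sanity check from within Python, a minimal sketch (assuming a TensorFlow 1.x installation) could look like this:

# quick check that TensorFlow sees your GPU (TensorFlow 1.x style)
import tensorflow as tf

print(tf.__version__)
# prints True if CUDA/cuDNN and the GPU are set up correctly
print(tf.test.is_gpu_available())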

DeepLabStream (DLStream) installation

The easiest way of installing DLStream would be the following:

(Make sure that you are working in the same environment that you installed your pose estimation provider of choice in!)

git clone https://github.com/SchwarzNeuroconLab/DeepLabStream.git
cd DeepLabStream
pip install -r requirements.txt

If you want to create a new environment for DLStream (we recommend using Anaconda), just run the following in an Anaconda prompt:

conda create -n dlstream python=3.6
conda activate dlstream
pip install -r requirements.txt

You can replace "dlstream" in conda create -n dlstream python=3.6 with a different name. If you are planning to test/use multiple pose estimation providers, we recommend a naming scheme like this to avoid confusion:

  • SLEAP: dlstream_sleap

  • DLC/DLC-Live: dlstream_dlc

  • DPK: dlstream_dpk

In this case, you still need to install DeepPoseKit, DLC-Live or DLC (including tensorflow) on top of this!

For SLEAP, it is recommended to install SLEAP first and then DLStream. If you did not do this, reinstalling OpenCV should solve the issue:

pip uninstall opencv-python-headless
pip uninstall opencv-python
pip install opencv-python==3.4.5.20

Config editing

You need to modify the DeepLabStream config in settings.ini after installation to specify which model it will work with.

  1. Change the variables in the [Streaming] portion of the config to the values most suitable for you:

    • RESOLUTION - choose a resolution supported by your camera and network
    • FRAMERATE - choose a framerate supported by your camera
    • OUTPUT_DIRECTORY - folder for data and video output
    • CAMERA_SOURCE - if you are not using RealSense or Basler cameras, you need to choose the correct source for your camera manually. It should be recognized by OpenCV.
    • STREAMING_SOURCE - you can use "camera", "ipwebcam" or "video" to select your input source
  2. Change the variables in the [Pose Estimation] portion of the config to select your network of choice (an example configuration is sketched below this list):

    • MODEL_ORIGIN = possible origins are DLC, DLC-LIVE, MADLC, DEEPPOSEKIT and SLEAP

    • MODEL_PATH =

        - `DLC-LIVE`, `DEEPPOSEKIT`: the full path to the exported model

        - `DLC`: the folder of your DLC installation (see below)

        - `SLEAP`: the path to the folder of the model or models that you want to use:

            - single instance tracking: D:\SLEAP\2animal_diffcolor\models\210210_132803.single_instance.1227

            - multiple instance tracking: D:\SLEAP\example_data\models\baseline_model.centroids, D:\SLEAP\example_data\models\baseline_model.topdown

    • MODEL_NAME = the name of the model you want to use. Only necessary for original DLC and for benchmarking.

    • ALL_BODYPARTS = used in DLC-LIVE, DeepPoseKit and SLEAP (for now) to create the posture; the body parts have to be in the right order! If left empty or too short, auto-naming in the style bp0, bp1, ... will be enabled.
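As an illustration, a configuration for a DLC-LIVE network could look roughly like the sketch below. The section and variable names are those described above; all values (paths, resolution, framerate, body part names) are placeholders, so keep the value formats used in the settings.ini template that ships with DLStream.

[Streaming]
RESOLUTION = 848, 480
FRAMERATE = 30
OUTPUT_DIRECTORY = C:\DLStream_output
CAMERA_SOURCE = 0
STREAMING_SOURCE = camera

[Pose Estimation]
MODEL_ORIGIN = DLC-LIVE
MODEL_PATH = C:\exported_models\my_dlc_live_model
MODEL_NAME = my_dlc_live_model
ALL_BODYPARTS = nose, leftear, rightear, tailbase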

For original DLC and early DLStream versions:

  1. Change the MODEL_PATH (early versions: DLC_PATH) variable to wherever your DeepLabCut installation is.

If you installed it like a package with DeepLabCut's provided environment files, it will be approximately here in your Anaconda environment: ../anaconda3/envs/dlc-ubuntu-GPU/lib/python3.6/site-packages/deeplabcut. Of course, the specific folder may vary; the snippet after this list shows one way to locate it.

  2. Change the MODEL_NAME (early versions: MODEL) variable to the name of your model, found in the ../deeplabcut/pose_estimation_tensorflow/models folder (../deeplabcut/pose_estimation/models for DLC v1.11). If you are using DeepLabCut 2.+, you first have to copy the model folder from the corresponding DLC project directory into the aforementioned pose estimation models folder.
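If you are unsure where DeepLabCut is installed, the following minimal sketch (run inside the environment where DeepLabCut is installed) is one way to locate the folder:

# print the installation folder of the deeplabcut package,
# which is what MODEL_PATH (early: DLC_PATH) should point to for original DLC
import os
import deeplabcut

print(os.path.dirname(deeplabcut.__file__))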

Multicam support

To correctly enable multiple camera support, you not only need to set the MULTIPLE_DEVICES variable to True in the config, but also have to edit one of the DeepLabCut files.

Locate the file predict.py in your DeepLabCut folder (for DLC v2.x it is in the ../deeplabcut/pose_estimation_tensorflow/nnet folder), and change the following line in the function setup_pose_prediction

sess = TF.Session()

to the following lines, maintaining the correct indentation (use the same TensorFlow alias, tf or TF, that the rest of the file uses):

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

Intel RealSense support

DeepLabStream was written with Intel RealSense camera support in mind, to be able to get depth data and use infrared cameras for experiments in low-light conditions.

To enable these features, you need to install an additional Python library: PyRealSense2

pip install pyrealsense2

In an ideal scenario, that will install it fully, but in some specific cases, for example if you are using Python 3.5 on a Windows machine, a corresponding wheel file may not be available. If that is the case, you need to manually build it from source from the official GitHub repository.
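To verify that the library works and your camera is detected, a minimal sketch using the pyrealsense2 API could look like this:

# list all connected RealSense devices; no output means no camera was found
import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.devices:
    print(dev.get_info(rs.camera_info.name))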

Basler Pylon support

DeepLabStream also supports the use of Basler cameras through their Python wrapper pypylon.

To enable this, you need to install an additional Python library: PyPylon

pip install pypylon

or use the officially provided instructions:

git clone https://github.com/basler/pypylon.git
cd pypylon
pip install .
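To check that your Basler camera is detected through pypylon, a minimal sketch could look like this:

# list all Basler cameras that pypylon can see
from pypylon import pylon

for dev in pylon.TlFactory.GetInstance().EnumerateDevices():
    print(dev.GetFriendlyName())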

Generic camera support

If you do not wish to use either Intel RealSense or Basler cameras, DeepLabStream can work with any camera supported by OpenCV.

By default, DeepLabStream will try to open a camera from source 0 (like cv2.VideoCapture(0)), but you can modify this and use a camera from any source. The resolution and framerate described in the config will also apply, but beware that OpenCV does not always support every native camera resolution and/or framerate; some experimenting might be required.

Very important note: with this generic camera mode you will not be able to use multiple cameras!
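To check whether OpenCV can open your camera with the settings from your config, a minimal sketch (the camera source, resolution and framerate below are just examples) could look like this:

# try to open camera source 0, request a resolution/framerate,
# and report what OpenCV actually delivers
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 848)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 30)

ret, frame = cap.read()
print('Frame grabbed:', ret)
print('Resolution:', cap.get(cv2.CAP_PROP_FRAME_WIDTH), 'x', cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print('FPS:', cap.get(cv2.CAP_PROP_FPS))
cap.release()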

IP webcam support

If you wish to use a generic webcam connected to a computer on your network (rather than directly to your DLStream computer), you can use the [IPWEBCAM] section to configure this. We use the SmoothStream code as a basis, so you will need to set up your webcam on the sending computer using their repo. Note that this will most likely result in a framerate drop due to network traffic and is not recommended. IPWEBCAM = True will override any other camera input, but not video.

You will need to install pyzmq in your DLStream environment.

pip install pyzmq==20.0.0

Prerecorded Video support

If you wish to use a prerecorded video as input for DLStream, you can use the parameters in the [Video] section for it. Note that VIDEO = True will override any camera input.

Stimulation and GPIO output

Currently, DLStream supports NI, Raspberry Pi and Arduino boards for GPIO output to trigger stimulation from external devices.

Check out the OUT-OF-THE-BOX section to see how to set up those devices.

Testing

To properly test your DeepLabStream installation, we included a testing script that you can run in three different modes. DeepLabStream.py allows you to test your cameras and your DeepLabCut installation, and to benchmark your DeepLabStream performance.

  1. Run the following command to test your cameras:
    python DeepLabStream.py
  2. Next, you can test how your DeepLabCut installation behaves and whether you correctly set the DeepLabCut path in the config:
    python DeepLabStream.py --dlc-enabled
  3. And finally, you can benchmark your system automatically:
    python DeepLabStream.py --dlc-enabled --benchmark-enabled

The stream will run until it has analyzed 3000 frames (you can stop it manually at any point by pressing 'Q' while the stream window is in focus). It will then show you detailed statistics of the overall performance timings, the analysis timings, the percentage of frames in which tracking was lost, and your average FPS.

Recording testing

Additionally, you can test and see the results of the built-in video recorder. Run the following command to test it:

python DeepLabStream.py --recording-enabled

This will record the video feed from the camera to your OUTPUT_DIRECTORY. You can also add this flag to any of the previously mentioned tests to check performance with recording enabled.

Important note: recording will always save only the "raw" video, without analysis, at a framerate as close to the specified one as possible.

Tensorflow 2.* and RTX 30XX support for DLC

The problem

DLC (both versions 1.x and 2.x) only supports tensorflow 1.*

Newer GPUs (the RTX 30XX series) do not support CUDA 10, only CUDA 11

Tensorflow 1.* does not support CUDA 11

The solution

(!) This solution works at the moment of writing (03.12.2020) and is subject to change in the future

  1. Uninstall tensorflow 1.* and tensorflow-gpu 1.*

  2. Uninstall CUDA 10

  3. Uninstall cudnn 7.* or earlier

    (!) The steps above also apply to conda environments with cudatoolkit and cudnn. Check that conda list does not return anything related to CUDA, cuDNN or tensorflow.

  4. Install CUDA 11.1

  5. Download cuDNN for CUDA 11.1 and place all files from there according to the instructions

  6. Go to \CUDA\v11.1\bin, copy the file cusolver64_11.dll, paste it in the same directory, and rename the copy to cusolver64_10.dll

  7. pip install tf-nightly-gpu

  8. pip install tf_slim

  9. Clone the alpha version of DLC with tensorflow 2.* support (deeplabcut-core) from here

  10. Change your predict.py file in deeplabcut-core (see Multicam support to locate this file) to:

gpus = tf.config.experimental.list_physical_devices(device_type='GPU')
for gpu in gpus:
    # memory growth has to be set before the session initializes the GPU
    tf.config.experimental.set_memory_growth(gpu, True)
sess = TF.Session()
  11. Install it as a package from the local repo (cd into the repo folder, then pip install -e .)
  12. Change the imports of DLC in poser.py to deeplabcutcore:
import deeplabcutcore.pose_estimation_tensorflow.nnet.predict as predict
from deeplabcutcore.pose_estimation_tensorflow.config import load_config
  13. Check that your GPU is enabled in tensorflow in a python console:
import tensorflow as tf
tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)

It should return True.

After that, your installation of DeepLabStream should work with tensorflow 2.* and CUDA 11, utilizing your GPU properly.
