# Build for Windows


Currently, MMDeploy provides only a build-from-source method for the Windows platform. Prebuilt packages will be released in the future.

## Build From Source

All the commands listed in the following sections are verified on Windows 10.

### Install Toolchains

1. Download and install Visual Studio 2019.
2. Add the path of cmake to the environment variable `PATH`, e.g., "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin" (see the sketch after this list).
3. Install the CUDA Toolkit if an NVIDIA GPU is available. You can refer to the official guide.
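A minimal sketch for step 2, assuming the default Visual Studio 2019 Community install location quoted above; adjust the path to match your edition and install directory:

```powershell
# Prepend the CMake bundled with Visual Studio 2019 to PATH (assumed default location)
$env:path = "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin;" + $env:path
# Verify that cmake now resolves from the updated PATH
cmake --version
```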

### Install Dependencies

#### Install Dependencies for Model Converter

**conda**

Please install conda according to the official guide. After installation, open Anaconda Powershell Prompt under the Start Menu as the administrator, because:

1. All the commands listed in the following text are verified in Anaconda Powershell.
2. As an administrator, you can install the third-party libraries to the system path so as to simplify the MMDeploy build command.

Note: if you are familiar with how cmake works, you can also use Anaconda Powershell Prompt as an ordinary user.

**PyTorch (>=1.8.0)**

Install PyTorch>=1.8.0 by following the official instructions. Make sure the CUDA version PyTorch requires matches the one on your host.

```powershell
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
```

**mmcv-full**

Install mmcv-full as follows. Refer to the guide for details. Note that the variables are referenced with the `$env:` prefix so that PowerShell expands them inside the index URL.

```powershell
$env:cu_version="cu111"
$env:torch_version="torch1.8.0"
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/$env:cu_version/$env:torch_version/index.html
```
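A quick sanity check (just a sketch) that the converter dependencies line up with the cu111 builds installed above:

```powershell
# PyTorch version and whether it can see the GPU
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# mmcv-full should import cleanly against the installed PyTorch
python -c "import mmcv; print(mmcv.__version__)"
```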

#### Install Dependencies for SDK

You can skip this section if you are only interested in the model converter.

**OpenCV (>=3.0)**

1. Find and download OpenCV 3+ for Windows from here.
2. You can download the prebuilt package and install it to the target directory, or you can build OpenCV from source.
3. Find where `OpenCVConfig.cmake` is located in the installation directory, and export its path to the environment variable `PATH` like this:

```powershell
$env:path = "\the\path\where\OpenCVConfig.cmake\locates;" + "$env:path"
```
**pplcv**

A high-performance image processing library of openPPL. It is optional and only needed when the CUDA platform is enabled.

```powershell
git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
git checkout tags/v0.7.0 -b v0.7.0
$env:PPLCV_DIR = "$pwd"
mkdir pplcv-build
cd pplcv-build
cmake .. -G "Visual Studio 16 2019" -T v142 -A x64 -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=install -DHPCC_USE_CUDA=ON -DPPLCV_USE_MSVC_STATIC_RUNTIME=OFF
cmake --build . --config Release -- /m
cmake --install . --config Release
cd ../..
```
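The install step above stages pplcv into `pplcv-build/install`, which is where the SDK build later picks it up via `-Dpplcv_DIR`. A quick check (sketch):

```powershell
# The SDK build (cuda + TensorRT recipe below) expects the cmake package here
Test-Path "$env:PPLCV_DIR\pplcv-build\install\lib\cmake\ppl"
```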

### Install Inference Engines for MMDeploy

Both MMDeploy's model converter and SDK share the same inference engines. You can select the inference engines you are interested in and install them by following the commands below.

Currently, MMDeploy has verified only ONNXRuntime and TensorRT on the Windows platform. The others will be supported in the future.

**ONNXRuntime** (package: onnxruntime>=1.8.1)

1. Install the Python package:

```powershell
pip install onnxruntime==1.8.1
```

2. Download the Windows prebuilt binary package from here. Extract it and export environment variables as below:

```powershell
Invoke-WebRequest -Uri https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-win-x64-1.8.1.zip -OutFile onnxruntime-win-x64-1.8.1.zip
Expand-Archive onnxruntime-win-x64-1.8.1.zip .
$env:ONNXRUNTIME_DIR = "$pwd\onnxruntime-win-x64-1.8.1"
$env:path = "$env:ONNXRUNTIME_DIR\lib;" + $env:path
```
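A quick import check (sketch) to confirm the Python package installed correctly:

```powershell
# Should print 1.8.1
python -c "import onnxruntime; print(onnxruntime.__version__)"
```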
**TensorRT** (package: TensorRT)

1. Log in to NVIDIA and download the TensorRT zip file that matches the CPU architecture and CUDA version you are using from here. Follow the guide to install TensorRT.
2. Here is an example of installing TensorRT 8.2 GA Update 2 for Windows x86_64 and CUDA 11.x that you can refer to. First of all, click here to download CUDA 11.x TensorRT 8.2.3.0, and then install it and the other dependencies as below. Note that `TENSORRT_DIR` is set before the wheel is installed, since the `pip install` line refers to it:

```powershell
cd \the\path\of\tensorrt\zip\file
Expand-Archive TensorRT-8.2.3.0.Windows10.x86_64.cuda-11.4.cudnn8.2.zip .
$env:TENSORRT_DIR = "$pwd\TensorRT-8.2.3.0"
pip install $env:TENSORRT_DIR\python\tensorrt-8.2.3.0-cp37-none-win_amd64.whl
$env:path = "$env:TENSORRT_DIR\lib;" + $env:path
pip install pycuda
```
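Another quick sanity check (sketch; the wheel above targets Python 3.7):

```powershell
# Should print 8.2.3.0
python -c "import tensorrt; print(tensorrt.__version__)"
```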
**cuDNN**

1. Download the cuDNN build that matches the CPU architecture, CUDA version and TensorRT version you are using from the cuDNN Archive. In the above TensorRT installation example, TensorRT requires cudnn8.2. Thus, you can download CUDA 11.x cuDNN 8.2.
2. Extract the zip file and set the environment variables:

```powershell
cd \the\path\of\cudnn\zip\file
Expand-Archive cudnn-11.3-windows-x64-v8.2.1.32.zip .
$env:CUDNN_DIR="$pwd\cuda"
$env:path = "$env:CUDNN_DIR\bin;" + $env:path
```
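To confirm cuDNN landed in the layout the SDK build expects (a sketch; the DLL name assumes cuDNN 8.x packaging):

```powershell
# Should return True if the archive extracted as above
Test-Path "$env:CUDNN_DIR\bin\cudnn64_8.dll"
```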
**PPL.NN** (package: ppl.nn): TODO

**OpenVINO** (package: openvino): TODO

**ncnn** (package: ncnn): TODO

### Build MMDeploy

```powershell
cd \the\root\path\of\MMDeploy
$env:MMDEPLOY_DIR="$pwd"
```

#### Build Options Spec

| NAME | VALUE | DEFAULT | REMARK |
| :--- | :--- | :--- | :--- |
| MMDEPLOY_BUILD_SDK | {ON, OFF} | OFF | Switch to build MMDeploy SDK |
| MMDEPLOY_BUILD_SDK_PYTHON_API | {ON, OFF} | OFF | Switch to build MMDeploy SDK Python package |
| MMDEPLOY_BUILD_TEST | {ON, OFF} | OFF | Switch to build MMDeploy SDK unittest cases |
| MMDEPLOY_TARGET_DEVICES | {"cpu", "cuda"} | cpu | Enable target devices. You can enable more by passing a semicolon separated list of device names to MMDEPLOY_TARGET_DEVICES, e.g. `-DMMDEPLOY_TARGET_DEVICES="cpu;cuda"` |
| MMDEPLOY_TARGET_BACKENDS | {"trt", "ort", "pplnn", "ncnn", "openvino"} | N/A | Enable inference engines. By default, no target inference engine is set, since it highly depends on the use case. When more than one engine is specified, pass a semicolon separated list of backend names, e.g. `-DMMDEPLOY_TARGET_BACKENDS="trt;ort;pplnn;ncnn;openvino"`. After specifying an inference engine, its package path has to be passed to cmake; see the list after this table. |
| MMDEPLOY_CODEBASES | {"mmcls", "mmdet", "mmseg", "mmedit", "mmocr", "all"} | all | Enable codebases' postprocess modules. You can provide a semicolon separated list of codebase names to enable them, or pass `all` to enable them all, i.e., `-DMMDEPLOY_CODEBASES=all` |
| MMDEPLOY_SHARED_LIBS | {ON, OFF} | ON | Switch to build MMDeploy SDK as shared libraries or static libraries |

After specifying the inference engines, their package paths have to be passed to cmake as follows:

1. trt: TensorRT. `TENSORRT_DIR` and `CUDNN_DIR` are needed.

   ```powershell
   -DTENSORRT_DIR=$env:TENSORRT_DIR
   -DCUDNN_DIR=$env:CUDNN_DIR
   ```

2. ort: ONNXRuntime. `ONNXRUNTIME_DIR` is needed.

   ```powershell
   -DONNXRUNTIME_DIR=$env:ONNXRUNTIME_DIR
   ```

3. pplnn: PPL.NN. `pplnn_DIR` is needed. MMDeploy hasn't verified it yet.
4. ncnn: ncnn. `ncnn_DIR` is needed. MMDeploy hasn't verified it yet.
5. openvino: OpenVINO. `InferenceEngine_DIR` is needed. MMDeploy hasn't verified it yet.

#### Build Model Converter

##### Build Custom Ops

If one of the inference engines among ONNXRuntime, TensorRT and ncnn is selected, you have to build the corresponding custom ops.

- **ONNXRuntime Custom Ops**

  ```powershell
  mkdir build -ErrorAction SilentlyContinue
  cd build
  cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 -DMMDEPLOY_TARGET_BACKENDS="ort" -DONNXRUNTIME_DIR="$env:ONNXRUNTIME_DIR"
  cmake --build . --config Release -- /m
  cmake --install . --config Release
  ```

- **TensorRT Custom Ops**

  ```powershell
  mkdir build -ErrorAction SilentlyContinue
  cd build
  cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 -DMMDEPLOY_TARGET_BACKENDS="trt" -DTENSORRT_DIR="$env:TENSORRT_DIR" -DCUDNN_DIR="$env:CUDNN_DIR"
  cmake --build . --config Release -- /m
  cmake --install . --config Release
  ```
- **ncnn Custom Ops**

  TODO

##### Install Model Converter

```powershell
cd $env:MMDEPLOY_DIR
pip install -e .
```
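A quick check (sketch) that the editable install succeeded:

```powershell
# Should print the MMDeploy version without import errors
python -c "import mmdeploy; print(mmdeploy.__version__)"
```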

**Note**

- Some dependencies are optional. Simply running `pip install -e .` will only install the minimum runtime requirements. To use optional dependencies, install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling pip (e.g. `pip install -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, `optional`.

#### Build SDK

MMDeploy provides the two recipes shown below for building the SDK with ONNXRuntime and TensorRT as inference engines respectively. You can enable other engines in the same way by adjusting the build options.

- **cpu + ONNXRuntime**

  ```powershell
  cd $env:MMDEPLOY_DIR
  mkdir build -ErrorAction SilentlyContinue
  cd build
  cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
      -DMMDEPLOY_BUILD_SDK=ON `
      -DMMDEPLOY_TARGET_DEVICES="cpu" `
      -DMMDEPLOY_TARGET_BACKENDS="ort" `
      -DMMDEPLOY_CODEBASES="all" `
      -DONNXRUNTIME_DIR="$env:ONNXRUNTIME_DIR"

  cmake --build . --config Release -- /m
  cmake --install . --config Release
  ```

- **cuda + TensorRT**

  ```powershell
  cd $env:MMDEPLOY_DIR
  mkdir build -ErrorAction SilentlyContinue
  cd build
  cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
    -DMMDEPLOY_BUILD_SDK=ON `
    -DMMDEPLOY_TARGET_DEVICES="cuda" `
    -DMMDEPLOY_TARGET_BACKENDS="trt" `
    -DMMDEPLOY_CODEBASES="all" `
    -Dpplcv_DIR="$env:PPLCV_DIR/pplcv-build/install/lib/cmake/ppl" `
    -DTENSORRT_DIR="$env:TENSORRT_DIR" `
    -DCUDNN_DIR="$env:CUDNN_DIR"

  cmake --build . --config Release -- /m
  cmake --install . --config Release
  ```

#### Build Demo

```powershell
cd $env:MMDEPLOY_DIR\build\install\example
mkdir build -ErrorAction SilentlyContinue
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
  -DMMDeploy_DIR="$env:MMDEPLOY_DIR/build/install/lib/cmake/MMDeploy"

cmake --build . --config Release -- /m

$env:path = "$env:MMDEPLOY_DIR/build/install/bin;" + $env:path
```
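With the Visual Studio generator, the demo executables are typically placed under the `Release` subdirectory of the build tree. A hypothetical invocation (sketch; the executable name and both paths are placeholders, not verified here):

```powershell
# Run a demo against a converted SDK model directory and a test image (placeholder paths)
cd Release
.\object_detection.exe cpu \path\to\converted\model\directory \path\to\test.jpg
```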

**Note**

1. Release and Debug libraries cannot be mixed. If MMDeploy is built in Release mode, all its dependent third-party libraries have to be built in Release mode too, and vice versa.