This repository has been archived by the owner on Aug 17, 2024. It is now read-only.

unimatch_onnx

ONNX or TensorRT inference demo for Unimatch (Unifying Flow, Stereo and Depth Estimation).

Requirements

ONNX model

  • OpenCV
  • numpy
  • onnxruntime

※ tested with onnxruntime==1.13.1

TensorRT model

  • OpenCV
  • numpy
  • TensorRT
  • pycuda

※ tested with TensorRT==8.5.2.2

Model Download

Google Drive

Usage

ONNX model

Stereo Model

usage: demo_stereo_onnx.py [-h] [-m MODEL_PATH] [-l LEFT_IMAGE] [-r RIGHT_IMAGE] [-o OUTPUT_PATH]

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL_PATH, --model_path MODEL_PATH
                        ONNX model file path. (default: unimatch_stereo_scale1_1x3x480x640_sim.onnx)
  -l LEFT_IMAGE, --left_image LEFT_IMAGE
                        input left image. (default: data/left.png)
  -r RIGHT_IMAGE, --right_image RIGHT_IMAGE
                        input right image. (default: data/right.png)
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
                        output colored disparity image path. (default: output.png)
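The model name above implies a fixed 1x3x480x640 input. As a rough sketch of how a left/right image pair would be shaped into that tensor, the helper below converts an OpenCV-style HxWx3 uint8 BGR image to a 1x3xHxW float32 array. This is an illustration, not the demo script's exact code: resizing (done with OpenCV in the demos) and any normalization the exported model expects are omitted here, and the input names in the commented usage are assumptions.

```python
import numpy as np

def preprocess(image_bgr: np.ndarray, height: int = 480, width: int = 640) -> np.ndarray:
    """Convert an HxWx3 uint8 BGR image to a 1x3xHxW float32 tensor.
    Resizing and model-specific normalization are intentionally omitted."""
    assert image_bgr.shape[:2] == (height, width), "resize the image first"
    rgb = image_bgr[:, :, ::-1].astype(np.float32)  # BGR -> RGB
    chw = np.transpose(rgb, (2, 0, 1))              # HWC -> CHW
    return chw[np.newaxis]                          # add batch dim -> 1x3xHxW

# Hypothetical usage with onnxruntime (tensor names are assumptions,
# not taken from the actual model):
# sess = onnxruntime.InferenceSession("unimatch_stereo_scale1_1x3x480x640_sim.onnx")
# disparity, = sess.run(None, {"left": preprocess(left_img), "right": preprocess(right_img)})
```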

Optical Flow Model

usage: demo_flow_onnx.py [-h] [-m MODEL_PATH] [-i1 IMAGE1] [-i2 IMAGE2] [-o OUTPUT_PATH]

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL_PATH, --model_path MODEL_PATH
                        ONNX model file path. (default: gmflow-scale1-mixdata-train320x576-4c3a6e9a_1x3x480x640_sim.onnx)
  -i1 IMAGE1, --image1 IMAGE1
                        input image1. (default: data/flow/frame1.png)
  -i2 IMAGE2, --image2 IMAGE2
                        input image2. (default: data/flow/frame2.png)
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
                        output colored flow image path. (default: output.png)
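Both demos write a colored visualization of the raw model output. As a minimal stand-in for that step (the actual scripts likely use richer colorings, e.g. a colormap for disparity or an HSV wheel for flow), the sketch below min-max normalizes a float map and replicates it to a 3-channel uint8 image suitable for saving with OpenCV:

```python
import numpy as np

def colorize(value_map: np.ndarray) -> np.ndarray:
    """Min-max normalize a float HxW map (disparity or flow magnitude)
    to a uint8 HxWx3 grayscale image. A simplified stand-in for the
    demos' own coloring, for quick inspection only."""
    v = value_map.astype(np.float32)
    v = (v - v.min()) / max(float(v.max() - v.min()), 1e-6)
    gray = (v * 255).astype(np.uint8)
    return np.stack([gray] * 3, axis=-1)  # replicate to 3 channels

# e.g. cv2.imwrite("output.png", colorize(disparity))
```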

TensorRT

Stereo Model

  • Before running the TensorRT demo, convert the ONNX model file to an engine file for your GPU:
bash convert_onnx2trt.bash <onnx-model-path> <output-engine-path>
usage: demo_stereo_trt.py [-h] [-e ENGINE_PATH] [-ih INPUT_HEIGHT] [-iw INPUT_WIDTH] [-l LEFT_IMAGE] [-r RIGHT_IMAGE]
                          [-o OUTPUT_PATH]

optional arguments:
  -h, --help            show this help message and exit
  -e ENGINE_PATH, --engine_path ENGINE_PATH
                        TensorRT engine file path. (default: unimatch_stereo_scale1_1x3x480x640_sim.trt)
  -ih INPUT_HEIGHT, --input_height INPUT_HEIGHT
                        Model input height. (default: 480)
  -iw INPUT_WIDTH, --input_width INPUT_WIDTH
                        Model input width. (default: 640)
  -l LEFT_IMAGE, --left_image LEFT_IMAGE
                        input left image. (default: data/left.png)
  -r RIGHT_IMAGE, --right_image RIGHT_IMAGE
                        input right image. (default: data/right.png)
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
                        output colored disparity image path. (default: output.png)