Add the YOLOv5 GPU preprocess #395

Open · wants to merge 2 commits into base `main`
Mode changed from 100644 to 100755, file contents unchanged:

- .clang-format
- .gitattributes
- .github/ISSUE_TEMPLATE/bug-report.yml
- .github/ISSUE_TEMPLATE/config.yml
- .github/ISSUE_TEMPLATE/documentation.yml
- .github/ISSUE_TEMPLATE/feature-request.yml
- .github/dependabot.yml
- .github/workflows/ci-test.yml
- .github/workflows/code-format.yml
- .github/workflows/codeql-analysis.yml
- .github/workflows/gh-pages.yml
- .github/workflows/pypi-release.yml
- .gitignore
- .pre-commit-config.yaml
- LICENSE
- deployment/libtorch/CMakeLists.txt
- deployment/libtorch/cmdline.h
- deployment/libtorch/main.cpp
- deployment/ncnn/CMakeLists.txt
- deployment/ncnn/main.cpp
- deployment/ncnn/yolort-opt.param
- deployment/onnxruntime/CMakeLists.txt
- deployment/onnxruntime/cmdline.h
- deployment/onnxruntime/main.cpp
deployment/tensorrt/CMakeLists.txt: 8 changes (7 additions, 1 deletion); mode 100644 → 100755
@@ -5,6 +5,12 @@ option(TENSORRT_DIR "Path to built TensorRT directory." STRING)
message(STATUS "TENSORRT_DIR: ${TENSORRT_DIR}")

find_package(OpenCV REQUIRED)
# Build the GPU preprocess library
add_subdirectory(preprocess)
include_directories(./preprocess)
# Add the preprocess build directory to the linker search path
message(STATUS "Preprocess library dir: ${CMAKE_BINARY_DIR}/preprocess")
link_directories(${CMAKE_BINARY_DIR}/preprocess)

if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "MSVC")
add_compile_options(-Wall)
@@ -39,4 +45,4 @@ include_directories(${TENSORRT_DIR}/include)
link_directories(${TENSORRT_DIR}/lib)

add_executable(${PROJECT_NAME} main.cpp)
target_link_libraries(${PROJECT_NAME} ${OpenCV_LIBS} nvinfer cudart nvonnxparser nvinfer_plugin)
target_link_libraries(${PROJECT_NAME} preprocess ${OpenCV_LIBS} nvinfer cudart nvonnxparser nvinfer_plugin)
deployment/tensorrt/README.md: 19 changes (19 additions, 0 deletions)
@@ -13,6 +13,8 @@ The TensorRT inference example of `yolort`.

Here we will mainly discuss how to use the C++ interface; we recommend that you check out our [tutorial](https://zhiqwang.com/yolov5-rt-stack/notebooks/onnx-graphsurgeon-inference-tensorrt.html) first.

GPU preprocess reference code: [tensorrtx](https://github.com/wang-xinyu/tensorrtx/tree/master/yolov5)
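
For background, here is a minimal sketch of how such a GPU letterbox preprocess works. It uses nearest-neighbor sampling for brevity (the tensorrtx reference uses a bilinear warp affine), and the kernel and all names are illustrative rather than the code added in this PR:

```cpp
#include <cmath>
#include <cstdint>

#include <cuda_runtime.h>

// Letterbox resize + BGR->RGB + HWC->CHW + /255 normalization in one kernel.
// src: device pointer to an HWC uint8 BGR image; dst: device CHW float buffer.
__global__ void letterbox_kernel(const uint8_t* src, int src_w, int src_h,
                                 float* dst, int dst_w, int dst_h,
                                 float scale, int pad_x, int pad_y) {
  int x = blockIdx.x * blockDim.x + threadIdx.x;
  int y = blockIdx.y * blockDim.y + threadIdx.y;
  if (x >= dst_w || y >= dst_h) return;

  float b = 114.f, g = 114.f, r = 114.f;  // YOLOv5 letterbox fill color
  int sx = __float2int_rd((x - pad_x) / scale);
  int sy = __float2int_rd((y - pad_y) / scale);
  if (sx >= 0 && sx < src_w && sy >= 0 && sy < src_h) {
    const uint8_t* p = src + (sy * src_w + sx) * 3;  // nearest-neighbor sample
    b = p[0]; g = p[1]; r = p[2];
  }
  int area = dst_w * dst_h;
  dst[0 * area + y * dst_w + x] = r / 255.f;  // channel order becomes RGB
  dst[1 * area + y * dst_w + x] = g / 255.f;
  dst[2 * area + y * dst_w + x] = b / 255.f;
}

// Host-side launcher: computes the letterbox transform, then runs the kernel.
void gpu_letterbox(const uint8_t* d_src, int src_w, int src_h,
                   float* d_dst, int dst_w, int dst_h, cudaStream_t stream) {
  float scale = fminf(dst_w / (float)src_w, dst_h / (float)src_h);
  int pad_x = (dst_w - (int)roundf(src_w * scale)) / 2;
  int pad_y = (dst_h - (int)roundf(src_h * scale)) / 2;
  dim3 block(16, 16);
  dim3 grid((dst_w + block.x - 1) / block.x, (dst_h + block.y - 1) / block.y);
  letterbox_kernel<<<grid, block, 0, stream>>>(d_src, src_w, src_h,
                                               d_dst, dst_w, dst_h,
                                               scale, pad_x, pad_y);
}
```

In this PR the preprocess is built as a separate library (see the CMake changes above) and linked into `yolort_trt`.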

1. Export your custom model to TensorRT format

We provide a CLI tool to export a custom model checkpoint trained with yolov5 to a TensorRT serialized engine.
@@ -21,6 +23,12 @@ Here we will mainly discuss how to use the C++ interface
python tools/export_model.py --checkpoint_path {path/to/your/best.pt} --include engine
```

If you want to run inference with a larger batch size, such as 8, use this command:

```bash
python tools/export_model.py --checkpoint_path {path/to/your/best.pt} --include engine --batch_size 8
```

Note: This CLI will output a pair of an ONNX model and a TensorRT serialized engine if you have the full TensorRT Python environment; otherwise it will only output an ONNX model with the ".trt.onnx" suffix. You can then also use the [`trtexec`](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#trtexec) tool provided by TensorRT to export the serialized engine as below:

```bash
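# The original command is collapsed in this diff view; this invocation is
# illustrative (the file names are assumptions, the flags are standard trtexec flags).
trtexec --onnx=best.trt.onnx --saveEngine=best.engine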
```

@@ -62,13 +70,24 @@ Here we will mainly discuss how to use the C++ interface

- cudnn_cnn_infer64_8.dll, cudnn_ops_infer64_8.dll, cudnn64_8.dll, nvinfer.dll, nvinfer_plugin.dll, nvonnxparser.dll, zlibwapi.dll (required by CUDA and cuDNN)
- opencv_corexxx.dll, opencv_imgcodecsxxx.dll, opencv_imgprocxxx.dll (OpenCV dependencies, or you can use a static OpenCV build instead)
- The timing code uses **sys/time.h**, which is not available on Windows; building there will fail to compile. A portable alternative is sketched after this list.
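
Should you need Windows support, `std::chrono` is a portable way to time the inference loop; a minimal sketch (not part of this PR):

```cpp
#include <chrono>
#include <iostream>

int main() {
  auto t0 = std::chrono::steady_clock::now();
  // ... run preprocessing + inference here ...
  auto t1 = std::chrono::steady_clock::now();
  auto ms =
      std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
  std::cout << "inference took " << ms << " ms\n";
  return 0;
}
```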

1. Now, you can infer your own images.

```bash
./yolort_trt --image {path/to/your/image}
--model_path {path/to/your/serialized/tensorrt/engine}
--class_names {path/to/your/class/names}
--batch 1
```

If you want to run inference with a batch size of 8, use the `--images_folder` option:

```bash
./yolort_trt --images_folder {path/to/your/imagefolder}
--model_path {path/to/your/serialized/tensorrt/engine}
--class_names {path/to/your/class/names}
--batch 8
```

The above `yolort_trt` determines whether it needs to build the serialized engine from ONNX based on the file suffix: it performs serialization only when the `--model_path` argument ends with `.onnx`; files with any other suffix are treated as TensorRT serialized engines.
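
A minimal sketch of that suffix check (the helper name is hypothetical, not the PR's actual code):

```cpp
#include <string>

// True when --model_path ends with ".onnx", i.e. the serialized engine
// still needs to be built; any other suffix is loaded as an engine directly.
bool needs_serialization(const std::string& model_path) {
  const std::string suffix = ".onnx";
  return model_path.size() >= suffix.size() &&
         model_path.compare(model_path.size() - suffix.size(),
                            suffix.size(), suffix) == 0;
}
```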
Mode changed from 100644 to 100755, file contents unchanged: deployment/tensorrt/cmdline.h