
TensorRT inference with C++ for yolov7 #8599

Open
linghu8812 opened this issue Jul 12, 2022 · 10 comments
linghu8812 commented Jul 12, 2022

Hello everyone, the repo that supports TensorRT inference with C++ for YOLOv4 (#7002), Scaled-YOLOv4 (WongKinYiu/ScaledYOLOv4#56), YOLOv5 (ultralytics/yolov5#1597), and YOLOv6 (meituan/YOLOv6#122) now also supports YOLOv7 inference. All the YOLOv7 pretrained models can be converted to ONNX models and then to TensorRT engines.

1. Export ONNX Model

First download the yolov7 models to the weights folder, then use the following commands to export the ONNX model:

git clone https://github.com/linghu8812/yolov7.git
cd yolov7
python export.py --weights ./weights/yolov7.pt --simplify --grid 

If you want to export an ONNX model with a 1280 image size, add --img-size to the command:

python export.py --weights ./weights/yolov7-w6.pt --simplify --grid --img-size 1280
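Whatever --img-size is used at export time must match the preprocessing on the inference side: the frame is letterboxed (scaled with preserved aspect ratio, then padded) to the network input. A minimal sketch of that math, assuming a square input; function and variable names are illustrative, not from the repo:

```python
# Letterbox math used by YOLO-style preprocessing: scale the image to fit
# the network input size while keeping the aspect ratio, then pad the rest.
# Names here are illustrative, not taken from the tensorrt_inference repo.

def letterbox_params(img_w, img_h, input_size=640):
    """Return (scale, pad_x, pad_y) for fitting an image into a square input."""
    scale = min(input_size / img_w, input_size / img_h)
    new_w, new_h = round(img_w * scale), round(img_h * scale)
    pad_x = (input_size - new_w) // 2
    pad_y = (input_size - new_h) // 2
    return scale, pad_x, pad_y

# A 1920x1080 frame into the default 640 input: scaled to 640x360, padded vertically.
print(letterbox_params(1920, 1080))
# The same frame into a w6 model exported with --img-size 1280: scaled to 1280x720.
print(letterbox_params(1920, 1080, 1280))
```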

2. Build the yolov7_trt Project

mkdir build && cd build
cmake ..
make -j

3. Run yolov7_trt

  • Inference with yolov7:
./yolov7_trt ../config.yaml ../samples
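The config.yaml passed on the command line describes the model files and thresholds. A hypothetical sketch of what such a file might contain; every field name here is a guess, so check the config.yaml shipped in the tensorrt_inference repo:

```yaml
# HYPOTHETICAL config.yaml sketch for yolov7_trt -- field names are assumptions.
yolov7:
    onnx_file:     "../yolov7.onnx"   # ONNX model exported in step 1
    engine_file:   "../yolov7.trt"    # serialized TensorRT engine, built on first run
    labels_file:   "../coco.names"    # class names, one per line
    BATCH_SIZE:    1
    IMAGE_WIDTH:   640                # must match the --img-size used at export
    IMAGE_HEIGHT:  640
    obj_threshold: 0.4                # confidence threshold
    nms_threshold: 0.45               # NMS IoU threshold
```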

4. Results

(image: detection results on the sample images)

WongKinYiu/yolov7#95

linghu8812 added the Feature-request label on Jul 12, 2022

h030162 commented Jul 12, 2022

where is the "yolov7_trt Project"?

linghu8812 (Author) commented

where is the "yolov7_trt Project"?

https://github.com/linghu8812/tensorrt_inference/tree/master/yolov7


h030162 commented Jul 12, 2022

[07/12/2022-03:14:01] [I] [TRT] [MemUsageChange] Init CUDA: CPU +159, GPU +0, now: CPU 165, GPU 127 (MiB)
[07/12/2022-03:14:01] [I] [TRT] ----------------------------------------------------------------
[07/12/2022-03:14:01] [I] [TRT] Input filename: ../yolov7.onnx
[07/12/2022-03:14:01] [I] [TRT] ONNX IR version: 0.0.6
[07/12/2022-03:14:01] [I] [TRT] Opset version: 12
[07/12/2022-03:14:01] [I] [TRT] Producer name: pytorch
[07/12/2022-03:14:01] [I] [TRT] Producer version: 1.10
[07/12/2022-03:14:01] [I] [TRT] Domain:
[07/12/2022-03:14:01] [I] [TRT] Model version: 0
[07/12/2022-03:14:01] [I] [TRT] Doc string:
[07/12/2022-03:14:01] [I] [TRT] ----------------------------------------------------------------
[07/12/2022-03:14:01] [W] [TRT] onnx2trt_utils.cpp:362: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/12/2022-03:14:01] [E] [TRT] [graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Mul_322: broadcast dimensions must be conformable
)
[07/12/2022-03:14:01] [E] [TRT] ModelImporter.cpp:720: While parsing node number 322 [Mul -> "528"]:
[07/12/2022-03:14:01] [E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[07/12/2022-03:14:01] [E] [TRT] ModelImporter.cpp:722: input: "525"
input: "657"
output: "528"
name: "Mul_322"
op_type: "Mul"

[07/12/2022-03:14:01] [E] [TRT] ModelImporter.cpp:723: --- End node ---
[07/12/2022-03:14:01] [E] [TRT] ModelImporter.cpp:725: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Mul_322
[graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Mul_322: broadcast dimensions must be conformable
)
[07/12/2022-03:14:01] [E] Failure while parsing ONNX file
start building engine
[07/12/2022-03:14:01] [E] [TRT] 4: [network.cpp::validate::2411] Error Code 4: Internal Error (Network must have at least one output)
build engine done
yolov7_trt: /home/hanjin/work/yolov7/yolov7/tensorrt_inference/yolov7/../includes/common/common.hpp:138: void onnxToTRTModel(const string&, const string&, nvinfer1::ICudaEngine*&, const int&): Assertion `engine' failed.
Aborted (core dumped)

What are your GPU and TensorRT versions?
My GPU is a 1060 and my TensorRT version is 8.0.1.6.

linghu8812 (Author) commented


The TensorRT version I used is 7.1.3.4; I will consider supporting TensorRT 8.0 later.

gongliuqing123 commented

I meet this problem when running ./yolov7_trt ../config.yaml ../samples:
loading filename from:/home/cidi/Algorithm/objectdetect/yolov7-inference/best.trt
deserialize done
yolov7_trt: /home/cidi/Algorithm/objectdetect/yolov7-inference/yolov7.cpp:56: bool yolov7::InferenceFolder(const string&): Assertion `engine->getNbBindings() == 2' failed.
Aborted (core dumped)

Can you tell me how to solve it? Thanks.
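For context: the assertion engine->getNbBindings() == 2 expects the deserialized engine to expose exactly one input and one output binding. An engine built from an ONNX exported without --grid keeps separate detection-head outputs, so the binding count is higher. A small Python sketch of the idea, with hypothetical binding metadata rather than a real engine:

```python
# Illustrates the binding-count check behind the failing assertion.
# Binding names below are made up; a real engine is queried via the
# TensorRT API (getNbBindings / bindingIsInput), not a Python list.

def check_bindings(bindings):
    """bindings: list of (name, is_input) tuples; expect 1 input + 1 output."""
    inputs = [b for b in bindings if b[1]]
    outputs = [b for b in bindings if not b[1]]
    return len(inputs) == 1 and len(outputs) == 1

# Export WITH --grid: detection heads merged into one output -> passes.
print(check_bindings([("images", True), ("output", False)]))
# Export WITHOUT --grid: several raw head outputs -> fails the assertion.
print(check_bindings([("images", True), ("397", False),
                      ("458", False), ("519", False)]))
```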

linghu8812 (Author) commented


Try exporting the ONNX model with https://github.com/linghu8812/yolov7.

gongliuqing123 commented

While parsing node number 249 [Slice]:
ERROR: /home/cidi/onnx-tensorrt/builtin_op_importers.cpp:3154 In function importSlice:
[4] Assertion failed: -r <= axis && axis < r
[07/13/2022-14:12:56] [E] Failed to parse onnx file
[07/13/2022-14:12:56] [E] Parsing model failed
[07/13/2022-14:12:56] [E] Engine creation failed
[07/13/2022-14:12:56] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /home/cidi/TensorRT-7.1.3.4/bin/trtexec --explicitBatch --onnx=./best.onnx --saveEngine=best.trt --fp16

Is the ONNX-to-TensorRT conversion wrong? Have you met this problem?
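For context: the assertion -r <= axis && axis < r in importSlice simply says a Slice axis must be valid for the tensor rank r, where negative axes count from the end; an exporter/opset mismatch can emit an out-of-range axis. A sketch of the same range check in Python, for illustration only:

```python
# Mirrors the onnx-tensorrt importer's axis check for Slice:
# valid axes for a rank-r tensor lie in the half-open range [-r, r).

def axis_in_range(axis, rank):
    """Return True if `axis` is a valid (possibly negative) axis for `rank`."""
    return -rank <= axis < rank

print(axis_in_range(-1, 4))  # True: -1 addresses the last of 4 dimensions
print(axis_in_range(4, 4))   # False: axis 4 is out of range for a rank-4 tensor
```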

linghu8812 (Author) commented


I used torch==1.11 and onnx==1.12 to export the ONNX model.

gongliuqing123 commented


I got the same error when using torch==1.11 and onnx==1.12.


6master6 commented Aug 9, 2022

python export.py --weights ./weights/best_230.pt --simplify --grid

Converting op y.1_internal_tensor_assign_1 : _internal_op_tensor_inplace_copy
Adding op 'y.1_internal_tensor_assign_1' of type torch_tensor_assign
Adding op 'y.1_internal_tensor_assign_1_begin_0' of type const
Adding op 'y.1_internal_tensor_assign_1_end_0' of type const
Adding op 'y.1_internal_tensor_assign_1_stride_0' of type const
Adding op 'y.1_internal_tensor_assign_1_begin_mask_0' of type const
Adding op 'y.1_internal_tensor_assign_1_end_mask_0' of type const
Adding op 'y.1_internal_tensor_assign_1_squeeze_mask_0' of type const
Converting Frontend ==> MIL Ops: 93%|█▊| 1120/1209 [00:00<00:00, 1151.69 ops/s]
CoreML export failure: The updates tensor should have shape (1, 3, 80, 80, 36). Got (1, 3, 80, 80, 2)

torch == 1.11.0+cu102, onnx ==1.12.0
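For context on the shapes in that error: in a standard YOLOv5/YOLOv7-style head (this layout is an assumption), each anchor predicts num_classes + 5 values (box x, y, w, h, objectness, then class scores). The expected 36-channel tensor would therefore correspond to a 31-class custom model, while the 2-channel updates tensor is just the xy slice being written in place during export. A sketch of that arithmetic:

```python
# Per-anchor prediction length in a YOLO-style detection head:
# 4 box coordinates + 1 objectness score + num_classes class scores.
# (Assumption: standard YOLOv5/YOLOv7 head layout.)

def head_channels(num_classes):
    """Channels per anchor in the detection head output."""
    return num_classes + 5

print(head_channels(31))  # 36, matching the expected updates-tensor shape
print(head_channels(80))  # 85 for a standard 80-class COCO model
```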
