
Which version of TensorRT is usable for converting yolov5 model to tensorrt model and running it on docker container? #8480

Open
mcagricaliskan opened this issue Jul 5, 2022 · 18 comments
Labels
question Further information is requested TODO

Comments

@mcagricaliskan

Search before asking

Question

Hello Everyone,

These days I am trying to run YOLOv5 with TensorRT in Docker.

I didn't install TensorRT on my Ubuntu host because I want to run YOLOv5 in Docker. I tried the nvcr.io/nvidia/pytorch:22.06-py3 and ultralytics/yolov5 base images and successfully ran yolov5m in a Docker container, but I need more performance: I want to run a large number of video feeds through YOLOv5, so I decided to try to reach TensorRT YOLOv5 speed.

My graphic card is: RTX 3060

My Question:
Which TensorRT version is correct for converting yolov5 model to tensorrt model and running it on docker container?

Why am I asking this:
To use TensorRT, I tried to convert the YOLO model to a TensorRT model. I used the standard scripts from THIS COLAB in my Docker container. Every time I tried, I got the same error:

[07/05/2022-12:43:40] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:368: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/05/2022-12:43:40] [TRT] [E] parsers/onnx/ModelImporter.cpp:791: While parsing node number 203 [Resize -> "onnx::Concat_370"]:
[07/05/2022-12:43:40] [TRT] [E] parsers/onnx/ModelImporter.cpp:792: --- Begin node ---
[07/05/2022-12:43:40] [TRT] [E] parsers/onnx/ModelImporter.cpp:793: input: "onnx::Resize_365"
input: ""
input: "onnx::Resize_607"
output: "onnx::Concat_370"
name: "Resize_203"
op_type: "Resize"
attribute {
  name: "coordinate_transformation_mode"
  s: "asymmetric"
  type: STRING
}
attribute {
  name: "cubic_coeff_a"
  f: -0.75
  type: FLOAT
}
attribute {
  name: "mode"
  s: "nearest"
  type: STRING
}
attribute {
  name: "nearest_mode"
  s: "floor"
  type: STRING
}

[07/05/2022-12:43:40] [TRT] [E] parsers/onnx/ModelImporter.cpp:794: --- End node ---
[07/05/2022-12:43:40] [TRT] [E] parsers/onnx/ModelImporter.cpp:796: ERROR: parsers/onnx/builtin_op_importers.cpp:3526 In function importResize:
[8] Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"

TensorRT: export failure: failed to load ONNX file: yolov5m.onnx

TensorRT version of the container: TensorRT 8.2.5.1...

I used these commands:


python export.py --weights yolov5m.pt --include onnx
python export.py --weights yolov5m.pt --include engine --imgsz 640 640 --device 0
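Since the same model has to be built and run with matching library versions, a quick way to see what a given container ships is to print the versions before exporting. This is just a minimal sketch; `report_versions` is a made-up helper, not part of export.py:

```python
# Hypothetical helper (not part of YOLOv5): report the versions of the
# packages that must match between the export and inference environments.
def report_versions(modules=("tensorrt", "torch")):
    lines = []
    for name in modules:
        try:
            mod = __import__(name)
            lines.append(f"{name} {getattr(mod, '__version__', 'unknown')}")
        except ImportError:
            lines.append(f"{name} not installed")
    return lines

for line in report_versions():
    print(line)
```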

After my research, I tried to run the same commands on Google Colab. I used the YOLOv5 Tutorial notebook, and it worked: I got a yolov5m.engine file.

TensorRT version on Colab: TensorRT 8.4.1.5...

I thought I had succeeded. The conclusion I drew from this is that the reason I can't convert is related to the TensorRT version. Please correct me if I am wrong.

But when I tried to run the yolov5m.engine model in the yolov5 container, I got an error about the engine version:
NVIDIA GeForce RTX 3060

YOLOv5 🚀 v6.1-277-gfdc9d91 Python-3.8.13 torch-1.13.0a0+340c412 CUDA:0 (NVIDIA GeForce RTX 3060, 12046MiB)

Loading model/yolov5m.engine for TensorRT inference...

[07/05/2022-12:55:51] [TRT] [I] [MemUsageChange] Init CUDA: CPU +472, GPU +0, now: CPU 568, GPU 784 (MiB)
[07/05/2022-12:55:51] [TRT] [I] Loaded engine size: 84 MiB
[07/05/2022-12:55:51] [TRT] [E] 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 205, Serialized Engine Version: 213)
[07/05/2022-12:55:51] [TRT] [E] 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
Rise Model Excetion: 'NoneType' object has no attribute 'num_bindings'. Cache may be out of date, try `force_reload=True` or see https://github.com/ultralytics/yolov5/issues/36 for help.

The conclusion I draw from this is that I cannot run the yolov5.engine model with a lower TensorRT version, since I performed the conversion with a higher TensorRT version. Please correct me if I am wrong.
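That reading matches the log: the serialized engine embeds a version tag from the builder, and the runtime rejects any mismatch ("Current Version: 205, Serialized Engine Version: 213"). A minimal sketch of the rule; `engine_loadable` is an illustrative name, not a TensorRT API:

```python
# Illustration only: TensorRT engines are not portable across TensorRT
# versions (or GPU architectures); deserialization requires an exact match.
def engine_loadable(build_trt_version: str, runtime_trt_version: str) -> bool:
    return build_trt_version == runtime_trt_version

# Engine built on Colab with 8.4.1.5, container runtime is 8.2.5.1:
print(engine_loadable("8.4.1.5", "8.2.5.1"))  # False -> "Engine deserialization failed"
```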

For now, the nvcr.io/nvidia/pytorch:XX.XX-py3 and ultralytics/yolov5 images don't ship a TensorRT version higher than 8.2.5.1.
So for these reasons I think I need to find the correct version for converting and deploying, or some other way to run TensorRT. Please correct me if I am wrong.

Additional

No response

@mcagricaliskan mcagricaliskan added the question Further information is requested label Jul 5, 2022
@github-actions
Contributor

github-actions bot commented Jul 5, 2022

👋 Hello @mcagricaliskan, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

glenn-jocher commented Jul 5, 2022

@mcagricaliskan the Docker image comes with TensorRT preinstalled. All you need to do is export and then run any of the usage examples.

If you run in an environment without TRT, then YOLOv5 will attempt to autoinstall it. The latest version is 8, but TRT 7 may also work for some use cases.

!python export.py --weights yolov5s.pt --include engine --imgsz 640 --device 0  # export
!python detect.py --weights yolov5s.engine --imgsz 640 --device 0  # inference

@glenn-jocher
Member

Screen Shot 2022-07-05 at 3 38 48 PM

@mcagricaliskan
Author

@glenn-jocher Can you share your Dockerfile, or which Docker image you use as a base? I tried this with ultralytics/yolov5:latest and it did not work.

docker run --gpus all -it --rm --ipc=host ultralytics/yolov5:latest
python export.py --weights yolov5s.pt --include engine --imgsz 640 --device 0

I got the same error this way.

@glenn-jocher
Member

@mcagricaliskan you might need to update your Docker image:

  • sudo docker pull ultralytics/yolov5:latest to update your image

Dockerfile is here:

# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Builds ultralytics/yolov5:latest image on DockerHub https://hub.docker.com/r/ultralytics/yolov5
# Image is CUDA-optimized for YOLOv5 single/multi-GPU training and inference
# Start FROM NVIDIA PyTorch image https://ngc.nvidia.com/catalog/containers/nvidia:pytorch
FROM nvcr.io/nvidia/pytorch:22.05-py3
RUN rm -rf /opt/pytorch # remove 1.2GB dir
# Downloads to user config dir
ADD https://ultralytics.com/assets/Arial.ttf https://ultralytics.com/assets/Arial.Unicode.ttf /root/.config/Ultralytics/
# Install linux packages
RUN apt update && apt install --no-install-recommends -y zip htop screen libgl1-mesa-glx
# Install pip packages
COPY requirements.txt .
RUN python -m pip install --upgrade pip
RUN pip uninstall -y torch torchvision torchtext Pillow
RUN pip install --no-cache -r requirements.txt albumentations wandb gsutil notebook Pillow>=9.1.0 \
'opencv-python<4.6.0.66' \
--extra-index-url https://download.pytorch.org/whl/cu113
# Create working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy contents
COPY . /usr/src/app
RUN git clone https://github.com/ultralytics/yolov5 /usr/src/yolov5
# Set environment variables
ENV OMP_NUM_THREADS=8

@mcagricaliskan
Author

(screenshots)

It did not work.

nvcr.io/nvidia/pytorch:21.06-py3 works fine for export; I will try to run the model with it. It has TRT 7.2.3.4.

But why is the latest ultralytics/yolov5:latest not working on my workstation?

@glenn-jocher
Member

@mcagricaliskan thanks for the screenshots. I'll add a TODO to reproduce and debug this.

@glenn-jocher
Member

TODO: Investigate possible Docker TRT export bug

@mcagricaliskan
Author

mcagricaliskan commented Jul 5, 2022

@glenn-jocher I am trying with nvcr.io/nvidia/pytorch:21.06-py3; export is OK but detect does not work.

(screenshot)

Do I need to add a special argument to resolve this error?

It also does not work with results.print().

(screenshot)

@mcagricaliskan
Author

mcagricaliskan commented Jul 6, 2022

nvcr.io/nvidia/pytorch:21.11-py3 works fine

@glenn-jocher
Member

@mcagricaliskan I tested TensorRT in our current Docker image and everything works correctly. I'm unable to reproduce any issues.

Screenshot 2022-07-07 at 14 22 03

@glenn-jocher
Member

glenn-jocher commented Jul 7, 2022

@mcagricaliskan detect.py also works correctly. Removing TODO.

Screenshot 2022-07-07 at 14 26 10

We've created a few short guidelines below to help users provide what we need in order to start investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

  • Minimal – Use as little code as possible to produce the problem
  • Complete – Provide all parts someone else needs to reproduce the problem
  • Reproducible – Test the code you're about to provide to make sure it reproduces the problem

For Ultralytics to provide assistance your code should also be:

  • Current – Verify that your code is up-to-date with GitHub master, and if necessary git pull or git clone a new copy to ensure your problem has not already been solved in master.
  • Unmodified – Your problem must be reproducible using official YOLOv5 code without changes. Ultralytics does not provide support for custom code ⚠️.

If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

@glenn-jocher glenn-jocher removed the TODO label Jul 7, 2022
@mcagricaliskan
Author

mcagricaliskan commented Jul 18, 2022

@mcagricaliskan I tested TensorRT in our current Docker image and everything works correctly. I'm unable to reproduce any issues.

Screenshot 2022-07-07 at 14 22 03

@glenn-jocher Your PyTorch version is 1.11.0, but it is 1.13 in the latest version of ultralytics/yolov5:latest.

nvcr.io/nvidia/pytorch:21.11-py3 works because it contains PyTorch 1.11.

I tried on 3 different PCs with nvidia-docker2 installed, and all 3 fail. They have different GPUs (3060, 3060 Ti, 2080 Super).

(screenshots)

@mcagricaliskan
Author

Also, PyTorch 1.10 works well, but PyTorch 1.12–1.13 does not.
@glenn-jocher I think it is about the PyTorch version, can you check it?
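If that hypothesis is right, the working and failing containers can be separated by a simple version check. This is a sketch under the assumption (from the tests above) that torch >= 1.12 is what breaks TRT export; `torch_export_ok` is a made-up name for illustration:

```python
# Sketch under an assumption: TRT export works on torch < 1.12
# and breaks on torch 1.12/1.13 (per the container tests in this thread).
def torch_export_ok(torch_version: str) -> bool:
    # Handles build tags like "1.13.0a0+340c412" by keeping only major.minor.
    major, minor = (int(part) for part in torch_version.split(".")[:2])
    return (major, minor) < (1, 12)

for v in ("1.10.0", "1.11.0", "1.13.0a0+340c412"):
    print(v, torch_export_ok(v))  # 1.10 and 1.11 pass, 1.13 fails
```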

@zhuya1996

I also encountered this problem. I think it's the PyTorch version.

@glenn-jocher
Member

glenn-jocher commented Jul 23, 2022

@mcagricaliskan yes, it looks like you are correct: TRT export in Docker is broken. It appears to be a PyTorch-related issue, and downgrading appears to resolve it. I'm not sure what other solution there is, unfortunately.

EDIT: TODO: TRT export in Docker crashes due to torch 1.13

@anjineyulutv

I am also facing the same error when I run trtexec to convert from ONNX to TRT. Let me know how to resolve this, and in which version the error occurs.

@glenn-jocher
Member

@anjineyulutv I will investigate the issue further and get back to you with a solution. Thank you for your patience.
