Which version of TensorRT is usable for converting yolov5 model to tensorrt model and running it on docker container? #8480
Comments
👋 Hello @mcagricaliskan, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution. If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available. For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.
Requirements
Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit. |
@mcagricaliskan The Docker image comes with TensorRT preinstalled. All you need to do is export and then run any of the usage examples. If you run in an environment without TRT then YOLOv5 will attempt to autoinstall it. The latest version is 8, but TRT 7 may also work for some use cases.
!python export.py --weights yolov5s.pt --include engine --imgsz 640 --device 0  # export
!python detect.py --weights yolov5s.engine --imgsz 640 --device 0  # inference |
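As a quick sanity check before running the export command above, you can verify that the `tensorrt` Python package is actually importable inside your container (YOLOv5 falls back to attempting an auto-install if it is not). This is a minimal sketch; the `module_available` helper is my own, not part of YOLOv5:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if a Python module can be imported by this interpreter."""
    return importlib.util.find_spec(name) is not None

# Before running export.py --include engine, confirm TensorRT is present:
if module_available("tensorrt"):
    import tensorrt as trt
    print("TensorRT version:", trt.__version__)
else:
    print("tensorrt not importable; YOLOv5 will attempt to auto-install it")
```

Printing the version up front also makes it easy to compare the container's TensorRT against the one used for export, which matters later in this thread.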
@glenn-jocher Can you share your Dockerfile, or say which Docker image you use as a base? Because I tried this with
this approach and got the same error |
@mcagricaliskan you might need to update your Docker image:
Dockerfile is here: yolov5/utils/docker/Dockerfile (lines 1 to 33, commit fdc9d91)
|
@mcagricaliskan thanks for the screenshots. I'll add a TODO to reproduce and debug this. |
TODO: Investigate possible Docker TRT export bug |
@glenn-jocher I am trying with the same setup. Do I need to add a special arg to solve this error? It also does not work with results.print() |
|
@mcagricaliskan I tested TensorRT in our current Docker image and everything works correctly. I'm unable to reproduce any issues. |
@mcagricaliskan detect.py also works correctly. Removing TODO. We've created a few short guidelines below to help users provide what we need in order to start investigating a possible problem.
How to create a Minimal, Reproducible Example
When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:
For Ultralytics to provide assistance your code should also be:
If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem. Thank you! 😃 |
@glenn-jocher Your PyTorch version is 1.11.0, but it is 1.13 in the latest version of
I tried on 3 different PCs with nvidia-docker2 installed and all 3 fail; they also have different GPUs (3060, 3060 Ti, 2080 Super)
Also, PyTorch 1.10 works well, but PyTorch 1.12 and 1.13 do not
I also encountered this problem. I think it's the PyTorch version |
@mcagricaliskan yes, it looks like you are correct: TRT export in Docker is broken. It appears to be a PyTorch issue; downgrading appears to resolve it. I'm not sure what other solution there is, unfortunately. EDIT: TODO: TRT export in Docker crashes due to torch 1.13 |
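Based on the reports in this thread (exports succeed on torch 1.10/1.11 but crash on 1.12/1.13 inside Docker), one workaround is to guard the export behind a version check before invoking export.py. The threshold below is an assumption drawn from this thread, not an official compatibility matrix, and the helper name is illustrative:

```python
def torch_ok_for_trt_export(version: str) -> bool:
    """Heuristic from this thread: TRT export in Docker worked on torch <= 1.11
    and failed on 1.12/1.13. Compares only the major.minor part of the version
    string, ignoring any local suffix like "+cu117"."""
    major, minor = (int(p) for p in version.split("+")[0].split(".")[:2])
    return (major, minor) <= (1, 11)

# Example usage (torch.__version__ looks like "1.13.0+cu117"):
# import torch
# if not torch_ok_for_trt_export(torch.__version__):
#     raise SystemExit("Downgrade torch before running export.py --include engine")
```

If the upstream issue is fixed in a later torch release, the threshold should be revisited rather than hard-coded.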
I am also facing the same error when I run trtexec to convert from ONNX to TRT. Let me know how to resolve it for the version in which the error occurs |
@anjineyulutv I will investigate the issue further and get back to you with a solution for the version in which the error occurs. Thank you for your patience. |
Search before asking
Question
Hello Everyone,
These days I am trying to run YOLOv5 with TensorRT on Docker.
I didn't install TensorRT on my Ubuntu machine because I want to run YOLOv5 in Docker. I tried the nvcr.io/nvidia/pytorch:22.06-py3 and ultralytics/yolov5 base images. I successfully ran yolov5m in a Docker container, but I need more performance because I want to process a large number of video feeds with YOLOv5, so I decided to try to reach TensorRT YOLOv5 speed.
My graphic card is: RTX 3060
My Question:
Which TensorRT version is correct for converting yolov5 model to tensorrt model and running it on docker container?
Why am i asking these:
To use TensorRT, I tried to convert the YOLO model to a TensorRT model. I used the standard scripts from THIS COLAB in my Docker container. Every time I tried, I got the same error, which is:
TensorRT version of the container: TensorRT 8.2.5.1...
I used these commands
After my research, I tried to run these same commands on Google Colab. I used the YOLOv5 Tutorial and it worked; I got a yolov5m.engine file.
TensorRT version on Colab: TensorRT 8.4.1.5...
I thought I had succeeded. From this I concluded that the reason I can't convert is related to the TensorRT version. Please correct me if I am wrong.
But when I tried to run the yolov5m.engine model in the yolov5 container, I got an error about the engine version:
NVIDIA GeForce RTX 3060
The conclusion I have drawn from this is that I cannot run the yolov5.engine model on a lower TensorRT version, since I performed the conversion with a higher TensorRT version. Please correct me if I am wrong.
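This reading matches how TensorRT engines behave: a serialized .engine file is tied to the TensorRT build that produced it and is generally not portable across TRT versions. A small sketch of the check you would want, assuming you record the build version yourself (the function below is illustrative, not a TensorRT API):

```python
def engine_runtime_compatible(build_version: str, runtime_version: str) -> bool:
    """Serialized TensorRT engines are generally only safe to deserialize on the
    same TensorRT version they were built with; compare major.minor.patch."""
    parse = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return parse(build_version) == parse(runtime_version)

# e.g. an engine built on Colab (TRT 8.4.1.5) vs. the container runtime (TRT 8.2.5.1):
print(engine_runtime_compatible("8.4.1.5", "8.2.5.1"))  # False: versions mismatch
```

In practice this means the export and the inference should run in the same container (or at least against the same TensorRT version), rather than exporting on Colab and deploying elsewhere.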
For now, the nvcr.io/nvidia/pytorch:XX.XX-py3 and ultralytics/yolov5 images don't have a TensorRT version higher than 8.2.5.1.
So for these reasons I think I need to find the correct version for converting and deploying, or some other way to run TensorRT. Please correct me if I am wrong.
Additional
No response