
Engine val to validate the trt engine file #5667

Closed
shihanyu wants to merge 6 commits

Conversation


@shihanyu commented Nov 16, 2021

Modified val.py mainly so that it can validate a TensorRT engine file:
$ python path/to/val.py --data xxxxxx.yaml --img 640 --engine_library xxxx.so --engine_path xxxx.engine

(The last code I pushed failed because I forgot to add pycuda to requirements.txt.
I think it should work this time. Hope it passes.)

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Enhanced YOLOv5 with TensorRT optimization support for faster inference.

📊 Key Changes

  • Added tensorrt and pycuda to requirements.txt, introducing new dependencies for TensorRT inference.
  • Introduced yolov5_trt.py, a new utility to support YOLOv5 inference using NVIDIA's TensorRT (a minimal sketch of this kind of utility follows the summary below).
  • Modified val.py to support validation of TensorRT-optimized models.

🎯 Purpose & Impact

  • 💨 The integration of TensorRT aims to optimize inference times, resulting in faster performance particularly on NVIDIA GPUs.
  • 🔧 The changes allow users to validate and benchmark YOLOv5 models after converting them to TensorRT's optimized format.
  • 🤖 This update enables hybrid usage of PyTorch and TensorRT, enriching the deployment options for YOLOv5 and potentially broadening the framework's application scope, especially in production environments where inference speed is crucial.
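For context, here is a minimal sketch of what a TensorRT validation helper along these lines typically does, assuming the tensorrt and pycuda dependencies named above; the function names and buffer handling are illustrative only, not the code from this PR.

```python
# Illustrative sketch only (not the PR's yolov5_trt.py): deserialize a .engine
# file and run one synchronous inference pass with TensorRT + pycuda.
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates the CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)


def load_engine(engine_path):
    """Deserialize a serialized TensorRT engine (.engine) from disk."""
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())


def infer(engine, image):
    """Copy one preprocessed image to the GPU, execute the engine, return host outputs."""
    context = engine.create_execution_context()
    bindings, outputs = [], []
    for name in engine:  # iterate over binding names
        dtype = trt.nptype(engine.get_binding_dtype(name))
        size = trt.volume(engine.get_binding_shape(name))
        device_mem = cuda.mem_alloc(size * np.dtype(dtype).itemsize)
        bindings.append(int(device_mem))
        if engine.binding_is_input(name):
            cuda.memcpy_htod(device_mem, np.ascontiguousarray(image, dtype=dtype))
        else:
            outputs.append((np.empty(size, dtype=dtype), device_mem))
    context.execute_v2(bindings)  # synchronous execution
    for host, device in outputs:
        cuda.memcpy_dtoh(host, device)
    return [host for host, _ in outputs]
```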


@glenn-jocher (Member) commented Nov 16, 2021

@shihanyu see #5278 (comment); we cannot add TensorRT-specific packages to requirements.txt, as the majority of users will not need them and do not want them, and may not even have CUDA installed on their system (like me, working on a MacBook).


@glenn-jocher (Member) commented Nov 16, 2021

@shihanyu also see my other comments in #5278. Any modifications we might consider need to be entirely contained within the DetectMultiBackend() class.

Lastly, the number of lines here is far too high. In my DetectMultiBackend PR #5549 I only added 35 lines of code to the repo while adding val and detect capability for a multitude of formats (TorchScript, CoreML, ONNX, TensorFlow, TensorFlow Lite, etc.). In contrast, this PR proposes +450 lines of code simply to add support for 1 additional format.
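For reference, a rough sketch of what "entirely contained within DetectMultiBackend()" could look like, assuming a weights path ending in .engine selects the TensorRT path; the class, attribute names, and dispatch logic here are hypothetical, not the actual models/common.py implementation.

```python
# Hypothetical sketch of engine loading folded into a DetectMultiBackend-style
# constructor; the real YOLOv5 class handles many more formats and details.
import tensorrt as trt


class DetectMultiBackendSketch:
    def __init__(self, weights="yolov5s.engine"):
        self.trt_mode = str(weights).endswith(".engine")
        if self.trt_mode:
            logger = trt.Logger(trt.Logger.INFO)
            with open(weights, "rb") as f, trt.Runtime(logger) as runtime:
                self.engine = runtime.deserialize_cuda_engine(f.read())
            self.context = self.engine.create_execution_context()
        else:
            ...  # PyTorch / ONNX / TFLite / CoreML branches would go here

    def forward(self, im):
        if self.trt_mode:
            ...  # bind input/output buffers and call self.context.execute_v2()
```

Keeping the format dispatch in one constructor like this is what keeps the line count small: val.py and detect.py stay unchanged and only branch on the backend at model-load time.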


@glenn-jocher (Member) commented Nov 22, 2021

@shihanyu closing this PR to focus on #5699. Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐
