NVIDIA Jetson Nano Deployment - Ultralytics YOLOv8 Docs #2475
Replies: 11 comments 33 replies
-
How do I build an INT8 engine for a YOLOv8 model with imgsz=3040, for use in a DeepStream Python app on Jetson? And how much does accuracy drop when switching from FP32 to INT8?
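For reference, the precision of the engine DeepStream builds is selected in the nvinfer configuration file, and INT8 additionally needs a calibration table. Accuracy loss from FP32 to INT8 is usually small with a representative calibration set, but it varies by model and should be measured on your own validation data. A minimal sketch of the relevant config keys (file names are placeholders, and this assumes the DeepStream-Yolo custom-parser setup from the tutorial):

```ini
[property]
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_int8.engine
# calibration table generated during INT8 engine build
int8-calib-file=calib.table
# 0=FP32, 1=INT8, 2=FP16
network-mode=1
infer-dims=3;3040;3040
```

Note that imgsz=3040 is very large for a Jetson Nano; engine build time and memory use grow quickly with input size.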
-
Hello, I am trying to follow the tutorial with a new Jetson Nano and the SDK developer kit, and I got this error when installing the requirements with `pip3 install -r requirements.txt`: "WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip."
-
I got to the DeepStream configuration and cloned the repository, but the gen_wts_yoloV5.py file needed by the next step was not found.
-
Hi, good tutorial and it works, but how do I take it further? I want to use it in a Python app, not deepstream-app. The results from the inference need to be picked up, but how? This is good for benchmarking, but I don't know which online tutorial to trust and spend hours on to figure out how to get the output (box, class, and confidence) into a Python program. Any ideas or pointers? Regards, Magnus
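One common route (not covered by the tutorial itself): in a DeepStream Python app you attach a buffer probe to a pad downstream of nvinfer and walk the pyds metadata (`pyds.gst_buffer_get_nvds_batch_meta`, then each frame's `obj_meta_list`). The pyds calls only run on a Jetson with DeepStream installed, so the sketch below uses a hypothetical plain-Python stand-in for the values a probe would read from each `NvDsObjectMeta`:

```python
# Hypothetical stand-in for what a pad-probe callback collects from
# pyds.NvDsObjectMeta: (class_id, confidence, left, top, width, height).
# On a real Jetson these come from obj_meta.class_id, obj_meta.confidence,
# and obj_meta.rect_params.left / .top / .width / .height.

CLASS_NAMES = {0: 'person', 2: 'car'}  # subset of COCO ids, for illustration

def collect_detections(raw_objects, conf_threshold=0.25):
    """Turn raw metadata tuples into plain dicts a Python app can consume."""
    detections = []
    for class_id, confidence, left, top, width, height in raw_objects:
        if confidence < conf_threshold:
            continue  # drop low-confidence boxes
        detections.append({
            'class': CLASS_NAMES.get(class_id, str(class_id)),
            'confidence': round(confidence, 3),
            'box': (left, top, left + width, top + height),  # x1, y1, x2, y2
        })
    return detections

if __name__ == '__main__':
    sample = [(0, 0.91, 10.0, 20.0, 50.0, 100.0),
              (2, 0.12, 0.0, 0.0, 5.0, 5.0)]
    print(collect_detections(sample))
```

In the real app, `collect_detections` would be called from inside the probe for each frame, and the resulting list handed to the rest of your Python code.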
-
Hi, please help. After running git clone https://github.com/marcoslucianops/DeepStream-Yolo, gen_wts_yoloV5.py doesn't exist anywhere in this repo. It has export_yolov5.py, which just exports an ONNX model without creating the weights and cfg files. Was this intentional? If so, how do I create the cfg and weights files? Also, the ONNX that export_yolov5.py creates is INT64 instead of INT32.
-
Steps for successful setup for custom model deployment on NVIDIA Jetson Nano:
Edit the following lines in requirements.txt. Here you need to press i first to enter insert mode; press ESC, then type :wq to save and quit:

```
# torch>=1.8.0
# torchvision>=0.9.0
```

Note: torch and torchvision are excluded for now because they will be installed later.
...
...
-
Hey, I am getting ~5 FPS on Jetson Nano with FP16 quantization. 🫤 I would be grateful if someone could share benchmark results for YOLOv5 on the Jetson Nano for comparison.
-
The TensorRT acceleration and the speed of recognizing images are quite amazing. I would like to use TensorRT to accelerate real-time vehicle detection (real-time video) on the Jetson Nano, but I can't find any information about it. How should I start?
-
Is there a config_infer_secondary_yoloV8 for the YOLOv8-cls DeepStream integration?
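A secondary (classifier) nvinfer config mostly differs from the detector config in a few keys that switch it into classifier mode and make it operate on the primary detector's objects. A hedged sketch of those keys (file names are placeholders; any output-parsing settings for the yolov8-cls head are not shown and depend on your export):

```ini
[property]
onnx-file=yolov8s-cls.onnx
# 1 = classifier network (0 = detector)
network-type=1
# 2 = secondary mode: operate on objects from the primary GIE
process-mode=2
operate-on-gie-id=1
classifier-threshold=0.5
```

Check the stock secondary configs shipped with the DeepStream SDK samples for the full set of keys expected in classifier mode.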
-
Can you explain a bit more? Actually, I have built an engine (a yolov8.engine file, using TensorRT) and now I want to run YOLOv8 inference with it in three different ways: (i) a single picture, (ii) a batch of pictures, (iii) using a camera. Can you please help me with this?
…On Mon, 8 Jul 2024 at 17:32, Paula Derrenger wrote:
Hi @MadhavKrishna1,
Certainly! Below is an example of how you can integrate the TensorRT
engine with a DeepStream Python app. This example assumes you have already
created the TensorRT engine as described in the previous steps.
1. **Install DeepStream Python Bindings**: Ensure you have the DeepStream Python bindings installed. You can follow the installation instructions from the [DeepStream Python Apps](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps) repository.
2. **DeepStream Python App Integration**: Here is a basic example of how to load the TensorRT engine and use it for inference within a DeepStream pipeline.
```python
import sys
import pyds  # DeepStream Python bindings (needed later for metadata access)
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject

def bus_call(bus, message, loop):
    # Stop the main loop on end-of-stream or error
    t = message.type
    if t == Gst.MessageType.EOS:
        loop.quit()
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write(f'Error: {err}: {debug}\n')
        loop.quit()
    return True

# Initialize GStreamer
Gst.init(None)

# Create the pipeline
pipeline = Gst.Pipeline()

# Create elements
source = Gst.ElementFactory.make('filesrc', 'file-source')
h264parser = Gst.ElementFactory.make('h264parse', 'h264-parser')
decoder = Gst.ElementFactory.make('nvv4l2decoder', 'nvv4l2-decoder')
streammux = Gst.ElementFactory.make('nvstreammux', 'Stream-muxer')
pgie = Gst.ElementFactory.make('nvinfer', 'primary-inference')
nvvidconv = Gst.ElementFactory.make('nvvideoconvert', 'nvvideo-converter')
nvosd = Gst.ElementFactory.make('nvdsosd', 'nv-onscreendisplay')
sink = Gst.ElementFactory.make('nveglglessink', 'nvvideo-renderer')

# Set properties
source.set_property('location', 'path/to/your/video.mp4')
streammux.set_property('width', 3040)
streammux.set_property('height', 3040)
streammux.set_property('batch-size', 1)
streammux.set_property('batched-push-timeout', 4000000)
pgie.set_property('config-file-path', 'path/to/config_infer_primary_yoloV8.txt')

# Add elements to the pipeline
for elem in (source, h264parser, decoder, streammux,
             pgie, nvvidconv, nvosd, sink):
    pipeline.add(elem)

# Link the elements
source.link(h264parser)
h264parser.link(decoder)

# Link the decoder to the streammux
sinkpad = streammux.get_request_pad('sink_0')
srcpad = decoder.get_static_pad('src')
srcpad.link(sinkpad)

# Link the remaining elements
streammux.link(pgie)
pgie.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(sink)

# Create an event loop and feed GStreamer bus messages to it
loop = GObject.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect('message', bus_call, loop)

# Start the pipeline
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except KeyboardInterrupt:
    pass

# Cleanup
pipeline.set_state(Gst.State.NULL)
```
This script sets up a basic DeepStream pipeline that reads a video file,
decodes it, performs inference using the TensorRT engine, and displays the
results. Make sure to adjust the paths and properties according to your
setup.
For more detailed information, you can refer to the DeepStream SDK
documentation
<https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html>.
If you encounter any issues, please ensure you are using the latest
versions of the packages and provide a reproducible example if the problem
persists. You can find more information on creating a minimum reproducible
example here
<https://docs.ultralytics.com/help/minimum_reproducible_example>.
Happy coding! 😊
-
NVIDIA Jetson Nano Deployment - Ultralytics YOLOv8 Docs
📚 This guide explains how to deploy a trained model onto the NVIDIA Jetson platform and perform inference using TensorRT and the DeepStream SDK. Here we use TensorRT to maximize inference performance on the Jetson platform.
UPDATED 18 November 2022.
https://docs.ultralytics.com/yolov5/tutorials/running_on_jetson_nano/