detect.py 30 FPS on Jetson Xavier AGX is normal? #960
Hello @AmirSa7, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook, Docker Image, and Google Cloud Quickstart Guide for example environments. If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue; otherwise we cannot help you. If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients.
For more information please visit https://www.ultralytics.com. |
I was able to get around 100 FPS of inference on the AGX Xavier using TensorRT; 30 FPS is plausible with bare PyTorch. One caveat: the pre- and post-processing steps each add another ~10 ms, and they are the bottleneck for me. |
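Per-stage overheads like these are easy to confirm by timing each stage independently; a minimal sketch (the stage functions here are placeholders, not this repo's API):

```python
import time

def time_stages(preprocess, infer, postprocess, frame):
    """Run one frame through a detection pipeline, timing each stage in ms."""
    timings = {}

    t0 = time.perf_counter()
    x = preprocess(frame)
    timings["preprocess_ms"] = (time.perf_counter() - t0) * 1000

    t0 = time.perf_counter()
    y = infer(x)
    timings["inference_ms"] = (time.perf_counter() - t0) * 1000

    t0 = time.perf_counter()
    postprocess(y)
    timings["postprocess_ms"] = (time.perf_counter() - t0) * 1000

    return timings
```

If pre/post dominate, overlapping them with inference (e.g. a producer/consumer queue) usually helps more than a faster model.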
Thanks for sharing! |
@AmirSa7 My environment details are as follows: python 3.6.9 |
@MuhammadAsadJaved have you checked your GPU status while running? I am planning to run yolov5s on a Jetson TX2 but have not tried yet. I am here for the issues that I might face in the future. |
What changes did you make in the detect.py file? |
It seems the webcam cannot be opened, so please try running on a video first with `--source ./video.mp4`. If that works, then check whether your webcam itself is working.
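For context, the loaders behind detect.py treat a purely numeric `--source` as a webcam index and anything else as a file path or URL; a simplified sketch of that distinction (not the repo's exact code):

```python
def is_webcam_source(source: str) -> bool:
    """Rough version of the source check a detect.py-style loader makes:
    numeric strings (e.g. "0") are webcam indices; anything else is
    treated as a file path, glob, or URL."""
    return source.isnumeric()
```

So `--source 0` exercises the camera path, while `--source ./video.mp4` bypasses the camera entirely, which is why trying a video first isolates the problem.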
…On Sat, Nov 14, 2020 at 4:56 AM 123456789mojtaba wrote:
What changes did you make in the detect.py file?
When I run python3 detect.py --source 0 --img-size 640 on a Jetson TX2 it shows this error:
VIDEOIO ERROR: V4L: Unable to get camera FPS
What should I do?
Thanks
|
I have the same error on a Jetson Nano. |
So what should I change in detect.py? Model Summary: 232 layers, 7249215 parameters, 0 gradients |
When I run python3 detect.py on images and videos, it works fine on the Jetson Nano, but when I want to use the Jetson Nano camera for detection and run python3 detect.py --source 0, it doesn't recognize my camera. I think I need to add some commands to the detect.py file. Can you guide me?
Thank you
|
You can compile OpenCV from source with GStreamer support; then you can pass a pipeline string to the VideoCapture function for CSI camera support. See the https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.5.0_Jetson.sh script; you can modify the flags depending on your needs. |
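A sketch of such a pipeline string for `cv2.VideoCapture`; element names follow the commonly used `nvarguscamerasrc` pipeline for Jetson CSI cameras, and the exact caps and flip method depend on your sensor and JetPack version:

```python
def csi_pipeline(width=1280, height=720, fps=30, flip=0):
    """Build a GStreamer pipeline string for a Jetson CSI camera.

    Captures NVMM frames from the Argus camera source, converts them
    to BGR on the CPU, and hands them to OpenCV via appsink.
    """
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        f"video/x-raw, format=BGRx ! videoconvert ! "
        f"video/x-raw, format=BGR ! appsink"
    )

# Usage (requires OpenCV built with GStreamer support):
# import cv2
# cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
```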
No need to add any commands; just make sure your camera is working. You can test the camera online on any website (search Google for "test webcam"). |
Hi,
I'm sure my camera is working correctly; I checked it with other code and the webcam worked, but it doesn't work with detect.py.
|
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
@AmirSa7 Can you tell me how you installed YOLOv5 on the AGX? Mine didn't go too well, and I'd like to know what instructions you followed. |
@iandanielsooknanan you need to fetch a dedicated PyTorch build: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-8-0-now-available/72048 |
@maciej-autobon I have several questions about that. Those PyTorch builds use Python 3.6, while YOLOv5 requires Python 3.8. Did you just replace 3.6 with 3.8 in the instructions from the link? What did you do? Also, did you run into any compatibility problems installing SciPy? Did you use separate Python environments? Thank you for the response. |
@iandanielsooknanan I just checked and:
|
@iandanielsooknanan my scipy version is 1.5.4, BTW. |
@maciej-autobon Thanks for the help thus far. I am compiling the steps I should take before I attempt another install of YOLOv5 on the AGX (and not waste days fumbling around), and I'd appreciate it if you'd tell me whether I'm on the right track. (Please keep in mind I'm a noob with all of this.)
My initial thoughts are as follows: |
@iandanielsooknanan Gotcha. I think it might be most helpful if I write down the steps I'd follow:
BTW, I now see that I also commented out opencv; that's probably because JetPack ships with a version of it, so I wanted to save myself the trouble of building it. As you can see, scipy is left as-is, meaning it apparently wasn't an issue. Whether that's because an inherited version was sufficient or because it simply installed without a problem, I can't tell; I didn't do anything with it. Let me know if that helps. |
@maciej-autobon hey, thanks for the help this far. I installed the YOLOv5 dependencies (pytorch, torchvision, matplotlib, scipy, numpy, pandas) from the terminal, and they installed successfully. I cloned YOLOv5 and ran the requirements install, but I got this error: "AssertionError: Python 3.7.0 required by YOLOv5, but Python 3.6.9 is currently installed". Do I need to install a newer Python and redo the install of each dependency with that version? Did you have to do this? |
Huh, interesting. You got that AssertionError after running the requirements install? Can you paste the whole traceback here? |
@iandanielsooknanan I forgot to add: my commit hash is:
I don't have this assertion message, but I can see that in the latest version it's here and was added in this PR (16 days ago). I see three options for you:
What do you think? |
@iandanielsooknanan @maciej-autobon yes we have a Python version check in place here: Lines 111 to 118 in 407dc50
If 3.6.9 works well, can you please submit a PR to update this line to 3.6.9? Thanks! |
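The check under discussion boils down to comparing version tuples; a minimal sketch of the idea (not the repo's actual implementation):

```python
import sys

def check_python(minimum=(3, 6, 9)):
    """Raise AssertionError if the running interpreter is older than
    `minimum`, comparing release tuples numerically (so 3.10 > 3.9)."""
    current = tuple(sys.version_info[:3])
    assert current >= minimum, (
        f"Python {'.'.join(map(str, minimum))} required by YOLOv5, "
        f"but {'.'.join(map(str, current))} is currently installed"
    )
```

Relaxing the pinned minimum is then a one-tuple change, which is why a small PR suffices here.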
@iandanielsooknanan are you willing to take this PR? |
@maciej-autobon I explained myself poorly: I got the AssertionError after trying to run the detect.py script. As for the PR, @glenn-jocher @maciej-autobon I am concerned about whether this change is stable. How do we know that using it with Python 3.6.9 won't have some unforeseen conflicts? I only tested it with a webcam and video, after all. @maciej-autobon in this comment you said you used it with 3.6.9; I'm guessing you installed it before that check was added, which explains why it worked for you and not for me. Thanks for the help over the past week. You can take the PR if you'd like.
Now that I have the chance, I have a couple of questions for @glenn-jocher, if that is alright:
1) I am using YOLOv5 as a human detector: when a human is found in a webcam feed, I want the system to perform an action. To achieve this I inserted my own code into the Python script between lines 91 and 92. Lines 91 to 94 in bb13123
The code I inserted was from a Python library that lets you turn any Python script into a ROS node. I inserted the code there because I need the output as soon as a human is detected. I first tried storing the data in a text file, then in pickle files, but my other script (a ROS node) that awaits the results needs the data as soon as possible, and having two separate scripts read and write to a file at the same time did not turn out well. Was inserting my code there a good way of doing this? (This is my first run with machine learning, and I found the lack of resources on how to actually USE the results from a detector, rather than just look at the picture results, odd.) What do you recommend? |
Thank you all for the excellent help! |
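On acting on detections in the loop rather than via shared files: one pattern is to pass a callback that fires as soon as a person is detected in a frame. A sketch with hypothetical names and simplified detection tuples (these are not YOLOv5's actual data structures):

```python
def handle_detections(detections, class_names, on_person):
    """Fire `on_person(confidence)` on the first person detected in a frame.

    `detections` is a list of (class_id, confidence) pairs for one frame;
    `on_person` could publish a ROS message, toggle a GPIO pin, etc.
    Returns True if a person was found.
    """
    for class_id, conf in detections:
        if class_names[class_id] == "person":
            on_person(conf)
            return True  # react immediately; no file handoff needed
    return False
```

Calling a callback (or publishing a ROS message) directly from the detection loop avoids the race condition of two processes reading and writing the same file.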
Did you try TensorRT with your board? A Coral TPU gave me performance similar to your board's. |
❔Question
Hi,
I am working on an Nvidia Jetson Xavier AGX, using this repository as-is (latest version) with the yolov5s model.
When running python3 detect.py --source 0 --img-size 640, inference time is 0.040s.
When running python3 detect.py --source 0 --img-size 256, inference time is 0.025s.
I plan to try getting TensorRT to work on my system and hope to get much higher FPS, but for now I would be happy to know whether those results seem reasonable or I am missing something.
Additional context
OS: Ubuntu 18.04 (JetPack)
OPERATION MODE: MAXN (Maximum performance)
Python: 3.6
Pytorch: 1.6.0
CUDA: 10
Camera: 120 FPS camera (so I wish to achieve this inference FPS)
Thanks!
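For reference, those inference times convert directly into an FPS upper bound (ignoring pre- and post-processing):

```python
def fps_from_latency(seconds_per_frame: float) -> float:
    """Convert a per-frame inference time into frames per second."""
    return 1.0 / seconds_per_frame

fps_from_latency(0.040)  # ~25 FPS at --img-size 640
fps_from_latency(0.025)  # ~40 FPS at --img-size 256
```

So the reported ~30 FPS is consistent with the measured latencies, and reaching the camera's 120 FPS would require a per-frame budget of about 8.3 ms end to end.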