
detect.py 30 FPS on Jetson Xavier AGX is normal? #960

Closed
AmirSa7 opened this issue Sep 13, 2020 · 28 comments
Labels
question Further information is requested Stale

Comments

@AmirSa7

AmirSa7 commented Sep 13, 2020

❔Question

Hi,
I am working on Nvidia Jetson Xavier AGX.
I am using this repository as-is (latest version) with the yolov5s model.
When running python3 detect.py --source 0 --img-size 640 the inference time is 0.040s.
When running python3 detect.py --source 0 --img-size 256 the inference time is 0.025s.

I plan to try getting TensorRT to work on my system, and hope to get much higher FPS, but for now, I would be happy to know whether those results seem reasonable or I am missing something.

Additional context

OS: Ubuntu 18.04 (JetPack)
OPERATION MODE: MAXN (Maximum performance)
Python: 3.6
Pytorch: 1.6.0
CUDA: 10
Camera: 120 FPS camera (so I wish to achieve this inference FPS)

Thanks!

@AmirSa7 AmirSa7 added the question Further information is requested label Sep 13, 2020
@github-actions
Contributor

github-actions bot commented Sep 13, 2020

Hello @AmirSa7, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook Open In Colab, Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@batrlatom
Contributor

batrlatom commented Sep 13, 2020

I was able to get around 100 FPS of inference on the AGX Xavier using TensorRT; 30 FPS is reasonable with bare PyTorch. But there is one small problem: the pre- and post-processing steps add another ~10 ms each, and they are the bottleneck for me.

@AmirSa7
Author

AmirSa7 commented Sep 15, 2020

I was able to get around 100 FPS of inference on the AGX Xavier using TensorRT; 30 FPS is reasonable with bare PyTorch. But there is one small problem: the pre- and post-processing steps add another ~10 ms each, and they are the bottleneck for me.

Thanks for sharing!
Trying this TensorRT implementation (https://github.com/wang-xinyu/tensorrtx), I get the same results as you do.
A 320x320 input takes 0.015 s (66 FPS), but the total time including processing is 0.03 s (33 FPS).
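The relationship between those per-stage timings and end-to-end FPS is simple arithmetic; here is a minimal sketch (the 15 ms figures are the ones reported above, and `end_to_end_fps` is just an illustrative helper, not part of any repo):

```python
# Sequential per-frame stages: end-to-end FPS is the reciprocal of the
# summed latencies, which is why 66 FPS of pure inference collapses to
# ~33 FPS once pre/post-processing is included.

def end_to_end_fps(stage_latencies_s):
    """Throughput when the given stages run back-to-back per frame."""
    return 1.0 / sum(stage_latencies_s)

inference_only = end_to_end_fps([0.015])          # ~66 FPS
with_processing = end_to_end_fps([0.015, 0.015])  # ~33 FPS
```

This is also why overlapping pre/post-processing with inference (pipelining) can recover much of the lost throughput.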

@MuhammadAsadJaved

@AmirSa7
I am unable to use it on the Xavier NX (without TensorRT). Can you please explain the steps you followed?

My environment details are as follows:

python 3.6.9
pytorch 1.5.0
torchvision 0.6.0
opencv 3.4.0

I got the following error (screenshot attached: "Screenshot from 2020-10-13 17-14-13").

@berkantay

berkantay commented Nov 10, 2020

@MuhammadAsadJaved have you checked your GPU status using watch -n 0.1 nvidia-smi? CUDA may be out of memory.

I am planning to run yolov5s on a Jetson TX2 but have not tried it yet. I am here to learn about the issues I might face in the future.

@123456789mojtaba

Question

Hi,
I am working on Nvidia Jetson Xavier AGX.
I am using this repository as-is, latest version, with the yolov5s model.
When running python3 detect.py --source 0 --img-size 640 the inference time is 0.040s.
When running python3 detect.py --source 0 --img-size 256 the inference time is 0.025s.

I plan to try getting TensorRT to work on my system, and hope to get much higher FPS, but for now, I would be happy to know whether those results seem reasonable or I am missing something.

Additional context

OS: Ubuntu 18.04 (JetPack)
OPERATION MODE: MAXN (Maximum performance)
Python: 3.6
Pytorch: 1.6.0
CUDA: 10
Camera: 120 FPS camera (so I wish to achieve this inference FPS)

Thanks!

What changes did you make in the detect.py file?
When I run python3 detect.py --source 0 --img-size 640 on the Jetson TX2 it shows this error:
VIDEOIO ERROR: V4L: Unable to get camera FPS
What should I do?
Thanks

@MuhammadAsadJaved

MuhammadAsadJaved commented Nov 14, 2020 via email

@MiroslavDirk

Hi,
I am working on an Nvidia Jetson Xavier AGX.
I am using this repository as-is, latest version, with the yolov5s model.
When running python3 detect.py --source 0 --img-size 640 the inference time is 0.040s.
When running python3 detect.py --source 0 --img-size 256 the inference time is 0.025s.
I plan to try getting TensorRT to work on my system, and hope to get much higher FPS, but for now, I would be happy to know whether those results seem reasonable or I am missing something.

Additional context

OS: Ubuntu 18.04 (JetPack)
OPERATION MODE: MAXN (Maximum performance)
Python: 3.6
Pytorch: 1.6.0
CUDA: 10
Camera: 120 FPS camera (so I wish to achieve this inference FPS)
Thanks!

What changes did you make in the detect.py file?
When I run python3 detect.py --source 0 --img-size 640 on the Jetson TX2 it shows this error:
VIDEOIO ERROR: V4L: Unable to get camera FPS
What should I do?
Thanks

I have the same error on a Jetson Nano.

@MiroslavDirk

So what should I change in detect.py?
It shows this error on the Jetson Nano when I run python3 detect.py --source 0 --img-size 416,
but python3 detect.py --source ./video.mp4 --img-size 416 works fine.

Model Summary: 232 layers, 7249215 parameters, 0 gradients
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
VIDEOIO ERROR: V4L: Unable to get camera FPS
1/1: 1... success (3264x2464 at 99.00 FPS).

@123456789mojtaba

123456789mojtaba commented Dec 4, 2020 via email

@berkantay

berkantay commented Dec 4, 2020

When I run python3 detect.py on images and videos it works fine on the Jetson Nano, but when I want to use the Jetson Nano camera for detection I run python3 detect.py --source 0 and it doesn't recognize my camera. I think I need to add some commands to the detect.py file. Can you guide me? Thank you.


You can compile OpenCV from source with GStreamer support; then you can pass the pipeline string to the VideoCapture function for CSI camera support.

See the https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.5.0_Jetson.sh script; you can modify the flags depending on your needs.
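For reference, the pipeline string mentioned above usually follows the nvarguscamerasrc pattern for Jetson CSI cameras. A rough sketch (the helper name and the default resolution/framerate are assumptions; adjust them for your sensor):

```python
def gstreamer_pipeline(width=1280, height=720, fps=30, flip=0):
    """Build a GStreamer pipeline string for a Jetson CSI camera (sketch)."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"
    )

# With an OpenCV build that has GStreamer support:
# import cv2
# cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
```

The `nvvidconv` element does the NVMM-to-CPU memory copy and color conversion, which is why a GStreamer-enabled OpenCV build is required.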

@MuhammadAsadJaved

When I run python3 detect.py on images and videos it works fine on the Jetson Nano, but when I want to use the Jetson Nano camera for detection I run python3 detect.py --source 0 and it doesn't recognize my camera. I think I need to add some commands to the detect.py file. Can you guide me? Thank you.


No need to add any command; just make sure your camera is working. You can test your webcam online on any website (google "test webcam").

@123456789mojtaba

123456789mojtaba commented Dec 4, 2020 via email

@github-actions
Contributor

github-actions bot commented Jan 4, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@iandanielsooknanan

@AmirSa7 Can you tell me how you installed YOLOv5 on the AGX? Mine didn't go too well and I'd like to know which instructions you followed.

@maciej-autobon

@iandanielsooknanan you need to fetch a dedicated PyTorch build: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-8-0-now-available/72048
Other than that, I didn't face any issues.

@iandanielsooknanan

@maciej-autobon I have several questions about that. Those PyTorch builds use Python 3.6, while the requirement for YOLOv5 is Python 3.8. Did you just replace 3.6 with 3.8 in those instructions from the link? What did you do?

Also, did you not face any compatibility problems with installing Scipy? Did you use separate python environments? Thank you for the response.

@maciej-autobon

@iandanielsooknanan I just checked and:

  1. yes, I have a virtualenv for that
  2. it uses Python 3.6.9
  3. I can't recall problems installing scipy, TBH

@maciej-autobon

@iandanielsooknanan my scipy version is 1.5.4, BTW.

@iandanielsooknanan

@maciej-autobon Thanks for the help thus far. I am compiling the steps I should take before I attempt another install of YOLOv5 on the AGX (and not waste days fumbling around), and I would appreciate it if you'd tell me whether I'm on the right track. (Please keep in mind I'm a noob with all of this.)
On my first try I got errors when installing some of the requirements; for example, with Torch it said it didn't find any suitable versions. Therefore I'm thinking of installing those separately (hopefully this shouldn't be a problem).
So:

  1. Did you install everything one by one, just run the requirements file, or a mix of both?
  2. Which directory should I be in when I do this?
  3. When I'm installing PyTorch using the link you sent, do I simply follow the Python 3.6 instructions as-is? The Wheel section, not the build section, right?

My initial thoughts are as follows:
Install PyTorch, run the requirements install, if any errors show up, install that component on its own and retry. What do you think?

@maciej-autobon

maciej-autobon commented May 24, 2021

@iandanielsooknanan Gotcha.

I think it might be most helpful if I write down the steps I'd follow:

  1. Install torch + torchvision globally (not in a virtualenv of any kind) following the instructions I pasted previously. If I remember correctly, it will also require a particular version of numpy, so keep that in mind.
  2. Install matplotlib globally since pip install fails for that. I'm not sure but that's either sudo apt-get install python-matplotlib or sudo apt-get install python3-matplotlib (I think it might be the former).
  3. Create a virtualenv that uses those globally installed versions of: torch, torchvision, numpy, and matplotlib. Something along the lines of python3 -m venv --system-site-packages ~/virtualenvs/yolov5 followed by source ~/virtualenvs/yolov5/bin/activate.
  4. Comment out the lines of the requirements.txt in the yolov5 repo that we already have. I ran git diff requirements.txt in my copy of the repo and this is what I saw:
diff --git a/requirements.txt b/requirements.txt
index fd187eb..f98bd2e 100755
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,14 +1,14 @@
 # pip install -r requirements.txt

 # base ----------------------------------------
-matplotlib>=3.2.2
-numpy>=1.18.5
-opencv-python>=4.1.2
+# matplotlib>=3.2.2
+# numpy>=1.18.5
+# opencv-python>=4.1.2
 Pillow
 PyYAML>=5.3.1
 scipy>=1.4.1
-torch>=1.7.0
-torchvision>=0.8.1
+# torch>=1.7.0
+# torchvision>=0.8.1
 tqdm>=4.41.0

 # logging -------------------------------------
@@ -16,8 +16,8 @@ tensorboard>=2.4.1
 # wandb

 # plotting ------------------------------------
-seaborn>=0.11.0
-pandas
+# seaborn>=0.11.0
+# pandas

 # export --------------------------------------
 # coremltools>=4.1

BTW, now I see that I also commented out opencv -- that's probably because JetPack ships with a version of it so that I just wanted to save myself the trouble of building it myself.

As you can see, scipy is left "as-is", meaning: apparently that wasn't an issue. Whether because an inherited version was sufficient or because it just installed without an issue -- I can't tell. But I didn't do anything with it, as you can see.

Let me know if that helps.

@iandanielsooknanan

@maciej-autobon Hey, thanks for the help thus far. I installed the YOLOv5 dependencies such as PyTorch, torchvision, matplotlib, scipy, numpy, and pandas using the terminal, and they installed successfully.

I cloned YOLOv5 and ran the requirements install, but I got this error: "AssertionError: Python 3.7.0 required by YOLOv5, but Python 3.6.9 is currently installed"

Do I install a newer Python and redo the install of each dependency using that version? Did you have to do this?

@maciej-autobon

Huh, interesting. You got that AssertionError after calling pip install -r requirements.txt?

Can you paste here the whole traceback?

@maciej-autobon

@iandanielsooknanan I forgot to add: my commit hash is:

commit 14d2d2d75fff27a9deb183c9cb76f107f43ca3ad (HEAD -> master, origin/master, origin/HEAD)
Author: Glenn Jocher <glenn.jocher@ultralytics.com>
Date:   Thu Apr 22 20:27:32 2021 +0200

    Update google_utils.py (#2900)

I don't have this assertion message, but I can see that in the latest version it's here and was added in this PR (16 days ago).

I see three options for you:

  1. See what happens if you remove this check and just use Python 3.6.9 (my money's on it failing, but who knows?)
  2. Revert to an earlier version and, once PyTorch for a higher Python version is released, use that one
  3. Use C++ for inference (I use this repo) since it's faster and less resource-consuming (but much more annoying to use); it just uses an exported .pt file, and no Python version will stand in your way

What do you think?

@glenn-jocher
Member

@iandanielsooknanan @maciej-autobon yes we have a Python version check in place here:

yolov5/utils/general.py

Lines 111 to 118 in 407dc50

def check_python(minimum='3.7.0', required=True):
    # Check current python version vs. required python version
    current = platform.python_version()
    result = pkg.parse_version(current) >= pkg.parse_version(minimum)
    if required:
        assert result, f'Python {minimum} required by YOLOv5, but Python {current} is currently installed'
    return result

If 3.6.9 works well, can you please submit a PR to update this line to 3.6.9? Thanks!
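For anyone puzzled by the failure, the version comparison itself is easy to reproduce outside YOLOv5; a toy sketch using plain tuple comparison instead of pkg.parse_version (the helper names are made up for illustration):

```python
def version_tuple(v):
    """'3.6.9' -> (3, 6, 9) so versions compare numerically."""
    return tuple(int(x) for x in v.split('.'))

def python_ok(current, minimum='3.7.0'):
    """True if `current` satisfies the minimum version."""
    return version_tuple(current) >= version_tuple(minimum)

# python_ok('3.6.9') is False, which is exactly the AssertionError case above.
# Numeric tuples matter: '3.10.0' < '3.7.0' as strings, but (3, 10, 0) >= (3, 7, 0).
```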

@maciej-autobon

@iandanielsooknanan are you willing to take this PR?

@iandanielsooknanan

iandanielsooknanan commented May 27, 2021

@maciej-autobon I explained myself poorly: I got the AssertionError after trying to run the detect.py script.
However, after changing the Python version requirement, YOLOv5 worked!!!
I tested it with a video and a live webcam feed.

Now for the PR, @glenn-jocher @maciej-autobon: I am concerned about whether this change is stable. How do we know using it with Python 3.6.9 won't have some unforeseen conflicts? I only tested it with a webcam and a video, after all. @maciej-autobon In this comment you said you used it with 3.6.9; I'm guessing you installed it before that check was added, which explains why it worked for you and not me. Thanks for the help over the past week. You can take the PR if you'd like.

Now that I have the chance, I have a couple of questions for @glenn-jocher , if that is alright:

1) I am using YOLOv5 as a human detector: when a human is found in a webcam feed I want the system to perform an action. To achieve this I inserted my own code into the Python script between lines 91 and 92.

yolov5/detect.py

Lines 91 to 94 in bb13123

gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
imc = im0.copy() if opt.save_crop else im0  # for opt.save_crop
if len(det):
    # Rescale boxes from img_size to im0 size

The code I inserted was from a Python library that lets you turn any Python script into a ROS node.
I inserted the code there because I need this output as soon as a human is detected. I first tried storing the data in a text file, then using pickle files, but my other script (a ROS node) that is awaiting the results needs the data as soon as possible, and having two separate scripts read and write the same file at the same time did not turn out well. Was inserting my code there a good way of doing this? (This is my first run with machine learning, and I found the lack of resources on how to actually USE the results from a detector, rather than just look at the picture results, odd.) What do you recommend?
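For what it's worth, the kind of hook described above can be sketched as a small function that scans each frame's detections for class 0 (person) and fires a callback; the row layout [x1, y1, x2, y2, conf, cls] matches YOLOv5's det tensor, but the function and callback names here are hypothetical:

```python
PERSON_CLASS = 0  # class index 0 is 'person' in the COCO-trained models

def on_person(box, conf):
    """Hypothetical hook; in the setup above this would publish a ROS message."""
    print(f"person at {box} (conf {conf:.2f})")

def handle_detections(det_rows, conf_thres=0.25, callback=on_person):
    """det_rows: iterable of [x1, y1, x2, y2, conf, cls] rows for one frame."""
    hits = 0
    for x1, y1, x2, y2, conf, cls in det_rows:
        if int(cls) == PERSON_CLASS and conf >= conf_thres:
            callback((x1, y1, x2, y2), conf)
            hits += 1
    return hits
```

Calling something like this once per frame, right after the boxes are rescaled, avoids the file-based handoff entirely.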

  2. When running YOLOv5 with the webcam as a source, on a few occasions when exiting the script with Ctrl+C I get an error, and on rare occasions it actually gets stuck and takes a while to exit; sometimes the webcam stream window remains open, the image is frozen, and PC resources stay in use. What is the recommended way of ending/exiting the detection process so that no issues occur?

  3. To only show results for humans detected, you said we can set --classes 0, but does this only limit the results shown, or does it actually reduce the amount of computation? Basically, I'd like all resources to focus on humans; can you tell me how to do this?

  4. While using --nosave prevents results from being written, empty folders are still created every time a detection is run. What is the recommended way to prevent this folder creation? I cut out some code, but I am a novice after all.

  5. I need to look out for humans from several camera streams on the same PC. Is it recommended to run several scripts at the same time, specifying a different camera source for each, or is there a way to get this done with one script?

Thank you all for the excellent help!

@taloot

taloot commented Jul 4, 2021

Question

Hi,
I am working on Nvidia Jetson Xavier AGX.
I am using this repository as-is, latest version, with the yolov5s model.
When running python3 detect.py --source 0 --img-size 640 the inference time is 0.040s.
When running python3 detect.py --source 0 --img-size 256 the inference time is 0.025s.
I plan to try getting TensorRT to work on my system, and hope to get much higher FPS, but for now, I would be happy to know whether those results seem reasonable or I am missing something.

Additional context

OS: Ubuntu 18.04 (JetPack)
OPERATION MODE: MAXN (Maximum performance)
Python: 3.6
Pytorch: 1.6.0
CUDA: 10
Camera: 120 FPS camera (so I wish to achieve this inference FPS)
Thanks!

What changes did you make in the detect.py file?
When I run python3 detect.py --source 0 --img-size 640 on the Jetson TX2 it shows this error:
VIDEOIO ERROR: V4L: Unable to get camera FPS
What should I do?
Thanks

Did you try TensorRT with your board? A Coral TPU gave me similar performance to your board.
