
Yolov5 crashes with RTSP stream analysis #2226

Closed
philippneugebauer opened this issue Feb 16, 2021 · 34 comments · Fixed by #2232 or #2231
Labels
bug Something isn't working

Comments

@philippneugebauer

🐛 Bug

If I try to analyze an RTSP stream with YOLOv5 in a Docker container, it crashes, regardless of whether I use the latest or the v4.0 image.

To Reproduce (REQUIRED)

Input:

docker run --rm -it -e RTSP_PROTOCOLS=tcp -p 8554:8554 aler9/rtsp-simple-server

ffmpeg -i video.mp4 -s 640x480 -c:v libx264 -f rtsp -rtsp_transport tcp rtsp://localhost:8554/analysis

docker run -it ultralytics/yolov5:latest

python3 detect.py --source rtsp://host.docker.internal:8554/analysis --weights yolov5s.pt --conf 0.25 --save-txt

Output:

Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', project='runs/detect', save_conf=False, save_txt=True, source='rtsp://host.docker.internal:8554/analysis', update=False, view_img=False, weights=['yolov5s.pt'])
/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at  ../c10/cuda/CUDAFunctions.cpp:100.)
  return torch._C._cuda_getDeviceCount() > 0
YOLOv5 v4.0-80-gf8464b4 torch 1.8.0a0+1606899 CPU

Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients, 17.0 GFLOPS
[h264 @ 0x55e674656100] co located POCs unavailable
[h264 @ 0x55e674656100] mmco: unref short failure
[h264 @ 0x55e675117cc0] co located POCs unavailable
[h264 @ 0x55e674dbb300] mmco: unref short failure
[h264 @ 0x55e674ec09c0] co located POCs unavailable
1/1: rtsp://host.docker.internal:8554/analysis...  success (640x480 at 30.00 FPS).

0: 480x640 13 persons, 1 tennis racket, Done. (2.089s)
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/opt/conda/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb.

Aborted

Expected behavior

The analysis runs without crashing.

Environment

  • OS: YOLOv5 Docker container on macOS Catalina
  • GPU: none
@philippneugebauer philippneugebauer added the bug Something isn't working label Feb 16, 2021
@github-actions
Contributor

github-actions bot commented Feb 16, 2021

👋 Hello @philippneugebauer, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

@philippneugebauer you may want to verify that publicly accessible streams are working for you, before trying custom container connections. This works locally for me for example:

python detect.py --source rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov

Be aware this displays a live preview of detection results. You may need to disable the preview component in non-desktop environments by setting view_img = False:

yolov5/detect.py

Lines 45 to 54 in f8464b4

# Set Dataloader
vid_path, vid_writer = None, None
if webcam:
    view_img = True
    cudnn.benchmark = True  # set True to speed up constant image size inference
    dataset = LoadStreams(source, img_size=imgsz, stride=stride)
else:
    save_img = True
    dataset = LoadImages(source, img_size=imgsz, stride=stride)

@philippneugebauer
Author

Dammit, yeah, you're right, the view_img = True default was the problem. Sorry :/

@glenn-jocher
Copy link
Member

@philippneugebauer I think we looked at automatically disabling this before but found no easy way to automatically detect non-desktop environments. If you think of any ideas for more robustly handling this please submit a PR!

@philippneugebauer
Author

I was confused because the normal video analysis gave me no trouble at all. Maybe it would be better to require manually enabling the view_img option? It's already one of the arguments, so I wouldn't expect it to be on by default.

@glenn-jocher
Member

glenn-jocher commented Feb 16, 2021

@philippneugebauer yes that's true, that is confusing. We turn it on by default for streaming sources since the main use case is new users testing out a local webcam with detect.py --source 0, and they tend to raise issues if they don't see the video displayed.

I think what we need is a line like view_img &= check_imshow() to check that the environment actually supports viewing images. I think this would cause an image to briefly appear for everyone during check_imshow() though.

import cv2
import numpy as np

def check_imshow():
    # Check if environment supports image display
    try:
        cv2.imshow('test', np.zeros((320, 320, 3)))
        cv2.waitKey(1)
        cv2.destroyAllWindows()
        cv2.waitKey(1)
        return True
    except:
        print('WARNING: Environment does not support cv2.imshow() or PIL Image.show() image previews')
        return False

print(check_imshow())
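One way to avoid the brief test window would be an environment heuristic instead of actually calling cv2.imshow(). A rough sketch under that assumption (display_likely_available and its specific checks are my illustration, not YOLOv5 code, and heuristics like this can misfire on unusual setups):

```python
import os
import sys

def display_likely_available():
    # Heuristic: guess whether a GUI display exists without opening a window.
    if 'google.colab' in sys.modules:
        # Colab notebooks have no desktop session
        return False
    if sys.platform.startswith('linux'):
        # A Linux desktop session normally exports DISPLAY or WAYLAND_DISPLAY
        return bool(os.environ.get('DISPLAY') or os.environ.get('WAYLAND_DISPLAY'))
    # macOS and Windows usually have a local display attached
    return True
```

The trade-off versus calling cv2.imshow() directly is that this never flashes a window, but it also never proves the GUI stack actually works.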

@philippneugebauer
Author

Yeah, I see your point, so that would be the ideal solution.

Maybe add a message to the exception saying that non-GUI environments should disable view_img manually. That gives the user a nice hint and is a simple workaround.

@glenn-jocher
Member

@philippneugebauer good idea! I've created PR #2231 that should address this. Can you give it a quick lookover?

@philippneugebauer
Author

Haha, so I meant the exception I encountered, but it obviously also fits your code :D. Your solution is more useful though. I'm not a Python expert, but it looks good to me. Let me know if I should test the code locally.

@glenn-jocher
Member

@philippneugebauer yes good point. I've updated now. Looks good in Colab, the Exception actually provides meaningful info.

@ozett

ozett commented Sep 23, 2021

Today I ran into this problem on a CLI-only machine:

(y5) olaf@ub2004yolo5:~/yolov5$ python detect.py --source rtsp://192.168.14.108/axis-media/media.amp
detect: weights=yolov5s.pt, source=rtsp://192.168.14.108/axis-media/media.amp, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False
YOLOv5 🚀 v5.0-455-g59aae85 torch 1.9.1+cu102 CPU

Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/olaf/.virtualenvs/y5/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb.

Aborted (core dumped)
(y5) olaf@ub2004yolo5:~/yolov5$

I will try to fiddle with detect.py on a streaming source while on the command line only.

edit: surprise, the code change is in, but the error still persists...

edit2: hmmm, where are the hints for the correct options? In the code?

(y5) olaf@ub2004yolo5:~/yolov5$ python detect.py --view_img=false --source rtsp://192.168.14.108/axis-media/media.amp
usage: detect.py [-h] [--weights WEIGHTS [WEIGHTS ...]] [--source SOURCE] [--imgsz IMGSZ [IMGSZ ...]] [--conf-thres CONF_THRES] [--iou-thres IOU_THRES]
                 [--max-det MAX_DET] [--device DEVICE] [--view-img] [--save-txt] [--save-conf] [--save-crop] [--nosave] [--classes CLASSES [CLASSES ...]]
                 [--agnostic-nms] [--augment] [--visualize] [--update] [--project PROJECT] [--name NAME] [--exist-ok] [--line-thickness LINE_THICKNESS]
                 [--hide-labels] [--hide-conf] [--half]
detect.py: error: unrecognized arguments: --view_img=false
(y5) olaf@ub2004yolo5:~/yolov5$ ^C
(y5) olaf@ub2004yolo5:~/yolov5$ python detect.py --view_img False --source rtsp://192.168.14.108/axis-media/media.amp
usage: detect.py [-h] [--weights WEIGHTS [WEIGHTS ...]] [--source SOURCE] [--imgsz IMGSZ [IMGSZ ...]] [--conf-thres CONF_THRES] [--iou-thres IOU_THRES]
                 [--max-det MAX_DET] [--device DEVICE] [--view-img] [--save-txt] [--save-conf] [--save-crop] [--nosave] [--classes CLASSES [CLASSES ...]]
                 [--agnostic-nms] [--augment] [--visualize] [--update] [--project PROJECT] [--name NAME] [--exist-ok] [--line-thickness LINE_THICKNESS]
                 [--hide-labels] [--hide-conf] [--half]
detect.py: error: unrecognized arguments: --view_img False

@glenn-jocher
Member

glenn-jocher commented Sep 24, 2021

@ozett YOLOv5 arguments don't use underscores, they use dashes: --view-img

About your original question I'm not able to reproduce your command as your stream is not available:
[tcp @ 0x7fe94801ee00] Connection to tcp://192.168.14.108:554?timeout=0 failed: Connection refused
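Before running detect.py against a camera, a quick TCP-level probe can confirm the host is even reachable. A minimal standard-library sketch (rtsp_reachable is a hypothetical helper; a successful TCP connect only shows the port is open, not that a valid RTSP stream is being served):

```python
import socket
from urllib.parse import urlparse

def rtsp_reachable(url, timeout=3.0):
    # Try a plain TCP connection to the RTSP host/port (default 554).
    # This checks network reachability only, not the stream itself.
    parsed = urlparse(url)
    host = parsed.hostname
    port = parsed.port or 554
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, rtsp_reachable('rtsp://192.168.14.108/axis-media/media.amp') returning False would explain a "Connection refused" like the one above.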

We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

  • Minimal – Use as little code as possible that still produces the same problem
  • Complete – Provide all parts someone else needs to reproduce your problem in the question itself
  • Reproducible – Test the code you're about to provide to make sure it reproduces the problem

In addition to the above requirements, for Ultralytics to provide assistance your code should be:

  • Current – Verify that your code is up-to-date with current GitHub master, and if necessary git pull or git clone a new copy to ensure your problem has not already been resolved by previous commits.
  • Unmodified – Your problem must be reproducible without any modifications to the codebase in this repository. Ultralytics does not provide support for custom code ⚠️.

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

@ozett

ozett commented Sep 25, 2021

Hi, thanks for looking into this.
I did the dashes wrong, but even without options I cannot run inference on the CLI.
The stream seems not to be the problem, because I am doing this without a graphical environment over SSH.
Should that work, or is it intended not to?

detect: weights=yolov5s.pt, source=rtsp://192.168.14.108/axis-media/media.amp, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False
YOLOv5 🚀 v5.0-455-g59aae85 torch 1.9.1+cu102 CPU

Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/olaf/.virtualenvs/y5/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb.

Aborted (core dumped)
(y5) olaf@ub2004yolo5:~/yolov5$

@glenn-jocher
Member

glenn-jocher commented Sep 25, 2021

@ozett seems like you are missing some cv2 dependencies. You may want to use one of our verified environments while you debug your local environment.

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@jjqoie

jjqoie commented Nov 14, 2021

Hi all,
I just ran into the same error message, qt.qpa.plugin: "Could not load the Qt platform plugin xcb", but triggered in a slightly different way.
I run YOLOv5 on my home IoT server, which doesn't have a monitor. For testing I'm using JupyterLab in a Docker container.
To fix this I installed headless OpenCV, see this link:
https://forum.qt.io/topic/119109/using-pyqt5-with-opencv-python-cv2-causes-error-could-not-load-qt-platform-plugin-xcb-even-though-it-was-found/7
It seems that the cv2 import itself causes it?

After installing pip install opencv-python-headless, a problem appeared in datasets.py line 350 with cv2.waitKey:
if not all(x.is_alive() for x in self.threads):  # or cv2.waitKey(1) == ord('q'):  # q to quit
After commenting this part out, YOLOv5 started processing frames.

PS: YOLOv5 correctly detected that imshow is not supported and printed it out:
WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays
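Rather than deleting that guard, the waitKey part could be consulted only when a preview window actually exists. A sketch of the loop condition (should_stop and its argument names are illustrative, not the actual datasets.py code):

```python
def should_stop(threads_alive, show_preview, key_pressed):
    # Stop when a stream-reader thread has died...
    if not threads_alive:
        return True
    # ...or, only when a preview window is actually shown,
    # when the user pressed 'q' in that window.
    if show_preview and key_pressed == ord('q'):
        return True
    return False
```

The point of the design is that cv2.waitKey() is never reached in a headless run, so headless OpenCV builds don't crash, while desktop users keep the press-q-to-quit behavior.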

@andiandhika

https://user-images.githubusercontent.com/62016426/170547790-2e7bb51d-bfea-4384-a7e0-452d071f69b7.PNG

Hi, what should I do if something like this happens?

@afrahthahir

https://user-images.githubusercontent.com/55926806/170808240-284a1835-ffc1-47b7-96b5-758813116c8b.png
Hi, I am running YOLOv5 on Google Colab. When I use an RTSP link that is valid, I could not get the results as expected.
Please help me with this.

@glenn-jocher
Member

glenn-jocher commented May 28, 2022

@afrahthahir 👋 hi, thanks for letting us know about this possible problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to start investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

  • Minimal – Use as little code as possible to produce the problem
  • Complete – Provide all parts someone else needs to reproduce the problem
  • Reproducible – Test the code you're about to provide to make sure it reproduces the problem

For Ultralytics to provide assistance your code should also be:

  • Current – Verify that your code is up-to-date with GitHub master, and if necessary git pull or git clone a new copy to ensure your problem has not already been solved in master.
  • Unmodified – Your problem must be reproducible using official YOLOv5 code without changes. Ultralytics does not provide support for custom code ⚠️.

If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

@andiandhika

andiandhika commented May 28, 2022

Okay, thanks for the explanation, I'll try to explain again. I use Google Colab to run the code. When I run !python detect.py --source 0 to use the webcam, there is a notification: WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays. cv2.imshow() is disabled in Google Colab environments.

I don't know how to fix it. Please help me with this, thanks.


@glenn-jocher
Member

@andiandhika local webcams are only available in local environments, assuming you have one on your computer.

@andiandhika

@andiandhika local webcams are only available in local environments, assuming you have one on your computer.

Thank you, I want to try again

@andiandhika

andiandhika commented May 29, 2022

Hi, I have tried again but the result is the same. I have used the local camera with the command !python detect.py --source 0, but there is still the notification: WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays. cv2.imshow() is disabled in Google Colab environments.


@glenn-jocher
Member

@andiandhika colab is not a local environment. Local means right there in front of you, i.e. YOUR computer.

@andiandhika

andiandhika commented May 29, 2022

@andiandhika colab is not a local environment. Local means right there in front of you, i.e. YOUR computer.

I don't think so. I tried using this code, and it worked, but only to open the webcam:

# start streaming video from webcam
video_stream()
# label for video
label_html = 'Capturing...'
# initialize bounding box to empty
bbox = ''
count = 0
while True:
    js_reply = video_frame(label_html, bbox)
    if not js_reply:
        break

    # convert JS response to OpenCV Image
    img = js_to_image(js_reply["img"])

    # create transparent overlay for bounding box
    bbox_array = np.zeros([480, 640, 4], dtype=np.uint8)

    # grayscale image for face detection
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

    # get face region coordinates
    faces = face_cascade.detectMultiScale(gray)
    # get face bounding box for overlay
    for (x, y, w, h) in faces:
        bbox_array = cv2.rectangle(bbox_array, (x, y), (x + w, y + h), (255, 0, 0), 2)

    bbox_array[:, :, 3] = (bbox_array.max(axis=2) > 0).astype(int) * 255
    # convert overlay of bbox into bytes
    bbox_bytes = bbox_to_bytes(bbox_array)
    # update bbox so next frame gets new overlay
    bbox = bbox_bytes

@andiandhika
It would be better to make changes to the code so that it can be run using a webcam. Thank you in advance.

@glenn-jocher
Member

glenn-jocher commented May 29, 2022

@andiandhika hi, thank you for your feature suggestion on how to improve YOLOv5 🚀!

The fastest and easiest way to incorporate your ideas into the official codebase is to submit a Pull Request (PR) implementing your idea, and if applicable providing before and after profiling/inference/training results to help us understand the improvement your feature provides. This allows us to directly see the changes in the code and to understand how they affect workflows and performance.

Please see our ✅ Contributing Guide to get started.

@andiandhika

andiandhika commented May 29, 2022

@andiandhika hi, thank you for your feature suggestion on how to improve YOLOv5 🚀!

The fastest and easiest way to incorporate your ideas into the official codebase is to submit a Pull Request (PR) implementing your idea, and if applicable providing before and after profiling/inference/training results to help us understand the improvement your feature provides. This allows us to directly see the changes in the code and to understand how they affect workflows and performance.

Please see our ✅ Contributing Guide to get started.

Okay, thanks in advance. I will definitely do so when I know more about it.

@andiandhika

Hi, I have finished training. I want to ask: what is the difference between the two mAP@ metrics that I marked in red in my screenshot?

@afrahthahir

afrahthahir commented Oct 11, 2022 via email


@mansi733

Hi,
I am using torch.hub.load and I want to access a CCTV camera by its IP address. Please check whether the screenshot shows the correct way to connect to the CCTV camera.

@glenn-jocher
Member

@mansi733 hello,

Thank you for reaching out. Unfortunately, we cannot see the screenshot that you mentioned in your message. Can you please try to attach it again or provide more details on how you are trying to access the CCTV camera with the IP address? This information will help us better understand your issue and provide you with the appropriate solution.

Thank you.

@mansi733

mansi733 commented May 13, 2023


Yes, please check this code:

conf_score = 0.40  # change the value for the confidence score

parser = argparse.ArgumentParser(description="Camera type.")
parser.add_argument("--cam", type=str, default=0)
args = parser.parse_args()

camera_type = 'rtsp://192.1.2.68/'

model = torch.hub.load('ultralytics/yolov5', 'custom', 'C:/Users/HP/Desktop/Fire-cmd/client.pt')

vid = cv2.VideoCapture(camera_type)
vid.open(camera_type)

classes = model.names

Thank you.

@glenn-jocher
Member

@mansi733 Thank you for providing the code snippet to help us better understand your issue. Based on your code, it seems that you are trying to access an IP camera using RTSP protocol and then loading a custom YOLOv5 model to do object detection on the camera stream.

One thing to note is that the IP camera address and port number need to be specified in the camera_type variable. If the camera requires a username and password for authentication, you'll also need to include them in the address, like rtsp://username:password@192.1.2.68:554/.

Additionally, make sure the camera is connected and streaming video before running the script. If you're still having issues, you can try checking if the camera address and credentials are correct, or try a different protocol (like HTTP) to access the camera.

I hope this helps. Let us know if you have any further questions or concerns. Thank you.
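Credentials containing special characters need percent-encoding before they go into the URL. A small sketch of building such a URL (rtsp_url is an illustrative helper, and the host, port, and path values are placeholders):

```python
from urllib.parse import quote

def rtsp_url(host, path='', user=None, password=None, port=554):
    # Build an RTSP URL, percent-encoding the credentials so that
    # characters like '@' or ':' in a password don't break parsing.
    auth = ''
    if user:
        auth = quote(user, safe='')
        if password is not None:
            auth += ':' + quote(password, safe='')
        auth += '@'
    return f"rtsp://{auth}{host}:{port}/{path.lstrip('/')}"
```

For instance, a password like p@ss would be encoded as p%40ss, which many cameras require before they accept the authenticated stream URL.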
