How to execute YOLOv5 's detect.py from another script in python? #8063
Comments
I see.
@lessleeb I'd recommend using PyTorch Hub inference.
@glenn-jocher thank you. This is what I'm running now:

```python
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
img = 'https://ultralytics.com/images/zidane.jpg'

# Inference
results = model(img)

# Results
results.print()
```
@lessleeb see PyTorch Hub tutorial below for details on customizing inference parameters. YOLOv5 Tutorials
Good luck 🍀 and let us know if you have any other questions!
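For reference, the AutoShape wrapper returned by `torch.hub.load` exposes its inference settings as plain attributes (`conf`, `iou`, `classes`, `max_det`). A minimal sketch of customizing them (the helper name and file paths below are placeholders, not part of YOLOv5):

```python
def configure_inference(model, conf=0.25, iou=0.45, classes=None, max_det=1000):
    """Set AutoShape inference parameters on a loaded YOLOv5 hub model."""
    model.conf = conf        # confidence threshold
    model.iou = iou          # NMS IoU threshold
    model.classes = classes  # optional list of class indices to keep, e.g. [0]
    model.max_det = max_det  # maximum detections per image
    return model

if __name__ == '__main__':
    import torch
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt')
    configure_inference(model, conf=0.4, classes=[0])
    results = model('path/to/image.jpg')
    results.print()
```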
@glenn-jocher great, thank you!
@lessleeb you're welcome! If you have any more questions or need further assistance, feel free to ask. Good luck with your project!
I have a question:
Hello @HieuHQ112, thank you for your question! For more detailed information, you can refer to the PyTorch Hub tutorial. If you encounter any issues, please ensure you are using the latest versions of `torch` and the YOLOv5 repository. Good luck with your project on the Jetson Nano! If you have any further questions, feel free to ask.
@glenn-jocher thank you.
Hello @HieuHQ112, Great to hear that your script is working well with images! For video or streaming input from a camera, you can use OpenCV to capture frames and then pass them to the YOLOv5 model for inference. Here's a step-by-step guide to help you get started:
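The capture-and-infer loop described above can be sketched as a reusable function. The function name and paths are placeholders, and the BGR-to-RGB flip is needed because OpenCV reads frames in BGR while the hub model expects RGB:

```python
import numpy as np

def run_stream_inference(model, cap, show=False):
    """Read frames from an opened capture, run YOLOv5 inference on each frame,
    and optionally display the annotated frames. Returns the frame count."""
    frames = 0
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        # OpenCV frames are BGR; the hub model expects RGB
        results = model(np.ascontiguousarray(frame[..., ::-1]))
        annotated = results.render()[0][..., ::-1]  # back to BGR for display
        frames += 1
        if show:
            import cv2  # imported here so headless machines can run inference only
            cv2.imshow('YOLOv5 Inference', np.ascontiguousarray(annotated))
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    return frames

if __name__ == '__main__':
    import cv2
    import torch
    model = torch.hub.load('ultralytics/yolov5', 'custom',
                           path='path/to/best.pt', source='local')
    cap = cv2.VideoCapture(0)  # or a video file path
    print(run_stream_inference(model, cap, show=True))
    cap.release()
    cv2.destroyAllWindows()
```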
This script will capture frames from a video file or a webcam, run YOLOv5 inference on each frame, and display the results in real time. You can modify the script to suit your specific needs, such as saving the output or performing additional post-processing. If you encounter any issues, please ensure you are using the latest versions of `torch` and the YOLOv5 repository. Good luck with your project, and feel free to reach out if you have any further questions! 😊
@glenn-jocher it worked well on Jetson even with the CSI camera. Thank you so much!!!
Hello @HieuHQ112, that's fantastic to hear! 🎉 We're thrilled that YOLOv5 is working well on your Jetson Nano with the CSI camera. The Ultralytics team and the broader YOLO community are always here to support your AI journey. If you have any more questions or need further assistance, feel free to ask. Keep up the great work, and happy detecting! 😊
Hello @glenn-jocher, can I save the video with the bounding box results after detection?
Hello @HieuHQ112, Absolutely, you can save the video with bounding box results after detection. Here's how you can modify your existing script to save the processed frames into a video file using OpenCV:
This script will save the video with bounding box results to the specified output path. If you encounter any issues, please ensure you are using the latest versions of `torch` and the YOLOv5 repository. Feel free to reach out if you have any further questions. Happy coding! 😊
Thank you @glenn-jocher.
Hello @HieuHQ112, thank you for your feedback! I'm glad to assist further. Here's the updated code snippet with these changes:

```python
import cv2
import torch

# Load the model
model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt', source='local')

# Open video file or capture device (0 for the first camera)
cap = cv2.VideoCapture('path/to/video.mp4')  # or cap = cv2.VideoCapture(0) for webcam

# Get video properties
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30  # some cameras report 0 fps

# Define the codec and create the VideoWriter object
output_path = '/content/drive/MyDrive/YOLOv5_results/output_video.avi'
out = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*'XVID'), fps, (width, height))

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Inference (OpenCV frames are BGR; the model expects RGB)
    results = model(frame[..., ::-1])

    # Render results (returned in RGB; convert back to BGR for OpenCV)
    rendered_frame = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)

    # Write the frame into the output file
    out.write(rendered_frame)

    # Display the frame with results (optional)
    cv2.imshow('YOLOv5 Inference', rendered_frame)

    # Break the loop on 'q' key press
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release everything when the job is finished
cap.release()
out.release()
cv2.destroyAllWindows()
```

If you encounter any issues, please ensure you are using the latest versions of `torch` and the YOLOv5 repository. Feel free to reach out if you have any further questions. Happy coding! 😊
@glenn-jocher thank you, I'll try it.
Hello @HieuHQ112, Thank you for your message and for sharing the screenshot! It sounds like the issue might be related to the environment setup on your new Linux image. Here are a few tips to ensure that the model is loaded from the local source correctly:
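For instance, `torch.hub.load(repo_dir, ..., source='local')` expects the directory you pass as the repo argument to contain a `hubconf.py`, so a quick sanity check before loading can catch a missing or misplaced clone. The helper name and the clone location below are illustrative, not part of YOLOv5:

```python
from pathlib import Path

def local_repo_ready(repo_dir):
    """Return True if repo_dir looks usable with torch.hub.load(..., source='local'):
    it must contain a hubconf.py at its top level."""
    return (Path(repo_dir) / 'hubconf.py').is_file()

if __name__ == '__main__':
    import torch
    repo = '/home/user/yolov5'  # hypothetical clone location
    if not local_repo_ready(repo):
        raise SystemExit(f'{repo} is missing hubconf.py; clone the YOLOv5 repo there first')
    model = torch.hub.load(repo, 'custom', path='best.pt', source='local')
```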
If the issue persists, please provide additional details about any error messages or logs you encounter. This will help us diagnose the problem more effectively. Thank you for your patience and understanding. If you have any further questions, feel free to ask. Happy coding! 😊
Search before asking
Question
Hi, I have trained YOLOv5 on a customized dataset. I plan to take the YOLO detections and enhance this output with additional code (post-processing), so at some point in my main code I would like to execute the detector (YOLO's detect.py), which is stored locally, and get the output.
I have checked the method of using torch.hub.load to load YOLO from the local repo with a local model, but maybe this is not exactly what I need. I would like to use my customized detect.py file, which is already configured in the def parse_opt() function, and run inference on the source specified there.
So my question is: how can I execute the YOLO detector from another script and retrieve the outputs for every frame?
Thank you for your help
Additional
Windows
Python 3