How to execute YOLOv5's detect.py from another script in Python? #8063

Closed
lessleeb opened this issue Jun 1, 2022 · 19 comments
Labels: question (Further information is requested)

@lessleeb commented Jun 1, 2022


Question

Hi, I have trained YOLOv5 on a customized dataset. I plan to take the YOLO detections and enhance the output with additional post-processing code, so at some point in my main script I would like to execute the detector (YOLO's detect.py, which is stored locally) and get its output.
I have looked at using torch.hub.load to load YOLO from the local repo with local weights, but maybe that is not exactly what I need. I would like to use my customized detect.py, whose options are already set in def parse_opt(), and run inference on the source specified there.
So my question is: how can I execute the YOLO detector from another script and retrieve the outputs for every frame?
Thank you for your help

Additional

Windows
Python 3

lessleeb added the question label Jun 1, 2022
@glenn-jocher (Member)

detect.run()
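
For example (a minimal sketch; the paths are placeholders), detect.py exposes a run() function whose keyword arguments mirror its CLI flags, so you can import and call it from a script that sits alongside your local yolov5 clone:

import detect  # detect.py from your local yolov5 clone (must be importable)

detect.run(
    weights='path/to/best.pt',  # your trained weights
    source='path/to/images',    # file, directory, URL, or '0' for webcam
    conf_thres=0.25,            # confidence threshold
    save_txt=True,              # also save detections as .txt label files
)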

@lessleeb (Author) commented Jun 1, 2022

I see.
So now I can run detect.py; how could I iteratively retrieve the outputs for each frame in my main script?
I would like it to return xywh.
I hope you can provide me some clues. Thank you

@glenn-jocher (Member)

@lessleeb I'd recommend using PyTorch Hub inference

@lessleeb (Author) commented Jun 2, 2022

@glenn-jocher thank you.
I was checking out the torch.hub.load method more carefully. I am sorry, but I do not understand some things yet.
I guess I can point it at a local repo and local model, and use a data loader to read my dataset. But how can I customize the confidence threshold, the NMS IoU threshold, and the maximum number of detections in this case?
If I load it as a "local repo" model, I think it is loading from my local repository, but where is it returning the detections from?
I would like to understand this better, thank you!

import torch

# Model
model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # local repo, custom weights

# Images
img = DataLoader()

# Inference
results = model(img)

# Results
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
results.xywh[0]

@glenn-jocher (Member) commented Jun 2, 2022

@lessleeb see PyTorch Hub tutorial below for details on customizing inference parameters.


YOLOv5 Tutorials
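
In short (a minimal sketch, assuming a model loaded via torch.hub as above; the paths are placeholders), the loaded model exposes attributes for these inference settings:

import torch

model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')

model.conf = 0.25     # NMS confidence threshold
model.iou = 0.45      # NMS IoU threshold
model.max_det = 1000  # maximum number of detections per image
model.classes = None  # optional class filter, e.g. [0] to keep only class 0

results = model('path/to/image.jpg')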

Good luck 🍀 and let us know if you have any other questions!

@lessleeb (Author) commented Jun 3, 2022

@glenn-jocher great, thank you!

lessleeb closed this as completed Jun 3, 2022
@glenn-jocher (Member)

@lessleeb you're welcome! If you have any more questions or need further assistance, feel free to ask. Good luck with your project!

@HieuHQ112

I have a question:
For the img = DataLoader() line, is the dataloader you mentioned the datasets.py module in the v6.1 release?
I run YOLOv5 v6.1 on a Jetson Nano and want to write a script that uses the detect function and takes output from it, such as how many people the model detected.
Can you give me some details?
Thank you

@glenn-jocher (Member)

Hello @HieuHQ112,

Thank you for your question!

To clarify, the DataLoader you mentioned is typically used for loading datasets in PyTorch. However, for inference with YOLOv5, you can directly pass images to the model without needing a DataLoader. Here's a simple example to help you get started:

  1. Loading the Model:

    import torch
    
    # Load the model
    model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # with source='local', the first argument is your local yolov5 clone
  2. Running Inference:

    # Image
    img = 'path/to/your/image.jpg'  # or a list of images
    
    # Inference
    results = model(img)
  3. Customizing Inference Parameters:
    You can set various inference parameters such as confidence threshold, IoU threshold, and maximum number of detections:

    model.conf = 0.25  # NMS confidence threshold
    model.iou = 0.45  # NMS IoU threshold
    model.max_det = 1000  # maximum number of detections per image
  4. Extracting Results:
    To get the number of people detected:

    # Assuming 'person' class is class 0 in your model
    person_detections = results.pandas().xyxy[0]
    num_people = len(person_detections[person_detections['class'] == 0])
    print(f'Number of people detected: {num_people}')
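
    If your model uses custom class names, filtering on the name column avoids hard-coding the class index (a minimal variant of the snippet above, assuming 'person' is one of your trained class names):

    df = results.pandas().xyxy[0]
    num_people = int((df['name'] == 'person').sum())
    print(f'Number of people detected: {num_people}')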

For more detailed information, you can refer to the PyTorch Hub tutorial.

If you encounter any issues, please ensure you are using the latest versions of torch and https://github.com/ultralytics/yolov5. If the problem persists, kindly provide a minimum reproducible code example as outlined here. This will help us investigate and provide a solution more effectively.

Good luck with your project on the Jetson Nano! If you have any further questions, feel free to ask.

@HieuHQ112

@glenn-jocher thank you,
I have created one and it worked well.
How about video or streaming input from a camera? How can I add those sources to the script? My version is v6.1; it does not contain dataloader.py, it has datasets.py.

@glenn-jocher (Member)

Hello @HieuHQ112,

Great to hear that your script is working well with images! For video or streaming input from a camera, you can use OpenCV to capture frames and then pass them to the YOLOv5 model for inference. Here's a step-by-step guide to help you get started:

  1. Install OpenCV:
    Ensure you have OpenCV installed in your environment:

    pip install opencv-python
  2. Capture Video or Stream from Camera:
    Use OpenCV to capture video frames from a file or a camera stream:

    import cv2
    import torch
    
    # Load the model
    model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # with source='local', the first argument is your local yolov5 clone
    
    # Open video file or capture device (0 for the first camera)
    cap = cv2.VideoCapture('path/to/video.mp4')  # or cap = cv2.VideoCapture(0) for webcam
    
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # YOLOv5's hub model expects RGB input; OpenCV frames are BGR
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        # Inference
        results = model(rgb)

        # Process results
        results.print()  # or .show(), .save(), .crop(), .pandas(), etc.

        # Display the annotated frame (convert back to BGR for OpenCV)
        annotated = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)
        cv2.imshow('YOLOv5 Inference', annotated)

        # Break the loop on 'q' key press
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()
  3. Customizing Inference Parameters:
    You can set various inference parameters such as confidence threshold, IoU threshold, and maximum number of detections as shown in the previous examples.

This script will capture frames from a video file or a webcam, run YOLOv5 inference on each frame, and display the results in real-time. You can modify the script to suit your specific needs, such as saving the output or performing additional post-processing.
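
For example, to pull the raw boxes for your own post-processing (a minimal sketch; results.xywh is the accessor shown earlier in this thread, returning one tensor per image with x_center, y_center, width, height, confidence, class):

# Inside the loop, after `results = model(...)`:
det = results.xywh[0]  # detections for this frame as an (N, 6) tensor
for *xywh, conf, cls in det.tolist():
    print(f'class={int(cls)} conf={conf:.2f} xywh={xywh}')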

If you encounter any issues, please ensure you are using the latest versions of torch and https://github.com/ultralytics/yolov5. If the problem persists, kindly provide a minimum reproducible code example as outlined here. This will help us investigate and provide a solution more effectively.

Good luck with your project, and feel free to reach out if you have any further questions! 😊

@HieuHQ112

@glenn-jocher it worked well on Jetson even with CSI camera. Thank you so much!!!

@glenn-jocher (Member)

Hello @HieuHQ112,

That's fantastic to hear! 🎉 We're thrilled that YOLOv5 is working well on your Jetson Nano with the CSI camera. The Ultralytics team and the broader YOLO community are always here to support your AI journey.

If you have any more questions or need further assistance, feel free to ask. Keep up the great work, and happy detecting! 😊

@HieuHQ112

Hello @glenn-jocher,
Can I save a video with the bounding-box results after detection is done? The results above are individual frames, and I want to save a result video, just like when I run detection from the terminal. Thank you

@glenn-jocher (Member)

Hello @HieuHQ112,

Absolutely, you can save the video with bounding box results after detection. Here's how you can modify your existing script to save the processed frames into a video file using OpenCV:

  1. Set Up Video Writer:
    Before the while loop, set up the cv2.VideoWriter to save the output video:

    import cv2
    import torch
    
    # Load the model
    model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # with source='local', the first argument is your local yolov5 clone
    
    # Open video file or capture device (0 for the first camera)
    cap = cv2.VideoCapture('path/to/video.mp4')  # or cap = cv2.VideoCapture(0) for webcam
    
    # Get video properties
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30  # some capture devices report 0 FPS; fall back to a default
    
    # Define the codec and create VideoWriter object
    out = cv2.VideoWriter('output_video.avi', cv2.VideoWriter_fourcc(*'XVID'), fps, (width, height))
  2. Process and Save Each Frame:
    Inside the while loop, write the processed frame to the output video:

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # Inference (YOLOv5's hub model expects RGB; OpenCV frames are BGR)
        results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

        # Render results and convert back to BGR for OpenCV display/writing
        rendered_frame = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)

        # Write the frame into the file
        out.write(rendered_frame)

        # Display the frame with results (optional)
        cv2.imshow('YOLOv5 Inference', rendered_frame)

        # Break the loop on 'q' key press
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # Release everything if job is finished
    cap.release()
    out.release()
    cv2.destroyAllWindows()

This script will save the video with bounding box results to output_video.avi. You can change the codec and file extension to match your desired output format.
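
For instance (a minimal variation; codec availability depends on your OpenCV/FFmpeg build), MP4 output would look like this:

out = cv2.VideoWriter('output_video.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))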

If you encounter any issues, please ensure you are using the latest versions of torch and https://github.com/ultralytics/yolov5. If the problem persists, kindly provide a minimum reproducible code example as outlined here. This will help us investigate and provide a solution more effectively.

Feel free to reach out if you have any further questions. Happy coding! 😊

@HieuHQ112 commented Jul 16, 2024

thank you @glenn-jocher,
I followed the code you gave me, but I could not find the output video anywhere. Can you show me which line in your code saves the video?
One more thing: how can I change where the result is saved? When running in CMD I can pass --project to change the output directory, but it does not work when I set model.project = "my-directory". I want to change it to a mounted Google Drive on Linux Ubuntu. Thank you

@glenn-jocher (Member)

Hello @HieuHQ112,

Thank you for your feedback! I'm glad to assist further.

  1. Saving the Output Video:
    The line that saves the video in the provided code is:

    out.write(rendered_frame)

    This writes each processed frame to the output_video.avi file. The file will be saved in the same directory where you run your script unless you specify a different path.

    To change the save location, simply modify the output_video.avi path:

    out = cv2.VideoWriter('/path/to/your/directory/output_video.avi', cv2.VideoWriter_fourcc(*'XVID'), fps, (width, height))
  2. Changing the Save Directory:
    To save the results to a specific directory, such as a mounted Google Drive, you can set the path accordingly. For example:

    output_path = '/content/drive/MyDrive/YOLOv5_results/output_video.avi'
    out = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*'XVID'), fps, (width, height))

    Ensure that the directory exists and is writable. If you're using Google Colab, you can mount your Google Drive like this:

    from google.colab import drive
    drive.mount('/content/drive')

Here's the updated code snippet with these changes:

import cv2
import torch

# Load the model
model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # with source='local', the first argument is your local yolov5 clone

# Open video file or capture device (0 for the first camera)
cap = cv2.VideoCapture('path/to/video.mp4')  # or cap = cv2.VideoCapture(0) for webcam

# Get video properties
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30  # some capture devices report 0 FPS; fall back to a default

# Define the codec and create VideoWriter object
output_path = '/content/drive/MyDrive/YOLOv5_results/output_video.avi'
out = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*'XVID'), fps, (width, height))

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Inference (YOLOv5's hub model expects RGB; OpenCV frames are BGR)
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

    # Render results and convert back to BGR for OpenCV display/writing
    rendered_frame = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)

    # Write the frame into the file
    out.write(rendered_frame)

    # Display the frame with results (optional)
    cv2.imshow('YOLOv5 Inference', rendered_frame)

    # Break the loop on 'q' key press
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release everything if job is finished
cap.release()
out.release()
cv2.destroyAllWindows()

If you encounter any issues, please ensure you are using the latest versions of torch and https://github.com/ultralytics/yolov5. If the problem persists, kindly provide a minimum reproducible code example as outlined here. This will help us investigate and provide a solution more effectively.

Feel free to reach out if you have any further questions. Happy coding! 😊

@HieuHQ112 commented Jul 17, 2024

@glenn-jocher thank you, I'll try it.
I have one question: my old Linux image on the Jetson works well with the code above, but my new one keeps downloading the model even when I use source='local'. Can you give me some tips? Thank you so much.
[Screenshot: console output showing the model being downloaded]

@glenn-jocher (Member)

Hello @HieuHQ112,

Thank you for your message and for sharing the screenshot!

It sounds like the issue might be related to the environment setup on your new Linux image. Here are a few tips to ensure that the model is loaded from the local source correctly:

  1. Verify Model and Repo Paths:
    Ensure that the paths to your local weights and your local yolov5 clone are correct and accessible. Note that with source='local', the first argument to torch.hub.load must be the local clone directory, not the 'ultralytics/yolov5' GitHub slug; a quick path check is sketched after this list:

    model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')
  2. Check YOLOv5 Version:
    Make sure you are using the latest version of YOLOv5. Sometimes, updates can resolve unexpected behaviors. You can update YOLOv5 by navigating to your YOLOv5 directory and pulling the latest changes:

    git pull
  3. Environment Consistency:
    Ensure that your new Linux image has the same versions of dependencies as your old one. You can create a requirements.txt file from your old environment and install it in your new one:

    pip freeze > requirements.txt
    pip install -r requirements.txt
  4. Local Model Loading:
    If the model continues to download despite specifying source='local', you can manually load the model using torch.load:

    import torch
    from models.common import DetectMultiBackend

    # Load the model manually. Note: unlike torch.hub.load, this returns the
    # raw model without AutoShape's preprocessing and NMS, so you would need
    # to handle letterboxing and non_max_suppression yourself.
    model = DetectMultiBackend('path/to/best.pt')
  5. Network Configuration:
    Sometimes, network configurations or firewalls can cause issues with downloading models. Ensure that your network settings are consistent with your old setup.
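
As a quick sanity check (a minimal sketch; the paths are placeholders), you can verify that both the local clone and the weights file exist before calling torch.hub.load:

import torch
from pathlib import Path

repo = Path('path/to/yolov5')      # your local yolov5 clone
weights = Path('path/to/best.pt')  # your trained weights
assert repo.is_dir(), f'local repo not found: {repo}'
assert weights.is_file(), f'weights file not found: {weights}'

model = torch.hub.load(str(repo), 'custom', path=str(weights), source='local')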

If the issue persists, please provide additional details about any error messages or logs you encounter. This will help us diagnose the problem more effectively.

Thank you for your patience and understanding. If you have any further questions, feel free to ask. Happy coding! 😊
