'NoneType' object has no attribute 'xyxy' #743

Open
1 task done
Aamnastressed2 opened this issue Jun 26, 2024 · 2 comments
Labels
question A HUB question that does not involve a bug

Comments

@Aamnastressed2

Search before asking

Question

I trained the DOTAv8 model using the Ultralytics HUB Google Colab option. The model trained successfully on Colab, but when I used the resulting weights file in my code I got 'NoneType' object has no attribute 'xyxy'. The same code runs fine with the yolov8s.pt file, yet fails with my best.pt. Why is that? I have added the script in the Additional section below.

Additional

```python
from ultralytics import YOLO
import cv2
import math
import serial
import time
from ultralytics.solutions import distance_calculation
from ultralytics.utils.plotting import Annotator, colors

# Load YOLOv8 model
model = YOLO('best.pt')
video_path = 'height.mp4'
cap = cv2.VideoCapture(video_path)

cap = cv2.VideoCapture(0)  # overrides the file capture with the default webcam

assert cap.isOpened(), "Error opening video stream or file"

# Get video properties: width, height, and frames per second
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Create VideoWriter object to save the processed video
out = cv2.VideoWriter('visioneye-distance-calculation.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))

# Define the center point of the vision eye and pixels per meter
center_point = (0, h)
pixel_per_meter = 852

# Known height of the object in meters (e.g., a bottle)
actual_height_meters = 0.25  # Example: 25 cm bottle height

# Camera focal length in pixels (this value needs to be calibrated for your camera)
focal_length_pixels = 400  # Example value, needs to be calibrated for your camera

# Function to calculate the distance from the camera to the object
def calculate_distance(actual_height, focal_length, pixel_height):
    return (actual_height * focal_length) / pixel_height

# Define colors for text, text background, and bounding box
txt_color, txt_background, bbox_clr = ((0, 0, 0), (255, 255, 255), (255, 0, 255))

# Initialize serial port
SERIAL_PORT = 'COM3'  # Change as needed
BAUD_RATE = 9600

def initialize_serial(port, baud_rate):
    ser = serial.Serial(port, baud_rate, timeout=1)
    time.sleep(2)  # Wait for the serial connection to initialize
    return ser

def send_serial_data(serial_connection, data):
    if serial_connection.is_open:
        print(f"Sending data: {data}")
        serial_connection.write(data.encode())

serial_connection = initialize_serial(SERIAL_PORT, BAUD_RATE)

# Main loop for processing each frame of the video
while True:
    # Read a frame from the video
    ret, im0 = cap.read()
    if not ret:
        # Break the loop if the video frame is empty or processing is complete
        print("Video frame is empty or video processing has been successfully completed.")
        break

    # Create Annotator object to annotate the frame
    annotator = Annotator(im0, line_width=1)

    # Perform object detection and tracking using YOLO model
    results = model.track(im0, persist=True)
    boxes = results[0].boxes.xyxy.cpu()

    if results[0].boxes.id is not None:
        # Get the track IDs
        track_ids = results[0].boxes.id.int().cpu().tolist()

        # Loop through detected objects and their track IDs
        for box, track_id in zip(boxes, track_ids):
            # Annotate bounding boxes and track IDs on the frame
            # annotator.box_label(box, label=str(track_id), color=bbox_clr)
            # annotator.visioneye(box, center_point)

            # Calculate the height of the bounding box in pixels
            pixel_height = int(box[3] - box[1])

            # Calculate the distance to the object
            distance = calculate_distance(actual_height_meters, focal_length_pixels, pixel_height)

            # Draw bounding box and distance label
            annotator.box_label(box, label=f"Distance: {distance:.2f} m", color=(255, 255, 50))

            # # Add text displaying the distance on the frame
            # text_size, _ = cv2.getTextSize(f"Distance: {distance:.2f} m", cv2.FONT_HERSHEY_SIMPLEX, 1.2, 3)
            # cv2.rectangle(im0, (x1, y1 - text_size[1] - 10), (x1 + text_size[0] + 10, y1), txt_background, -1)
            # cv2.putText(im0, f"Distance: {distance:.2f} m", (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 1.2, txt_color, 3)

            # Send distance data to serial port
            # send_serial_data(serial_connection, f"ID: {track_id}, Distance: {distance:.2f} m\n")

    # Write the annotated frame to the output video
    out.write(im0)
    # Display the annotated frame
    cv2.imshow("visioneye-distance-calculation", im0)

    # Check for 'q' key press to exit the loop
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release video capture and video writer objects
out.release()
cap.release()

# Close serial connection
serial_connection.close()

# Close all OpenCV windows
cv2.destroyAllWindows()
```
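For reference, the pinhole-camera relation behind `calculate_distance` can be sanity-checked on its own, using the example values from the script (0.25 m object height, 400 px focal length) and an assumed 100 px tall detection:

```python
def calculate_distance(actual_height, focal_length, pixel_height):
    # Pinhole-camera similar-triangles relation:
    # distance = (real-world height * focal length in px) / bounding-box height in px
    return (actual_height * focal_length) / pixel_height

# A 0.25 m object imaged at 100 px with a 400 px focal length is 1 m away
print(calculate_distance(0.25, 400, 100))  # 1.0
# The same object at 200 px is twice as close
print(calculate_distance(0.25, 400, 200))  # 0.5
```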

@Aamnastressed2 Aamnastressed2 added the question A HUB question that does not involve a bug label Jun 26, 2024

👋 Hello @Aamnastressed2, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more:

  • Quickstart. Start training and deploying YOLO models with HUB in seconds.
  • Datasets: Preparing and Uploading. Learn how to prepare and upload your datasets to HUB in YOLO format.
  • Projects: Creating and Managing. Group your models into projects for improved organization.
  • Models: Training and Exporting. Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
  • Integrations. Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
  • Ultralytics HUB App. Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    • iOS. Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
    • Android. Explore TFLite acceleration on mobile devices.
  • Inference API. Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.

If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

@pderrenger
Member

Hi there,

Thank you for reaching out and providing detailed information about your issue. It seems like you're encountering an attribute error when using the best.pt model file. This error typically occurs when the results object is None, which could be due to several reasons.

To help us diagnose the issue more effectively, could you please provide a minimum reproducible example? This will allow us to better understand the context and pinpoint the problem. You can find more information on how to create a minimum reproducible example here.

In the meantime, here are a few steps you can take to troubleshoot the issue:

  1. Verify Package Versions: Ensure that you are using the latest versions of the Ultralytics packages. Sometimes, bugs are fixed in newer releases, so updating might resolve your issue.

  2. Check Model Compatibility: The best.pt file might have been trained with a different configuration or version. Ensure that the model file is compatible with the current version of the Ultralytics package you are using.

  3. Debugging the Results: Add a check to see if results is None before accessing its attributes. This can help you identify if the model is failing to produce results for some reason.
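For step 1, a quick way to confirm which release is installed is via the standard library (a minimal sketch; the only assumption is the package name `ultralytics`):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string of a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Prints the installed ultralytics release, or None if it is not installed
print(installed_version("ultralytics"))
```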

Here's a modified snippet of your code with an added check:

```python
# Perform object detection and tracking using YOLO model
results = model.track(im0, persist=True)

if results and results[0].boxes:
    boxes = results[0].boxes.xyxy.cpu()

    if results[0].boxes.id is not None:
        # Get the track IDs
        track_ids = results[0].boxes.id.int().cpu().tolist()

        # Loop through detected objects and their track IDs
        for box, track_id in zip(boxes, track_ids):
            # Annotate bounding boxes and track IDs on the frame
            # annotator.box_label(box, label=str(track_id), color=bbox_clr)
            # annotator.visioneye(box, center_point)

            # Calculate the height of the bounding box in pixels
            pixel_height = int(box[3] - box[1])

            # Calculate the distance to the object
            distance = calculate_distance(actual_height_meters, focal_length_pixels, pixel_height)

            # Draw bounding box and distance label
            annotator.box_label(box, label=f"Distance: {distance:.2f} m", color=(255, 255, 50))
else:
    print("No detections were made.")
```

This check ensures that you only proceed if results is not None and contains valid boxes.

Please try these steps and let us know if the issue persists. Your collaboration helps improve the YOLO community and the Ultralytics team. 😊
