tasks/obb/ #7974
34 comments · 102 replies
-
Hi, when I am using the yolov8_obb model, the results are always empty even when the model makes predictions. Has anyone faced this problem, and if so, what was the solution? Thank you.🙂
-
Hi, how can I access the results from this new model so I can extract the bounding box information?
-
Hi, could the heatmap functionality be used with YOLOv8-obb?
-
Hi, I have some questions about using DOTAv1.0 for the OBB task.
-
How do I do model ensembling with yolov8-obb models?
-
Hi, the YOLOv8n-obb test mAP50 on DOTAv1.0 is reported as 78.0%. I split the train-set and val-set images into 1024x1024 tiles, and I wonder whether the 78% is the val-set result given by model.val(), or the test-set result given by the DOTA server. And if it is the test result from the DOTA server, how are the predictions merged before submission?
-
How can I test on the DOTA 1.0 dataset using YOLOv8-obb? Is it via online submission of results? This is urgent; any help would be appreciated.
-
Hi, I saw YOLOv8 OBB with tracking from yolov8-v2.
-
Subject: Cropping Images from YOLOv8 Detections (Python). Hi everyone, I'm working on a Python project that uses a YOLOv8 model for object detection. I'd like to perform detection on input images with my trained YOLOv8 model and then crop each detected object out of the image. Could you share an example code snippet demonstrating how to achieve this cropping using the bounding box data?
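To show what I mean, here is a sketch of the cropping step, with hard-coded coordinates standing in for what results[0].boxes.xyxy would return from my trained model:

```python
# Sketch: cropping detections out of an image array by slicing. The box here is
# a hypothetical stand-in for a YOLOv8 detection (x1, y1, x2, y2 in pixels).
import numpy as np

image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for cv2.imread(...)
boxes_xyxy = [(100.4, 50.7, 220.2, 180.9)]        # stand-in detections

crops = []
for x1, y1, x2, y2 in boxes_xyxy:
    x1, y1, x2, y2 = map(int, (x1, y1, x2, y2))   # truncate to pixel indices
    crops.append(image[y1:y2, x1:x2])             # numpy slicing: rows = y, cols = x

print(crops[0].shape)  # (130, 120, 3)
```

With a real model, boxes_xyxy would be results[0].boxes.xyxy.tolist(), and each crop could be saved with cv2.imwrite or passed on in memory.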
-
Hello, I'm currently working with YOLO OBB (You Only Look Once, Oriented Bounding Boxes), and I've annotated my data in the YOLO OBB format. My concern is how to apply augmentations to oriented bounding boxes, particularly transformations like rotation, horizontal flipping, and vertical flipping; each augmented image needs its annotation file adjusted accordingly. I'm using the albumentations library in Python for data augmentation. Here's the transformation pipeline I've defined:

import albumentations as A
from albumentations.pytorch import ToTensorV2

transform_pipeline = A.Compose([
    A.Rotate(limit=(-90, 90)),
    A.VerticalFlip(p=1),
    A.HorizontalFlip(p=0.5),
    ToTensorV2(),
], bbox_params=A.BboxParams(
    format="yolo",
    label_fields=["class_labels"],
))

However, the YOLO OBB format is different from the default YOLO format. How should I process the annotations in this case to ensure they match the augmented images? Any insights or suggestions on how to handle this scenario would be greatly appreciated! Thank you.
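To make the question concrete, this is the kind of corner-point transform I believe rotation requires, since format="yolo" only covers axis-aligned boxes. A sketch in plain numpy, assuming normalized corners rotated about the image center; I'm not certain the sign convention matches albumentations' rotation direction, so it would need checking against a rotated sample. Flips are the simpler cases x -> 1 - x and y -> 1 - y:

```python
# Sketch: rotating 4-corner OBB annotations (normalized x1 y1 ... x4 y4) so
# they track a rotation of the underlying image about its center (0.5, 0.5).
import numpy as np

def rotate_obb(corners, angle_deg):
    """Rotate (4, 2) normalized corner points about the image center."""
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return (corners - 0.5) @ R.T + 0.5

square = np.array([[0.4, 0.4], [0.6, 0.4], [0.6, 0.6], [0.4, 0.6]])
print(np.round(rotate_obb(square, 90), 3))   # same square, corners permuted
```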
-
Hi, when I am training yolov8_obb models on multiple GPUs I get an error at File "/tmp/pycharm_project_538/ultralytics-main/ultralytics/utils/loss.py", line 626, in __call__. It seems the error is related to the calculation of the OBB loss: it expects two elements but got three. The error only occurs when training on multiple GPUs; it works fine on a single GPU. How can I solve this problem? Thank you.
-
I am curious: does it use the same approach as yolov5-obb, which uses CSL for its OBB function?
-
For yolov8n-obb, results[0].boxes returns None. It was working previously; did a recent ultralytics update remove it?
-
Hey, I get 0: 704x1024 191.9ms and then AttributeError: 'NoneType' object has no attribute '_jit_internal'. This is my code: cap = cv2.VideoCapture("video.mp4") while True:
-
Can I use this feature with the Raspberry Pi 5's PiCamera3?
-
I am new to YOLOv8 OBB; I have used segmentation masks and detection before. I have already trained a large OBB model on a custom dataset. Now I want to test it, with the bounding box rotated to match the object, and get the rotation angle and center coordinate of each detected object. I would also really appreciate example code for that. I want to use the xywhr format. Thank you.
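To clarify what I'm after, here is how I expect to unpack one detection row; the numbers are stand-ins for what I believe results[0].obb.xywhr returns, with r in radians:

```python
# Sketch: unpacking a single OBB detection in xywhr format (stand-in values).
import math

cx, cy, w, h, r = 412.0, 305.5, 120.0, 40.0, 0.7854   # hypothetical detection
angle_deg = math.degrees(r)                            # rotation in degrees

print(f"center=({cx}, {cy}), size={w}x{h}, angle={angle_deg:.1f} deg")
```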
-
Hello, I have some questions about the OBB decoding code in tal.py/dist2rbox.
-
def dist2rbox(pred_dist, pred_angle, anchor_points, dim=-1):
The above code is the decode part. I don't understand what the variables "xf" and "yf" mean, or how the transform from (xf, yf) to (x, y) works. Could you give an illustration?
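My current understanding, which I'd like confirmed: lt and rb are the predicted distances from the anchor to the box sides, so (rb - lt) / 2 is the box-center offset (xf, yf) expressed in the box's own rotated frame, and the cos/sin terms are a standard 2-D rotation mapping that offset into image coordinates before the anchor point is added. A toy version of just that rotation step (plain Python, not the library code):

```python
# Sketch: an offset (xf, yf) given in a box's rotated frame is mapped to image
# coordinates with a 2-D rotation, then shifted by the anchor point.
import math

def rotate_offset(xf, yf, angle):
    c, s = math.cos(angle), math.sin(angle)
    return xf * c - yf * s, xf * s + yf * c

anchor = (10.0, 20.0)
xf, yf = 3.0, 0.0                          # offset along the box's own x-axis
dx, dy = rotate_offset(xf, yf, math.pi / 2)
center = (anchor[0] + dx, anchor[1] + dy)  # a 90-degree box: offset becomes +y
print(center)
```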
-
About the test mAP: how do you merge the results, and what nms_threshold do you use? How do you merge the results submitted to the DOTA server? Since I can't find a merge script in the YOLO repo, I wrote one based on the merge_result function in MMRotate. Using an NMS threshold of 0.1, I can only get about 78 mAP with YOLOv8m, which is about 2 points lower than published.
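For reference, the shape of my merge step: shift each tile's detections by the tile's offset back into full-image coordinates, then run one global greedy NMS. The sketch below simplifies IoU to axis-aligned boxes just to show the structure; my actual script uses polygon IoU as in MMRotate, and the 0.1 threshold is the value I'm experimenting with:

```python
# Sketch: merging per-tile detections into full-image coordinates + greedy NMS.
# Detections are (x1, y1, x2, y2, score); IoU is axis-aligned for simplicity,
# whereas a faithful OBB merge needs rotated/polygon IoU.
def merge_tiles(tile_dets, iou_thr=0.1):
    """tile_dets: list of (x_off, y_off, dets) with dets in tile coordinates."""
    merged = [(x1 + xo, y1 + yo, x2 + xo, y2 + yo, s)
              for xo, yo, dets in tile_dets
              for x1, y1, x2, y2, s in dets]
    merged.sort(key=lambda b: b[4], reverse=True)       # highest score first

    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    kept = []
    for box in merged:                                  # keep a box only if it
        if all(iou(box, k) <= iou_thr for k in kept):   # misses every kept box
            kept.append(box)
    return kept

# Two neighbouring tiles detect the same object near their shared border.
dets = merge_tiles([
    (0,   0, [(900, 100, 1000, 200, 0.9)]),
    (896, 0, [(4,   100, 104,  200, 0.8)]),
])
print(len(dets))  # 1
```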
-
The documentation gives a test mAP50 of 78.0 for YOLOv8n-obb, but when I submit on the DOTAv1 Task 1 server I get a result of only 50.8. What is the reason for this? Is there something wrong with my format-conversion code? My code is as follows (abridged):

file_names = {
# Assuming yolov8n-obb.pt is your trained model
model = YOLO('/xiaying_ms/bp/Large-Selective-Kernel-Network/runs/obb/train10/weights/best.pt')
for i in file_names:
for pred in predictions:
def zip_predictions_folder(folder_path):
zip_predictions_folder(sub_dir)

Can you please help me check what's wrong, or what is causing this? Thank you very much.
-
yolo val obb data=DOTAv1.yaml device=0 split=test
-
Hi, I'd like to ask about training YOLOv8 OBB using this command. Thank you.
-
Hello, I used OBB to train on slender boxes of roughly 2150x200 on images of size 5120x5120, with a training size of 1024 and obb-m pretrained weights. I trained for 300 epochs, but the final validation confidences are very low, almost all around 0.4, and many predicted boxes are only about 500x200 in size. How should I solve this?
-
Dear YOLOv8 Team, I am working on a snake classification project and using YOLOv8 for both object detection and classification. I have encountered a couple of questions and would appreciate your guidance. Cropping and classifying images: I am currently using a YOLOv8 detection model to crop images from detections. This code successfully crops images; however, instead of saving the cropped images, I want to pass them directly to the classification model within the same script. Goal: could you please provide guidance on how to efficiently combine the object detection and classification processes within the same script? Thank you for your assistance. Best regards,
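To make the goal concrete, here is the structure I'm aiming for, with stub functions standing in for the two models. I believe Ultralytics models accept numpy arrays directly (so with real models, detect would come from det(image)[0].boxes.xyxy and classify would read cls(crop)[0].probs), meaning the crop never needs to touch disk:

```python
# Sketch: chaining detection and classification in memory. detect() and
# classify() are stubs standing in for two YOLOv8 models.
import numpy as np

def detect(image):
    return [(10, 20, 110, 220)]        # stub for det(image)[0].boxes.xyxy

def classify(crop):
    return "cobra"                     # stub for the top-1 label from cls(crop)

def detect_and_classify(image):
    labels = []
    for x1, y1, x2, y2 in detect(image):
        crop = image[int(y1):int(y2), int(x1):int(x2)]   # crop stays in memory
        labels.append(classify(crop))
    return labels

image = np.zeros((480, 640, 3), dtype=np.uint8)          # stand-in image
print(detect_and_classify(image))  # ['cobra']
```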
-
Hello! When I use 'xywhr' to get the rotation angle, the angle range is only 0 to 180 degrees, which cannot distinguish between the head and the tail of the detected object. What should I do to get an angle from 0 to 360 degrees? Or by what means can I distinguish between the head and the tail of an object? This is an urgent question for me; I would appreciate an answer.
-
Hello! I'm trying to reproduce the mAP50 results on the DOTA test set. In the above table, yolov8n-obb achieves 78 mAP on the DOTA test set, but I only get 71.62 on the DOTA evaluation server. My evaluation steps are: 1. split the test-set images into 1024x1024 tiles; 2. run model.predict() to get predictions; 3. merge the predictions of the same large image (performing NMS again); 4. format all predictions as the DOTA evaluation requires. The IoU threshold is 0.4 and the confidence threshold is 0.1. Could you please tell me where my problem is and how to reproduce the 78 mAP on the DOTA test set?
-
Have you solved your problem?
(Replying to: "I also need this kind of code, but I don't have the time or coding skills. If you could share the completed code, I would be very grateful.")
-
My current mAP is 72. What about you?
-
Hi! My email address is ***@***.*** Thank you very much!
(Replying to: "I have already solved this problem; we can communicate privately. You can leave your email address here!")
-
tasks/obb/
Learn how to use oriented object detection models with Ultralytics YOLO. Instructions on training, validation, image prediction, and model export.
https://docs.ultralytics.com/tasks/obb/