
Regarding precision, recall and mAP50 metrics #12891

Closed
1 task done
KAKAROT12419 opened this issue Apr 6, 2024 · 8 comments
Labels
question (Further information is requested), Stale

Comments

@KAKAROT12419

Search before asking

Question

Hello sir, I have trained YOLOv5 and YOLOv8 models on my dataset. After training, I am now trying to create an ensemble of the YOLOv5 and YOLOv8 models using Weighted Box Fusion. I am getting the predicted boxes, predicted scores, and predicted labels, and now I want to calculate precision, recall, and mAP50 from this information. Can you help me with code for these three metrics for object detection, and explain how the precision and recall calculation differs in object detection from object classification (in case there is any difference)? Kindly respond, sir.

Additional

No response

@KAKAROT12419 added the question label Apr 6, 2024
@glenn-jocher
Member

Hello! 😊 Great to hear you're experimenting with combining YOLOv5 and YOLOv8 models using weighted box fusion. Precisely measuring the model performance is key to understanding how well your ensemble method is working.

For calculating precision, recall, and mAP (mean Average Precision) at IoU threshold 0.5 (mAP@.5) after an ensemble operation like weighted box fusion, you can leverage the val.py script provided in our repo. Here's how you can do it:

  1. Ensure your dataset is in a supported format (e.g., COCO).
  2. Use your ensemble predictions (in the correct format) as the input to val.py.

A simple command to do this would look something like:

python val.py --weights yolov5_model.pt yolov8_model.pt --data your_dataset.yaml --iou-thres 0.5

Regarding the difference between precision and recall calculations in object detection vs. classification: In classification, each prediction is simply right or wrong, making precision and recall straightforward to compute. In object detection, however, precision and recall are calculated based on the Intersection Over Union (IoU) between predicted bounding boxes and ground truth, considering both the location and the class of the objects. For an object to be considered correctly detected (True Positive), its predicted bounding box needs to have an IoU above a certain threshold with a ground truth box, and the class must match.

Remember, this is a simplified explanation; the actual implementation considers multiple factors like different IoU thresholds and handling multiple detections of the same object.
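
If you want to compute these metrics directly from your fused predictions (boxes, scores, labels) against the ground truth, below is a minimal NumPy sketch for a single image. The helper names (box_iou, precision_recall_ap50), the array layouts, and the 101-point interpolation are illustrative assumptions, not the exact val.py implementation, which additionally averages AP per class and picks a confidence threshold for the reported precision/recall.

import numpy as np

def box_iou(box, boxes):
    # IoU between one box and an (M, 4) array of boxes, all in [x1, y1, x2, y2] pixels.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area1 = (box[2] - box[0]) * (box[3] - box[1])
    area2 = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area1 + area2 - inter + 1e-9)

def precision_recall_ap50(pred_boxes, pred_scores, pred_labels, gt_boxes, gt_labels, iou_thres=0.5):
    # pred_boxes (N, 4), pred_scores (N,), pred_labels (N,); gt_boxes (M, 4), gt_labels (M,) — NumPy arrays.
    order = np.argsort(-pred_scores)               # rank detections by confidence, highest first
    pred_boxes, pred_labels = pred_boxes[order], pred_labels[order]
    matched = np.zeros(len(gt_boxes), dtype=bool)  # each GT box may match at most one detection
    tp = np.zeros(len(pred_boxes))
    for i, (box, label) in enumerate(zip(pred_boxes, pred_labels)):
        same_class = np.where(gt_labels == label)[0]
        if len(same_class) == 0:
            continue
        ious = box_iou(box, gt_boxes[same_class])
        j = np.argmax(ious)
        if ious[j] >= iou_thres and not matched[same_class[j]]:
            tp[i] = 1                              # correct class AND IoU >= threshold -> true positive
            matched[same_class[j]] = True          # extra detections of the same GT count as false positives
    fp = 1 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / max(len(gt_boxes), 1)
    precision = tp_cum / np.maximum(tp_cum + fp_cum, 1e-9)
    # 101-point interpolation of the precision-recall curve gives AP@0.5
    ap50 = np.mean([precision[recall >= r].max() if (recall >= r).any() else 0.0
                    for r in np.linspace(0, 1, 101)])
    p = precision[-1] if len(precision) else 0.0
    r = recall[-1] if len(recall) else 0.0
    return p, r, ap50

For a proper mAP50 over a whole dataset, accumulate TP/FP across all images per class, compute AP per class, and average the per-class APs.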

For more detailed information and guidelines, please refer to our documentation at https://docs.ultralytics.com/yolov5/. Keep pushing the boundaries, and happy modeling! 🚀

@KAKAROT12419
Author

Sir, I don't think we can make an ensemble of YOLOv5 and YOLOv8 using the val.py file.

@glenn-jocher
Member

@KAKAROT12419 hello! 😊 You're right, and I appreciate your attention to detail. My earlier response was misleading on that point. For ensembling YOLOv5 and YOLOv8 models, you'd typically run predictions with each model separately and then apply an ensemble method like Weighted Box Fusion to the prediction outputs.

Here’s a brief example of how you might approach it:

  1. Generate predictions from each model.
  2. Apply Weighted Box Fusion or any ensemble method on these predictions.
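
For illustration, here is a rough sketch of those two steps in Python. It assumes the third-party ensemble-boxes package (pip install ensemble-boxes) plus loading YOLOv5 via torch.hub and YOLOv8 via the ultralytics package; the image path, the 640x480 image size, and the equal model weights are placeholder assumptions.

import torch
from ultralytics import YOLO
from ensemble_boxes import weighted_boxes_fusion

img, img_w, img_h = "image.jpg", 640, 480        # hypothetical image and its pixel size

# 1. Generate predictions from each model.
v5 = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5_model.pt")
v8 = YOLO("yolov8_model.pt")
v5_det = v5(img).xyxy[0].cpu().numpy()           # columns: x1, y1, x2, y2, conf, cls
v8_det = v8(img)[0].boxes.data.cpu().numpy()     # same column layout

def split(det):
    # WBF expects boxes normalised to [0, 1] plus separate score and label lists.
    boxes = det[:, :4] / [img_w, img_h, img_w, img_h]
    return boxes.tolist(), det[:, 4].tolist(), det[:, 5].tolist()

b5, s5, l5 = split(v5_det)
b8, s8, l8 = split(v8_det)

# 2. Apply Weighted Box Fusion to the combined predictions.
boxes, scores, labels = weighted_boxes_fusion(
    [b5, b8], [s5, s8], [l5, l8], weights=[1, 1], iou_thr=0.55, skip_box_thr=0.001
)
# The fused boxes come back normalised; multiply by [img_w, img_h, img_w, img_h] to recover pixels.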

The ensembling process itself would happen post-prediction and isn't a direct feature of the val.py script. My apologies for any confusion, and thank you for bringing this up! Keep experimenting and sharing your insights. 🚀

@KAKAROT12419
Author

Can you provide me with code for precision, recall and mAP50?

@glenn-jocher
Member

Hello! 😊 For calculating precision, recall, and mAP@.5 with YOLOv5, you don't need separate code. These metrics are automatically computed during validation if you use the val.py script on your dataset.

Here's how you can do it briefly:

python val.py --weights your_trained_model.pt --data your_dataset.yaml

This command will evaluate your model on the specified dataset and output the precision, recall, and mAP@.5 among other metrics. Make sure your dataset is properly formatted and your_dataset.yaml points to the right paths.
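
If you would rather read those numbers programmatically (for example for the YOLOv8 model in your ensemble), a short sketch using the ultralytics Python API could look like the following; the metric attribute names reflect the DetMetrics interface and are worth double-checking against your installed version.

from ultralytics import YOLO

model = YOLO("your_trained_model.pt")            # your trained YOLOv8 weights
metrics = model.val(data="your_dataset.yaml")    # validates on the dataset's val split

print(f"precision (mean over classes): {metrics.box.mp:.3f}")
print(f"recall (mean over classes):    {metrics.box.mr:.3f}")
print(f"mAP@0.5:                       {metrics.box.map50:.3f}")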

Happy coding! 🚀

@KAKAROT12419
Author

Okay sir, thank you.

@glenn-jocher
Member

You're welcome! 😊 If you have any more questions or need further assistance, feel free to ask. Happy coding! 🚀

@github-actions
Contributor

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

github-actions bot added the Stale label May 11, 2024
github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) May 21, 2024