About how the test results obtained by detect.py are evaluated #12574
👋 Hello @Jiase, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it. If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:

```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀! Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects. Check out our YOLOv8 Docs for details and get started with:

```bash
pip install ultralytics
```
@Jiase hello! Thanks for reaching out with your question.

The mAP is generally considered a more reliable indicator of model performance across multiple classes and IoU thresholds, as it averages precision over all classes and recall levels. Precision alone might not give you the full picture, as it doesn't account for missed detections (false negatives).

If you find discrepancies between your script's calculations and the results printed by val.py, double-check that both use the same confidence and IoU thresholds and the same rules for matching predictions to ground truth boxes.

Keep in mind that the evaluation metrics are only as good as the ground truth data they are compared against, so ensure your test dataset is well-annotated and representative of the problem space.

Happy coding! 😊
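To make the precision-vs-mAP distinction concrete, here is a tiny numeric sketch. All detections and counts are hypothetical, and the rectangle-sum AP below is a simplification of the interpolated AP that val.py computes; the point is only that precision at one confidence cutoff ignores missed ground-truth boxes, while AP integrates precision over all recall levels.

```python
# Toy sketch (hypothetical numbers, not the val.py implementation) showing
# why precision at a single threshold and AP can tell different stories.
import numpy as np

# Detections sorted by descending confidence.
# tp[i] = 1 if detection i matched a ground-truth box above the IoU threshold.
conf = np.array([0.95, 0.90, 0.80, 0.60, 0.40])
tp = np.array([1, 1, 0, 1, 0])
n_gt = 5  # total ground-truth boxes; two are never detected

# Precision at a single confidence cutoff ignores the two missed boxes.
keep = conf >= 0.5
precision_at_05 = tp[keep].sum() / keep.sum()  # 3 TP / 4 kept = 0.75

# AP instead integrates precision over the recall axis.
cum_tp = np.cumsum(tp)
cum_fp = np.cumsum(1 - tp)
recall = cum_tp / n_gt
precision = cum_tp / (cum_tp + cum_fp)
# All-point rectangle approximation of the area under the PR curve.
ap = float(np.sum(np.diff(recall, prepend=0.0) * precision))

print(f"precision@conf0.5 = {precision_at_05:.3f}, AP = {ap:.3f}")
# -> precision 0.75, AP 0.55: the missed boxes hurt AP but not precision.
```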
Thank you for your answer. I would also like to understand what I should do when I want to know how well each individual video is detected in the inference results.
You're welcome, @Jiase! If you're interested in evaluating the performance on each video individually, you could follow these steps (a sketch is shown after this comment):

1. Prepare ground truth annotations for the frames of each video you want to evaluate.
2. Run inference on each video separately (for example, detect.py with --save-txt --save-conf) so the predictions for each video are kept apart.
3. Compare the predictions for each video against its ground truth and compute precision, recall, and mAP per video.

Remember, the key to meaningful evaluation is having accurate ground truth annotations for your videos. Without them, you won't be able to objectively assess the model's performance. If you need guidance on how to structure your evaluation script or modify val.py for this purpose, feel free to ask.
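Here is a minimal, hedged sketch of that per-video comparison. It assumes predictions were saved with detect.py --save-txt (one YOLO-format .txt per frame) and that ground-truth labels use the same file names and normalized xywh format; the folder paths and the greedy matching below are illustrative assumptions, not the val.py implementation.

```python
# Sketch: per-video precision/recall from YOLO-format .txt files.
from pathlib import Path
import numpy as np

def load_boxes(txt_file):
    """Load YOLO-format rows: class x_center y_center width height [conf]."""
    txt_file = Path(txt_file)
    if not txt_file.exists() or txt_file.stat().st_size == 0:
        return np.zeros((0, 5))
    return np.loadtxt(txt_file, ndmin=2)[:, :5]  # drop conf column if present

def iou_xywh(a, b):
    """IoU of two normalized (class, x, y, w, h) rows."""
    ax1, ay1, ax2, ay2 = a[1] - a[3] / 2, a[2] - a[4] / 2, a[1] + a[3] / 2, a[2] + a[4] / 2
    bx1, by1, bx2, by2 = b[1] - b[3] / 2, b[2] - b[4] / 2, b[1] + b[3] / 2, b[2] + b[4] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[3] * a[4] + b[3] * b[4] - inter
    return inter / union if union > 0 else 0.0

def video_metrics(pred_dir, gt_dir, iou_thres=0.5):
    """Greedy per-frame, one-to-one matching of predictions to ground truth."""
    tp = fp = n_gt = 0
    for gt_file in sorted(Path(gt_dir).glob("*.txt")):
        gts = load_boxes(gt_file)
        preds = load_boxes(Path(pred_dir) / gt_file.name)
        n_gt += len(gts)
        matched = set()
        for p in preds:
            best_j, best_iou = None, iou_thres
            for j, g in enumerate(gts):
                if j not in matched and p[0] == g[0]:
                    iou = iou_xywh(p, g)
                    if iou >= best_iou:
                        best_j, best_iou = j, iou
            if best_j is None:
                fp += 1
            else:
                matched.add(best_j)
                tp += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / n_gt if n_gt else 0.0
    return precision, recall

# Hypothetical layout: one prediction/label folder per video.
for video in ("video1", "video2"):
    p, r = video_metrics(f"runs/detect/{video}/labels", f"gt/{video}/labels")
    print(f"{video}: precision={p:.3f} recall={r:.3f}")
```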
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the Ultralytics Docs at https://docs.ultralytics.com.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Hi, can you please share a code script to calculate the mAP after running detect.py? Thank you!
Hello! To calculate the mAP after running detection with detect.py, the recommended approach is to use val.py, which computes precision, recall, and mAP directly against your labeled dataset.

If you're specifically looking to calculate mAP from saved detections yourself, you'd need to (a minimal sketch follows below):

1. Run detect.py with --save-txt --save-conf so predictions are written as YOLO-format .txt files.
2. Load the predictions and the corresponding ground truth labels.
3. Match predictions to ground truth boxes by IoU, accumulate true/false positives over the confidence-ranked detections, compute the average precision per class, and average over classes.

Happy coding! 😊
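For step 3, here is a simplified sketch of the mAP computation itself. It assumes you have already matched detections to ground truth per class (e.g. with IoU matching as in the per-video example above) and recorded each detection's confidence and TP flag; the all-point rectangle integration and the example numbers are illustrative simplifications, not val.py's interpolated AP.

```python
# Sketch of mAP@0.5 from per-class (confidences, TP flags, GT count) records.
import numpy as np

def average_precision(conf, tp, n_gt):
    """AP for one class from confidences, TP flags, and ground-truth count."""
    if n_gt == 0 or len(conf) == 0:
        return 0.0
    order = np.argsort(-np.asarray(conf))   # rank detections by confidence
    tp = np.asarray(tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1 - tp)
    recall = cum_tp / n_gt
    precision = cum_tp / (cum_tp + cum_fp)
    # All-point rectangle approximation of the area under the PR curve.
    return float(np.sum(np.diff(recall, prepend=0.0) * precision))

def mean_ap(per_class):
    """mAP = mean of per-class APs. per_class: {cls: (conf, tp, n_gt)}."""
    aps = [average_precision(c, t, n) for c, t, n in per_class.values()]
    return sum(aps) / len(aps) if aps else 0.0

# Hypothetical example with two classes.
per_class = {
    0: (np.array([0.9, 0.8, 0.3]), [1, 0, 1], 3),
    1: (np.array([0.7, 0.6]), [1, 1], 2),
}
print(f"mAP@0.5 = {mean_ap(per_class):.3f}")  # -> 0.778 for this toy data
```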
Search before asking
Question
I used detect.py to get the test results and then wrote my own script to calculate the precision, but I noticed that my calculations were inconsistent with the results from val.py when I evaluated the same test data with val.py. I carefully checked the script I wrote and believe it is consistent with the definition of precision. I am confused: in the printout of val.py, which is more convincing proof of the model's reliability, precision or mAP?
Additional
No response