
Questions about YOLOv5/YOLOv8 interpreting results #12877

Closed
Cho-Hong-Seok opened this issue Apr 3, 2024 · 2 comments
Labels: question (Further information is requested), Stale

Comments

Cho-Hong-Seok commented Apr 3, 2024

Question

@glenn-jocher
I trained the current YOLOv5 and YOLOv8 models with increasing epochs (50 → 100 → 150 → 200).
In the results, some S-models score worse than the X-models on metrics such as mAP, F1-score, and precision, yet they detect real objects better in video and image inference. There are also cases where v5 models detect real objects better than v8 models. I had assumed that a newer model version, a larger model size, and more training epochs would all lead to better metrics and better inference, so I am wondering how to interpret this situation.

Also, if the numerical difference in performance is on the order of 0.01 or 0.1, is it significant? And if it is, what is the rationale behind it?
(I'm attaching the resulting data below!)

Thanks for reading my long question!
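
For reference, a sweep like the one described above can be scripted. Below is a minimal sketch assuming the `ultralytics` Python package; the dataset config `data.yaml` and the run names are hypothetical placeholders for the actual setup:

```python
# A minimal sketch of the epoch sweep described above, assuming the
# `ultralytics` Python package; "data.yaml" and the run names are
# hypothetical placeholders for the actual training setup.
from ultralytics import YOLO

for weights in ("yolov8s.pt", "yolov8x.pt"):
    for epochs in (50, 100, 150, 200):
        model = YOLO(weights)  # start from a pretrained checkpoint
        model.train(data="data.yaml", epochs=epochs,
                    name=f"{weights.removesuffix('.pt')}-e{epochs}")
        metrics = model.val()  # evaluate on the validation split
        print(weights, epochs,
              f"mAP50={metrics.box.map50:.3f}",
              f"mAP50-95={metrics.box.map:.3f}")
```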

Additional

[Attached result images: KakaoTalk_20240403_122837768, KakaoTalk_20240403_122850251]

Cho-Hong-Seok added the question (Further information is requested) label Apr 3, 2024
glenn-jocher (Member) commented

@Cho-Hong-Seok hi there! 🌟 Thanks for reaching out with your insightful observations and question.

It's indeed common to expect newer models and longer training (more epochs) to perform better in general. However, the real-world performance of these models (e.g., YOLOv5 vs. YOLOv8, small vs. large models) can vary based on several factors, such as the dataset characteristics, the variation in object sizes, and the distribution of classes. Smaller models like the S-models might adapt more quickly to specific traits of your data, which might explain why they sometimes outperform larger models in practical scenarios despite lower overall metrics like mAP or F1-score.
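
One way to dig into such a discrepancy (a hedged sketch, not from this reply, assuming the `ultralytics` package and hypothetical checkpoint paths) is to validate both checkpoints on the same split and compare per-class AP rather than only the aggregate mAP:

```python
# Sketch: validate two checkpoints on the *same* validation split and
# compare per-class AP, not just aggregate mAP. The checkpoint paths and
# "data.yaml" are hypothetical; attribute names follow the ultralytics
# metrics API (metrics.box.maps holds per-class mAP50-95).
from ultralytics import YOLO

for ckpt in ("runs/detect/s-e200/weights/best.pt",
             "runs/detect/x-e200/weights/best.pt"):
    metrics = YOLO(ckpt).val(data="data.yaml")
    print(ckpt, f"overall mAP50-95={metrics.box.map:.3f}")
    for cls_idx, ap in zip(metrics.box.ap_class_index, metrics.box.maps):
        print(f"  class {cls_idx}: AP={ap:.3f}")
```

A model that wins on aggregate mAP can still lose on the one or two classes that dominate your videos, which would match the behavior you describe.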

Concerning the significance of differences like 0.01 or 0.1 in performance metrics, it really depends on the context of your application. For some applications, a 0.1 difference in mAP could mean a considerable improvement in detecting critical objects, while for others, it might not be practically significant. Typically, larger differences are more likely to be meaningful, but always consider the specific requirements and constraints of your application.
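
As a rough way to ground that judgment (an illustration, not from this reply), a paired bootstrap over the validation images can estimate whether a small gap is larger than resampling noise; the per-image scores below are synthetic placeholders:

```python
# Sketch: paired bootstrap to test whether a small metric gap (e.g. 0.01)
# is more than noise. The per-image scores here are synthetic placeholders;
# in practice, use a per-image metric (e.g. per-image F1) for both models
# computed on the same validation images.
import numpy as np

rng = np.random.default_rng(0)

def paired_bootstrap_ci(scores_a, scores_b, n_boot=10_000, alpha=0.05):
    """CI for mean(scores_a) - mean(scores_b), resampling images with replacement."""
    a, b = np.asarray(scores_a), np.asarray(scores_b)
    idx = rng.integers(0, len(a), size=(n_boot, len(a)))
    diffs = a[idx].mean(axis=1) - b[idx].mean(axis=1)
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

a = rng.normal(0.62, 0.10, 500)  # model A per-image scores (synthetic)
b = rng.normal(0.61, 0.10, 500)  # model B per-image scores (synthetic)
lo, hi = paired_bootstrap_ci(a, b)
print(f"95% CI for the gap: [{lo:.3f}, {hi:.3f}]")  # excludes 0 => gap is likely real
```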

Always remember that the choice of model and the interpretation of results should be guided by your specific use case, the nature of your dataset, and the importance of speed vs. accuracy in your application.

Keep experimenting with different models and epochs to find the best fit for your needs. And remember, the Ultralytics team is here to support you in navigating these challenges. For more insights and guidance on model selection and interpretation of results, our documentation might offer additional help. Happy detecting! 🚀

github-actions bot (Contributor) commented May 4, 2024

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

github-actions bot added the Stale label May 4, 2024
github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) May 14, 2024