How to interpret VNCoreMLFeatureValueObservation to get bounding boxes #1575
Comments
Hello @maidmehic, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements
Python 3.8 or later with all requirements.txt dependencies installed, including:
$ pip install -r requirements.txt

Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@maidmehic I had this same question, and looking through the posted issues it is a very common one. This comment describes what is going on much better than I could. Long story short, the current CoreML export skips converting some of the Detect layer operations that transform the raw output into bounding boxes, confidences, etc. So if you want to use the CoreML model, you have to do the post-processing (anchor box offset correction, NMS, etc.) yourself.
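To make that concrete, below is a minimal sketch of what that post-processing can look like in Swift. It assumes the exported model emits three heads shaped (1, 3, ny, nx, 85) for a 640×640 input with the default yolov5s anchors and strides (8, 16, 32); the memory layout, .float32 data type, and anchor values are assumptions to verify against your own export.

```swift
import CoreML
import CoreGraphics
import Foundation

// Hypothetical decoder for the raw YOLOv5 CoreML outputs. Assumes:
//  - three heads shaped (1, 3, ny, nx, 85) for a 640x640 input,
//  - .float32 data, default yolov5s anchors, strides 8/16/32.
// Verify all of these against your own export before relying on it.

struct Detection {
    let rect: CGRect       // pixels, in the 640x640 model input space
    let confidence: Float  // objectness * best class score
    let classIndex: Int
}

let anchors: [[Float]] = [
    [10, 13, 16, 30, 33, 23],      // head 0, stride 8
    [30, 61, 62, 45, 59, 119],     // head 1, stride 16
    [116, 90, 156, 198, 373, 326], // head 2, stride 32
]
let strides: [Float] = [8, 16, 32]

func sigmoid(_ x: Float) -> Float { 1 / (1 + exp(-x)) }

/// Decode one raw head into detections above `threshold`.
func decode(_ output: MLMultiArray, head: Int, threshold: Float = 0.25) -> [Detection] {
    let stride = strides[head]
    let ny = output.shape[2].intValue
    let nx = output.shape[3].intValue
    let channels = output.shape[4].intValue  // 5 + number of classes
    let data = output.dataPointer.bindMemory(to: Float32.self, capacity: output.count)
    var detections: [Detection] = []

    for a in 0..<3 {                         // anchor index within this head
        let (aw, ah) = (anchors[head][a * 2], anchors[head][a * 2 + 1])
        for gy in 0..<ny {
            for gx in 0..<nx {
                let base = ((a * ny + gy) * nx + gx) * channels
                let objectness = sigmoid(data[base + 4])
                if objectness < threshold { continue }

                // YOLOv5 box regression, relative to grid cell and anchor.
                let x = (sigmoid(data[base + 0]) * 2 - 0.5 + Float(gx)) * stride
                let y = (sigmoid(data[base + 1]) * 2 - 0.5 + Float(gy)) * stride
                let w = pow(sigmoid(data[base + 2]) * 2, 2) * aw
                let h = pow(sigmoid(data[base + 3]) * 2, 2) * ah

                // Best-scoring class for this box.
                var bestScore: Float = 0
                var bestClass = 0
                for c in 0..<(channels - 5) {
                    let s = sigmoid(data[base + 5 + c])
                    if s > bestScore { bestScore = s; bestClass = c }
                }
                let confidence = objectness * bestScore
                if confidence < threshold { continue }

                detections.append(Detection(
                    rect: CGRect(x: CGFloat(x - w / 2), y: CGFloat(y - h / 2),
                                 width: CGFloat(w), height: CGFloat(h)),
                    confidence: confidence,
                    classIndex: bestClass))
            }
        }
    }
    return detections
}
```

Concatenating the detections from all three heads and running a standard IoU-based NMS pass over them then gives the final boxes.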
❔Question
Hi everyone,
I've been struggling for days now trying to interpret the yolov5s Vision request output, which is an array of `VNCoreMLFeatureValueObservation`. Every element of this array contains its own `MLMultiArray`, so the result is three MLMultiArrays.

I've found this solution that takes the first multiarray as coordinates and the second one as confidence, but that doesn't seem to give the expected results: https://apple.github.io/turicreate/docs/userguide/object_detection/export-coreml.html
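For reference, this is roughly how I'm running the request and pulling the arrays out (simplified sketch; `yolov5s` is the Xcode-generated model class and `cgImage` is my input image, error handling omitted):

```swift
import Vision
import CoreML

// Build a Vision request around the exported CoreML model and
// collect the raw MLMultiArrays from the observations.
let model = try VNCoreMLModel(for: yolov5s().model)
let request = VNCoreMLRequest(model: model) { request, _ in
    guard let observations = request.results as? [VNCoreMLFeatureValueObservation] else { return }
    let arrays = observations.compactMap { $0.featureValue.multiArrayValue }
    for array in arrays {
        print(array.shape) // three arrays with different grid sizes
    }
}
request.imageCropAndScaleOption = .scaleFill

let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try handler.perform([request])
```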
Also, Apple's documentation says that the Vision output for object detection should be `VNRecognizedObjectObservation`, but there is no way to convert the yolov5 model to CoreML that would give us this type of output.

Any comment would be immensely helpful.
Thanks!