metric question #13162
Comments
@HuKai97 hello, thank you for reaching out and providing detailed information about the issue you're encountering. It appears that your validation metrics are not aligning with the COCO API metrics, which are showing almost-zero values. To better assist you, could you please provide a minimal reproducible example of your code? This will help us investigate the issue more effectively. You can refer to our guide on creating one here: Minimum Reproducible Example.

In the meantime, please ensure that you are using the latest versions of the repo and its dependencies:

    git pull                         # update YOLOv5 repo
    pip install -r requirements.txt  # update dependencies

Additionally, the warning message in your log (WARNING confidence threshold 0.1 > 0.001 produces invalid results) indicates that your conf_thres is set too high for mAP evaluation. Here's a quick example of how you might adjust the confidence threshold:

    python val.py --weights yolov5s.pt --data coco.yaml --img 640 --conf-thres 0.001 --iou-thres 0.45 --max-det 300 --device 0 --save-json

This should help ensure that the confidence threshold is not impacting your results. Please let us know if the issue persists after trying these steps, and don't hesitate to share the minimal reproducible example for further investigation.
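The reason val.py warns about a high threshold can be illustrated with a small sketch (toy numbers, not YOLOv5 internals): mAP integrates precision over the full recall range, so any true positive dropped by a high conf_thres permanently caps the recall the COCO evaluator can reach.

```python
def max_recall(detections, conf_thres, total_gt):
    """Best achievable recall after dropping detections below conf_thres.

    detections: list of (confidence, is_true_positive) tuples.
    total_gt:   number of ground-truth boxes.
    """
    kept_tp = sum(hit for score, hit in detections if score >= conf_thres)
    return kept_tp / total_gt

# 8 synthetic detections over 6 ground-truth boxes
dets = [(0.90, 1), (0.60, 1), (0.30, 1), (0.12, 1),
        (0.05, 1), (0.02, 1), (0.40, 0), (0.01, 0)]

print(max_recall(dets, 0.001, 6))  # all 6 TPs survive -> 1.0
print(max_recall(dets, 0.1, 6))    # only 4 TPs survive -> 0.666...
```

With conf_thres=0.001 every true positive is still available to the evaluator; with conf_thres=0.1 a third of them are gone before scoring even starts, which is why validation should always use a very low threshold.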
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the links below.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcome! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
YOLOv5 Component
No response
Bug
(yolov5) F:\Tensorrt\yolov5>python val.py
val: data=data\coco.yaml, weights=yolov5s.pt, batch_size=4, imgsz=640, conf_thres=0.1, iou_thres=0.45, max_det=300, task=val, device=0, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs\val, name=exp, exist_ok=False, half=False, dnn=False
WARNING confidence threshold 0.1 > 0.001 produces invalid results
YOLOv5 v7.0-334-g100a423b Python-3.10.13 torch-2.1.0+cu118 CUDA:0 (NVIDIA GeForce RTX 3060 Ti, 8192MiB)
Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
val: Scanning F:\LSR\datasets\coco\labels\val2017... 4952 images, 48 backgrounds, 0 corrupt: 100%|██████████| 5000/5000 [00:09<00:00, 503.19it/s]
val: WARNING Cache directory F:\LSR\datasets\coco\labels is not writeable: [WinError 183] : 'F:\LSR\datasets\coco\labels\val2017.cache.npy' -> 'F:\LSR\datasets\coco\labels\val2017.cache'
Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 1250/1250 [00:49<00:00, 25.46it/s]
all 5000 36335 0.661 0.525 0.597 0.412
Speed: 0.1ms pre-process, 2.9ms inference, 1.0ms NMS per image at shape (4, 3, 640, 640)
Evaluating pycocotools mAP... saving runs\val\exp5\yolov5s_predictions.json...
loading annotations into memory...
Done (t=0.53s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.41s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=6.09s).
Accumulating evaluation results...
DONE (t=1.55s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.002
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.007
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.005
Results saved to runs\val\exp5
With save_json=True, the COCO API metrics are almost zero, but the val metrics look correct. Why?
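One common cause of near-zero pycocotools scores alongside healthy built-in val metrics is an image_id mismatch between the saved predictions JSON and instances_val2017.json. YOLOv5's val.py derives each prediction's image_id from the image file stem, roughly as in the sketch below (a simplified reconstruction, not the exact source): if your val images are not the zero-padded numeric COCO filenames, the ids become strings that match no annotation, and every detection scores as a miss.

```python
from pathlib import Path

def coco_image_id(path):
    """Derive a COCO-style image_id from an image filename.

    Simplified reconstruction of the mapping used when saving
    predictions with --save-json: purely numeric stems become ints
    so they line up with the integer ids in instances_val2017.json.
    """
    stem = Path(path).stem
    return int(stem) if stem.isnumeric() else stem

# Official COCO val2017 filenames are zero-padded numbers:
print(coco_image_id("000000000139.jpg"))  # -> 139 (matches annotations)
# A renamed or custom file yields a string id that matches nothing:
print(coco_image_id("frame_0001.jpg"))    # -> 'frame_0001' (scores zero)
```

If your dataset was renamed or re-exported, checking that the image_id values in runs\val\exp5\yolov5s_predictions.json appear among the ids in the annotation file is a quick way to rule this in or out.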
Environment
No response
Minimal Reproducible Example
No response
Additional
No response
Are you willing to submit a PR?