
YOLOv5 val.py report different result by different run #11868

Closed
mahilaMoghadami opened this issue Jul 16, 2023 · 10 comments
Labels
question Further information is requested Stale

Comments

@mahilaMoghadami

Search before asking

Question

Hello
I get different results when running val.py with the same config (different output on each run).
This is my config:
python val.py --data 'data/VisDrone.yaml' --weights '/yolov5/runs/train/FineTune50/weights/best.pt' --img 1088 --conf-thres 0.3 --task test --save-json

Also, as I checked, I noticed that some images were missing from the COCO prediction results (the best-prediction.json file).
I found this by printing the number of unique image_ids in the predicted JSON output file, which was not equal to the number of images used for evaluation.
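The image_id check described here can be scripted. A minimal sketch, assuming the predictions file follows YOLOv5's --save-json output format (a flat list of detection dicts, each carrying an image_id; the function name is ours):

```python
import json

def count_prediction_images(pred_path):
    """Return (total detections, distinct image_ids) for a COCO-format
    predictions file, i.e. a flat JSON list of detection dicts."""
    with open(pred_path) as f:
        preds = json.load(f)
    return len(preds), len({p["image_id"] for p in preds})
```

Because the file holds one entry per detection, not per image, an image with zero detections above --conf-thres produces no entries at all, so the distinct-id count can legitimately fall below the number of evaluated images.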

I'd appreciate help understanding this.
Thank you

Additional

No response

@mahilaMoghadami mahilaMoghadami added the question Further information is requested label Jul 16, 2023
@glenn-jocher
Member

@mahilaMoghadami hi,

The YOLOv5 val.py script can produce slightly different results with each run due to various factors such as random seed initialization and GPU memory allocation. However, the differences should be minimal.
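If run-to-run variation matters, pinning the common seed sources before evaluation narrows it. A hedged sketch, not YOLOv5's own code; note that some CUDA kernels remain nondeterministic even with these flags set, so small metric differences can persist:

```python
import random

import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    """Pin the usual sources of randomness for a PyTorch eval run.

    Caveat: a few CUDA ops stay nondeterministic regardless of these
    settings, so exact bit-for-bit reproducibility is not guaranteed.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)       # no-op on CPU-only machines
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```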

Regarding the missing images in the prediction results, it is possible that those images did not meet the confidence threshold (--conf-thres) set in your command. If an object is detected with a confidence score below the threshold, it will not be included in the prediction results. You can try adjusting the threshold value to include more detected objects.

If the issue persists, please provide more details such as the size of your dataset, the number of missing images, and any additional information that might help us reproduce the problem.

Please keep in mind that the YOLOv5 repository is a community-driven project, and your feedback and contributions are valuable for improving its functionality.

Thank you for using YOLOv5.

Best regards,
Glenn Jocher

@mahilaMoghadami
Author

Thank you @glenn-jocher

Even with different confidence thresholds, the problem persists. I'm using the VisDrone dataset. The number of images in the val set for evaluation is 548, and they load correctly. But when I checked the COCO prediction results JSON and printed the number of unique image_ids in the file, I found a gap between the number of loaded images and the number of unique image_ids.
For example, with conf_thres 0.01 the number of unique image_ids is 462, which means almost 90 images do not appear in the COCO prediction results file.
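One way to see exactly which evaluation images never reach the JSON is to diff image filename stems against the recorded image_ids. A sketch under the assumption that --save-json keys detections by the image filename stem; the function name, directory argument, and extension default are placeholders:

```python
import json
from pathlib import Path

def images_missing_from_json(pred_path, image_dir, ext=".jpg"):
    """Return stems of images in image_dir with no detection entry in
    the predictions JSON. An image with zero detections above
    --conf-thres never appears in the file at all."""
    with open(pred_path) as f:
        pred_ids = {str(p["image_id"]) for p in json.load(f)}
    all_stems = {p.stem for p in Path(image_dir).glob(f"*{ext}")}
    return sorted(all_stems - pred_ids)
```

Inspecting the returned stems (e.g. checking whether those images genuinely contain no objects above the threshold) separates "no detections" from "images silently dropped".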

Thank you for your response.

@glenn-jocher
Member

Hi @mahilaMoghadami,

I apologize for the confusion. If you have set different confidence thresholds but the issue persists, it is indeed unusual that there is a gap between the number of loaded images and the number of unique image IDs in the JSON file.

In order to investigate further, could you please provide more information about how you generated your JSON coco prediction results? Specifically, the steps you followed and any custom code or modifications you made to the YOLOv5 codebase.

Additionally, please ensure that your dataset is properly formatted and that the images missing in the prediction results are present in the evaluation set.

Thank you for bringing this to our attention. We will do our best to assist you further once we have more information.

Kind regards,
Glenn Jocher

@mahilaMoghadami
Author

mahilaMoghadami commented Jul 17, 2023 via email

@glenn-jocher
Member

Hi @mahilaMoghadami,

Thank you for providing the additional information. Based on your response, it appears that you are using the --save-json argument in the val.py script to generate the COCO prediction results.

When using this argument, the script should save the predictions in COCO format for evaluation purposes. However, if there is a discrepancy between the number of loaded images and the number of unique image IDs in the generated JSON file, it could indicate an issue with the script.

To further investigate this, it would be helpful if you could provide the exact command you are running for the validation, along with any relevant details such as the version of YOLOv5 you are using and the specific dataset or dataset format you are working with.

By providing this information, we will be able to better assist you in identifying the cause of the missing images in the COCO prediction results.

Thank you for your patience, and we look forward to helping you resolve this issue.

Kind regards,
Glenn Jocher

@mahilaMoghadami
Author

Thanks for your response.
This is the exact command I use for validation:
python val.py --data 'data/VisDrone.yaml' --weights '/media/2TB_1/moghadami/YOLO/yolov5/runs/train/FineTune-1088Patch-50/weights/best.pt' --img 1088 --conf-thres 0.1 --task test --save-json

I use YOLOv5s, and VisDrone is my dataset.

In addition, as I checked, even when I use --save-txt to save prediction results, the number of txt files doesn't match the number of images:
548 images are loaded but only 490 txt files are generated.
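The same stem-diff idea applies to --save-txt output: val.py writes one label file per image, but only when at least one detection survives the confidence threshold and NMS. A sketch with placeholder directory arguments:

```python
from pathlib import Path

def images_without_txt(image_dir, label_dir, exts=(".jpg", ".jpeg", ".png")):
    """List stems of images for which no --save-txt label file exists.
    A missing .txt usually means zero detections for that image, not a
    lost image."""
    imgs = {p.stem for p in Path(image_dir).iterdir()
            if p.suffix.lower() in exts}
    labels = {p.stem for p in Path(label_dir).glob("*.txt")}
    return sorted(imgs - labels)
```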

@glenn-jocher
Member

@mahilaMoghadami thanks for providing the specific command you're using for validation and the details of your dataset.

The fact that the number of generated txt files doesn't match the number of loaded images suggests a potential issue with the val.py script or the dataset itself.

To investigate further, could you please clarify if you have performed any preprocessing steps on your dataset, such as filtering or removing certain images? Additionally, please ensure that your dataset is correctly structured, following the required format for YOLOv5.

If you haven't made any modifications to the YOLOv5 codebase or preprocessing steps, it may be helpful to double-check your dataset to ensure it contains the expected number of images and corresponding annotation files.

If the issue persists, please provide more details about the VisDrone dataset, such as its composition and any preprocessing steps that you have applied.

Thank you for your patience, and we'll do our best to assist you in resolving this discrepancy.

Regards,
Glenn Jocher

@mahilaMoghadami
Author

mahilaMoghadami commented Jul 18, 2023

Hello
I did no additional preprocessing, and my dataset is correctly structured.
I also tested the predicted output and found that the number of generated txt files is different on each val.py run, and it doesn't match the number of images either.

This is my command for generating the predicted txt files for evaluation:
python val.py --data 'data/VisDrone.yaml' --weights '/media/2TB_1/moghadami/YOLO/yolov5/runs/train/FineTune-1088Patch-50/weights/best.pt' --img 1088 --conf-thres 0.1 --task val --save-txt

As the attached screenshots show, 548 images are loaded correctly, but the number of generated txt files is 490, 463, 461, ... on each run, which is far from the 548 loaded images.

Thanks

@glenn-jocher
Member

@mahilaMoghadami hello,

Thank you for providing the additional information regarding your dataset and the command you used to generate the predicted output.

Based on the images you shared, it appears that the number of generated text files does not match the number of loaded images. This discrepancy could be a result of an issue within the val.py script.

To further investigate and resolve this issue, I recommend opening a new GitHub issue on the YOLOv5 repository. Please include all relevant details, such as the specific command you used, the version of YOLOv5 you are using, and any other relevant information about your dataset.

By opening a new issue, the YOLOv5 community and developers can help assess and address the problem you are facing.

Thank you for bringing this to our attention, and we appreciate your patience.

Regards,
Glenn Jocher

@github-actions
Contributor

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added the Stale label Aug 18, 2023
@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Aug 28, 2023