val.py script #11879
@mahilaMoghadami hi, thank you for reaching out and providing the details of your issue. The mismatch between the number of loaded validation images and the number of generated prediction files can have several causes. One possibility is that some images in your validation set have no detections that meet the confidence threshold you specified (--conf-thres 0.1), so no prediction files are written for those images.

To investigate further, you can vary the confidence threshold and see whether it changes the number of prediction files generated. You can also manually check a few images from your validation set to verify that relevant objects are present.

If you believe this issue is unrelated to the confidence threshold or to the absence of objects in the images, please provide more information or steps to reproduce it so that we can assist you better. Also, please make sure you are using the latest version of YOLOv5 and have checked the YOLOv5 repository issues and discussions for related reports.

Let us know if you have any further questions or concerns. We're here to help! Best regards,
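As a quick sanity check, a small script along these lines can list which validation images produced no `--save-txt` prediction file. The paths in the commented example are hypothetical; point them at your own dataset and `runs/val/exp/labels` directory:

```python
from pathlib import Path

def find_unpredicted(images_dir, labels_dir, pattern="*.jpg"):
    """Return image stems that have no matching prediction .txt file."""
    image_stems = {p.stem for p in Path(images_dir).glob(pattern)}
    label_stems = {p.stem for p in Path(labels_dir).glob("*.txt")}
    return sorted(image_stems - label_stems)

# Hypothetical paths -- adjust to your own dataset and run directory:
# missing = find_unpredicted("VisDrone2019-DET-test-dev/images", "runs/val/exp/labels")
# print(len(missing), "images had no detections above --conf-thres")
```

Images listed by this helper either contain no objects above the threshold or were skipped for another reason, which narrows down where the mismatch comes from.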
Hello, I'm fairly confident this problem is not related to the confidence threshold. I tested different values, and I'm sure every image contains more than 20 objects. How reliable are the mAP results reported by the val.py script?
@mahilaMoghadami hello, thank you for your follow-up. The mAP (mean average precision) results reported by val.py are reliable and accurate: the script uses the predicted bounding boxes to compute precision and recall for each class, integrates these into a per-class average precision, and then averages over all classes.

That said, mAP is specific to the dataset and evaluation criteria used; different datasets and evaluation setups can produce different results, so make sure your dataset and evaluation methodology match your task and requirements. Model performance is also influenced by factors such as training-data quality, model architecture, hyperparameter settings, and the specific objects in your dataset, so it's good practice to evaluate extensively to understand the strengths and limitations of your model.

If you have any further questions or concerns, please feel free to ask. We're here to assist you. Kind regards,
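For reference, the per-class AP integration works roughly like this. This is a simplified sketch of the monotone precision-envelope method in the style YOLOv5 uses, not the exact `utils/metrics.py` code:

```python
import numpy as np

def compute_ap(recall, precision):
    """Area under the precision-recall curve for one class.

    recall, precision: arrays ordered by descending detection confidence.
    """
    # Pad the curve so it spans recall 0..1
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([1.0], precision, [0.0]))
    # Replace each precision with the max precision at any higher recall
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]
    # Sum rectangle areas wherever recall increases
    i = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]))

# A detector whose predictions are all correct scores AP = 1.0
print(compute_ap(np.array([0.5, 1.0]), np.array([1.0, 1.0])))  # 1.0
```

Because AP is an area under this curve, images with no predictions above the threshold simply contribute missed recall; they do not corrupt the metric.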
Thank you, I appreciate your help.
@mahilaMoghadami hi, thank you for raising this issue. The mismatch between the number of loaded images and the number of generated txt and JSON prediction files can have several causes. One possibility is that some of the images in your set have no objects meeting the specified confidence threshold, so no prediction files are written for them. To investigate further, try adjusting the confidence threshold and manually checking a few images for relevant objects. Additionally, if you are using an older version of YOLOv5, it's always a good idea to update to the latest codebase. If you require further assistance, please provide more details or steps to reproduce the issue so that we can better understand and help you resolve it. Best regards,
👋 Hello there! This is a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry: you can always reopen it if needed. If you still have any questions or concerns, please let us know how we can help.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
Hello,
I'm using YOLOv5s and the VisDrone dataset.
Checking the JSON and txt prediction results produced by the val.py script, I found that the number of txt files, and likewise the number of image_IDs in the predicted JSON file, does not match the number of val-set images that were loaded.
This is the exact command I'm using for validation:
python val.py --data 'data/VisDrone.yaml' --weights '/media/2TB_1/moghadami/YOLO/yolov5/runs/train/FineTune-1088Patch-50/weights/best.pt' --img 1088 --conf-thres 0.1 --task test --save-json --save-txt
I use the --save-txt and --save-json arguments to generate the prediction results.
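One way I could cross-check the two outputs is to count the distinct `image_id`s in the predictions JSON against the txt files. The run directory and JSON filename in the commented example are assumptions (val.py names the JSON after the weights file), so they would need to be adjusted:

```python
import json
from pathlib import Path

def count_outputs(json_path, labels_dir):
    """Count distinct image_ids in a COCO-style predictions JSON
    and the number of per-image txt label files."""
    preds = json.loads(Path(json_path).read_text())
    image_ids = {p["image_id"] for p in preds}
    txt_files = list(Path(labels_dir).glob("*.txt"))
    return len(image_ids), len(txt_files)

# Hypothetical run directory -- adjust to your own:
# n_json, n_txt = count_outputs("runs/val/exp/best_predictions.json",
#                               "runs/val/exp/labels")
# print(n_json, "image_ids in JSON,", n_txt, "txt files")
```

If both counts agree with each other but not with the loaded-image count, the gap is images with zero detections above the threshold rather than an export bug.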
This is the number of val-set images that were loaded:
(screenshots: val.py log output showing 548 val-set images loaded)
And the number of generated txt files is 490, 463, 461, ... across runs, which is far from the 548 loaded images:
(screenshots: generated txt prediction files, with counts varying across runs)
Thanks
Additional
No response