val.py script #11879

Closed
mahilaMoghadami opened this issue Jul 18, 2023 · 6 comments
Labels: question (Further information is requested), Stale

Comments

@mahilaMoghadami

Search before asking

Question

Hello,
I'm using YOLOv5s with the VisDrone dataset.
After checking the JSON and txt prediction results produced by the val.py script, I noticed that the number of txt files, and likewise the number of image IDs in the predicted JSON file, does not match the number of images in the loaded val set.

This is the specific command I'm using for validation:
python val.py --data 'data/VisDrone.yaml' --weights '/media/2TB_1/moghadami/YOLO/yolov5/runs/train/FineTune-1088Patch-50/weights/best.pt' --img 1088 --conf-thres 0.1 --task test --save-json --save-txt

I use the --save-txt and --save-json arguments to generate the prediction results.

This is the number of val set images that is loaded (548):
[screenshot]

And the number of generated txt files is 490, 463, 461, ... in each run, which is far from the 548 loaded images:
[screenshots]
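To see exactly which val images end up with no prediction file, one can diff the image filenames against the generated txt filenames. This is a minimal sketch; the two directory paths below are placeholders and need to be pointed at the actual dataset and val.py output directory.

```python
from pathlib import Path

# Placeholder paths: replace with your dataset images dir and val.py labels dir.
image_dir = Path("VisDrone/VisDrone2019-DET-test-dev/images")
label_dir = Path("runs/val/exp/labels")

image_stems = {p.stem for p in image_dir.glob("*.jpg")}
label_stems = {p.stem for p in label_dir.glob("*.txt")}

# Images that were loaded but for which no prediction .txt was written.
missing = sorted(image_stems - label_stems)
print(f"{len(image_stems)} images, {len(label_stems)} txt files, "
      f"{len(missing)} images with no predictions")
for name in missing[:10]:
    print(name)
```

Inspecting a few of the listed images (and the model's raw detections on them) should show whether they genuinely produced no detections above the threshold.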

Thanks

Additional

No response

@mahilaMoghadami mahilaMoghadami added the question Further information is requested label Jul 18, 2023
@glenn-jocher
Member

@mahilaMoghadami hi,

Thank you for reaching out and providing the details of your issue.

The mismatch between the number of loaded validation images and the generated prediction files could be due to various reasons. One possibility is that some of the images in your validation set may not have any corresponding objects that meet the confidence threshold you specified (--conf-thres 0.1), resulting in no prediction files being generated for those images.

To investigate further, you can try increasing the confidence threshold to see if that affects the number of prediction files generated. Additionally, you can manually check a few images from your validation set to verify if there are any relevant objects present.
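The save behavior described above can be illustrated with a toy sketch (this is not YOLOv5's actual code; the dictionary and filenames are made up for the example): a prediction .txt is only written for an image when at least one detection survives the confidence filter, so images whose detections all fall below --conf-thres produce no file at all.

```python
# Hypothetical detections: each image maps to the confidences of its detections.
detections_per_image = {
    "img_0001.jpg": [0.92, 0.45, 0.12],  # at least one detection survives
    "img_0002.jpg": [0.08, 0.05],        # all below the threshold
    "img_0003.jpg": [],                  # no detections at all
}

conf_thres = 0.1  # matches the --conf-thres 0.1 used in the command above

# A file is written only if some detection meets the threshold.
files_written = [
    name for name, confs in detections_per_image.items()
    if any(c >= conf_thres for c in confs)
]

print(f"{len(detections_per_image)} images loaded, "
      f"{len(files_written)} prediction files written")
```

Under this behavior, a count mismatch between loaded images and written files is expected whenever some images yield no detections above the threshold.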

If you believe this issue is not related to the confidence threshold or the absence of objects in the images, please provide more information or steps to reproduce the issue so that we can assist you better.

Also, please make sure you are using the latest version of YOLOv5 and have checked for any updates or related issues in the YOLOv5 repository issues and discussions.

Let us know if you have any further questions or concerns. We're here to help!

Best regards,

@mahilaMoghadami
Author

Hello,
thanks for the response.

I'm fairly sure this problem is not related to the confidence threshold. I tested different values, and I'm certain that every image contains more than 20 objects.
I repeat that both the JSON file and the txt prediction files have this problem.

How reliable are the results (mAP) reported by running the val.py script?

@glenn-jocher
Member

@mahilaMoghadami hello,

Thank you for your follow-up.

The mAP (mean average precision) results reported by running the val.py script are reliable and accurate. The script uses the predicted bounding boxes to calculate the precision and recall metrics for each class, and then computes the average precision over all classes.

However, it's worth noting that the mAP results are specific to the dataset and the evaluation criteria used. Different datasets and evaluation setups may produce varying results. Therefore, it's important to ensure that you are using an appropriate dataset and evaluation methodology that aligns with your specific task and requirements.

Moreover, the performance of the model can also be influenced by factors such as training data quality, model architecture, hyperparameter settings, and the specific objects present in your dataset. Therefore, it's a good practice to perform extensive evaluation and analysis to understand the strengths and limitations of your model's performance.
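The precision/recall-to-AP step described above can be sketched in a few lines. This is a simplified toy version for intuition only (the function name and inputs are invented for the example): it takes a ranked list of detection confidences with true/false-positive flags, builds the precision envelope, and integrates it over recall. A real evaluator additionally performs IoU matching of predictions to ground truth and averages over classes and IoU thresholds.

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """Toy all-points AP from ranked detections (illustrative only)."""
    order = np.argsort(-np.asarray(scores))          # sort by confidence, desc
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / num_gt
    precision = cum_tp / (cum_tp + cum_fp)
    # Precision envelope: make precision non-increasing as recall grows.
    prec_env = np.maximum.accumulate(precision[::-1])[::-1]
    # Sum delta-recall * interpolated precision (prepend recall = 0).
    mrec = np.concatenate(([0.0], recall))
    return float(np.sum((mrec[1:] - mrec[:-1]) * prec_env))

scores = [0.9, 0.8, 0.7, 0.6]
is_tp  = [1,   0,   1,   1]   # whether each detection matched a ground truth
print(average_precision(scores, is_tp, num_gt=4))
```

Note that missing prediction entries for some images would only lower recall (those images' objects count as misses), so a file-count mismatch like the one reported would depress mAP rather than inflate it.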

If you have any further questions or concerns, please feel free to ask. We're here to assist you.

Kind regards,

@mahilaMoghadami
Author

Thank you.
The problem with the txt and JSON prediction results persists: the number of loaded images still does not match the number of predicted results.

I would appreciate your help.

@glenn-jocher
Member

@mahilaMoghadami hi,

Thank you for raising this issue.

The mismatch between the number of loaded images and the generated txt and JSON prediction files could be due to various reasons. One possibility is that some of the images in your set may not have any objects that meet the confidence threshold specified, resulting in no prediction files being generated for those images.

To investigate further, you can try adjusting the confidence threshold and manually checking a few images to verify if there are any relevant objects present.

Additionally, if you are using an older version of YOLOv5, it's always a good idea to update to the latest version and ensure you are using the most recent codebase.

If you require further assistance, please provide more details or steps to reproduce the issue so that we can better understand and help you resolve it.

Best regards,

@github-actions
Contributor

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added the Stale label Aug 20, 2023
@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Aug 31, 2023