
test.py why map of val_data is better than test_data? #2352

Closed
ggyybb opened this issue Mar 3, 2021 · 3 comments
Labels
question Further information is requested

Comments


ggyybb commented Mar 3, 2021

❔Question

test.py why map of val_data is better than test_data?

Additional context

Hello, when I train on my own data, I find that the mAP on the val set is always better than on the test set. Why?
Is the val set used in training?
Should I use the test set to evaluate my model, the val set, or both?
I trained two models: one reaches 94.3 mAP on val and 93.0 on test; the other reaches 93.4 mAP on val and 93.1 on test.
The former has more parameters (like yolov5-p2.yaml); the latter has fewer (like the original yolov5.yaml).
Should I train the larger model for more epochs, or just use the latter model directly?

@ggyybb ggyybb added the question Further information is requested label Mar 3, 2021

ggyybb commented Mar 3, 2021

confusion_matrix
The top-right (car / background FN) box is 0.87. I think this means those predicted cars are actually background, am I right?
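For intuition on where a value like 0.87 can come from, here is a minimal sketch (not YOLOv5's actual `ConfusionMatrix` class, whose axis convention may differ) of a column-normalized confusion matrix, where "background" stands in for detections with no matching ground-truth box:

```python
# Hypothetical sketch: build a confusion matrix from (true, predicted)
# label pairs and normalize each column so it sums to 1 per true class.
from collections import Counter

CLASSES = ["car", "background"]

def confusion_matrix(pairs):
    counts = Counter(pairs)  # keys are (true_label, predicted_label) tuples
    # rows = predicted class, columns = true class
    mat = [[counts[(t, p)] for t in CLASSES] for p in CLASSES]
    # normalize each column (per true class) so it sums to 1
    for col in range(len(CLASSES)):
        total = sum(row[col] for row in mat) or 1
        for row in mat:
            row[col] = row[col] / total
    return mat

# Made-up data: 87 background regions predicted as "car",
# 13 correctly left as background.
pairs = [("background", "car")] * 87 + [("background", "background")] * 13
m = confusion_matrix(pairs)
print(m[0][1])  # fraction of true-background predicted as car -> 0.87
```

Under this convention, a 0.87 in a (car, background) cell would indeed mean that 87% of the background instances in that column were predicted as cars, i.e. false positives.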

glenn-jocher commented:

@ggyybb you can report results from either test or val or both.

Clearly one set is going to perform better than the other; it would be very unlikely for you to get identical results on different datasets.

Neither the val nor the test set is used in training.
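To illustrate the usual split discipline (a hedged sketch, not code from this repo): the val set guides choices during development, while the test set is scored once at the end. Using the mAP numbers reported above:

```python
# Hypothetical stand-in: mAP values are stored per split rather than
# computed by running inference. Model selection uses ONLY the val score.
def evaluate(model, split):
    return model["map"][split]

candidates = [
    {"name": "yolov5-p2", "map": {"val": 94.3, "test": 93.0}},
    {"name": "yolov5",    "map": {"val": 93.4, "test": 93.1}},
]

# pick the best model by val mAP, then report its test mAP once
best = max(candidates, key=lambda m: evaluate(m, "val"))
print(best["name"])            # -> yolov5-p2 (higher val mAP)
print(evaluate(best, "test"))  # final reported number -> 93.0
```

Because val influences decisions like model choice and early stopping, a model tends to look slightly better on val than on a truly held-out test set, which matches what you observed.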


glenn-jocher commented Mar 3, 2021

@ggyybb about your confusion matrix, a recent PR, #2114, fixed an issue with it. You may want to git pull and retest to get an updated confusion matrix.

P2 models have a P2/4 output layer, which will be better for detecting very small objects when compared to the standard (P3-P5) models.

From your results it seems both of your models perform similarly well. You might also try a P6 model to compare, i.e. yolov5m6.yaml.

@ggyybb ggyybb closed this as completed Mar 4, 2021