-
@pasmai The validation set is used during training. The test set is not supposed to be used during training — only after training is complete, for an independent evaluation of your metrics. You can evaluate a trained model on your test set with `python val.py --weights path/to/best.pt --data path/to/data.yaml --task test`.
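For reference, a typical YOLOv5 `data.yaml` that defines all three splits looks roughly like the sketch below. The paths, class count, and class names are placeholders, not taken from this thread — adjust them to your own dataset layout:

```yaml
# Hypothetical dataset config; replace paths and classes with your own.
path: ../datasets/mydata   # dataset root directory (placeholder)
train: images/train        # read by train.py for training
val: images/val            # read by train.py for per-epoch validation
test: images/test          # only read when you run val.py --task test
nc: 2                      # number of classes (example)
names: ["cat", "dog"]      # class names (example)
```

With a `test:` entry present, the `val.py --task test` command above will evaluate against that split instead of `val`.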
-
I divided the data in my `.yaml` file into the three categories (train, val, and test).
Using `train.py` with the Weights & Biases overview, I can only see charts for training and validation. I would like to see what the precision/recall/loss on the test dataset is, to figure out whether I run into any issues with overfitting and how many epochs make sense to run.
So essentially I want these same graphs, run with the same model but on the test data instead of the training data:
Can anybody help me out on how to achieve this?
Cheers