Is the validation set used directly during training? #10822

Closed · 1 task done
ervgan opened this issue Jan 24, 2023 · 9 comments
Labels: question (Further information is requested), Stale

Comments

ervgan commented Jan 24, 2023

Search before asking

Question

Hello,

I have noticed that when training on my dataset with train.py, the code also scans my val dataset:
val: Scanning /content/drive/MyDrive/Colab_Notebooks/Datasets/labels/val.cache... 61 images, 0 backgrounds, 0 corrupt: 100% 61/61 [00:00<?, ?it/s]

So is the validation set already used during training to fine-tune hyperparameters? If so, what's the point of val.py?

Many thanks!

Additional

No response

ervgan added the question label Jan 24, 2023

github-actions bot commented Jan 24, 2023

👋 Hello @ervgan, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

  • Google Colab and Kaggle notebooks with free GPU
  • Google Cloud Deep Learning VM (see GCP Quickstart Guide)
  • Amazon Deep Learning AMI (see AWS Quickstart Guide)
  • Docker Image (see Docker Quickstart Guide)

Status: YOLOv5 CI badge

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

JustasBart commented

Hi, I believe it calls val.py at the end of every epoch for you, but you can also do it manually later on if you want to get the stats of your model.

And also yes, of course it uses the validation set directly during training!

Good luck! 🚀

github-actions bot commented Feb 27, 2023

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Access additional YOLOv5 🚀 resources:

Access additional Ultralytics ⚡ resources:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

github-actions bot added the Stale label Feb 27, 2023
github-actions bot closed this as not planned Mar 10, 2023
EdjeElectronics commented

Per issue #6023, the validation set is not actually used to adjust hyperparameters during training. It is only used to compute metrics at the end of each epoch. The validation set does not have any impact on the model's weights.

glenn-jocher (Member) commented

@EdjeElectronics thanks for your keen observation! You're correct; the validation set is not utilized to fine-tune hyperparameters during training. It's primarily employed to calculate metrics at the end of each epoch, with no direct impact on the model's weights. If you have any further questions, feel free to ask!

tomhoq commented Jun 29, 2024

@glenn-jocher Wouldn't it be better to have a val and a test set instead of just a val set? Isn't the entire purpose of the validation set to prevent overfitting to the training data by updating the weights?

glenn-jocher (Member) commented

Hi @tomhoq,

Thank you for your insightful question! Let's clarify the roles of the validation and test sets in the context of training machine learning models, specifically with YOLOv5.

Validation Set vs. Test Set

  1. Validation Set: This set is used during training to monitor the model's performance and compute metrics at the end of each epoch. It helps in tuning hyperparameters and provides an early indication of how well the model is generalizing to unseen data. However, it does not directly influence the model's weights.

  2. Test Set: This set is used after the model has been fully trained to evaluate its performance on completely unseen data. It provides an unbiased evaluation of the final model's performance.

Why Both Sets Are Important

  • Preventing Overfitting: The validation set helps in detecting overfitting during training by providing a checkpoint to evaluate the model's performance. However, it does not update the weights directly. Instead, it helps in making decisions such as early stopping or hyperparameter adjustments (see the sketch after this list).

  • Final Evaluation: The test set is crucial for assessing the model's true performance. Since it is not used during training or validation, it offers a final, unbiased evaluation of the model's ability to generalize.
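
To make the distinction concrete, below is a minimal PyTorch sketch (not YOLOv5's actual train.py; all names are illustrative) of a training loop in which gradients flow only from training batches, while the validation set only produces a metric used for checkpoint selection and early stopping:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Toy stand-ins for real train/val dataloaders
train_dl = DataLoader(TensorDataset(torch.randn(64, 10), torch.randn(64, 1)), batch_size=16)
val_dl = DataLoader(TensorDataset(torch.randn(16, 10), torch.randn(16, 1)), batch_size=16)

best_val, patience, bad_epochs = float("inf"), 3, 0
for epoch in range(50):
    model.train()
    for x, y in train_dl:  # weights are updated ONLY here
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    model.eval()
    with torch.no_grad():  # no gradients from validation data
        val_loss = sum(loss_fn(model(x), y).item() for x, y in val_dl) / len(val_dl)

    if val_loss < best_val:  # the validation metric only guides decisions...
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "best.pt")  # ...like checkpoint selection
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # ...and early stopping
            break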

Implementation in YOLOv5

In YOLOv5, the validation set is used to compute metrics like mAP (mean Average Precision) at the end of each epoch, which helps in monitoring the training process. If you want to include a test set for final evaluation, you can split your dataset accordingly and use the val.py script to evaluate your model on the test set after training.

Here's a brief example of how you might structure your dataset:

dataset/
├── train/
│   ├── images/
│   └── labels/
├── val/
│   ├── images/
│   └── labels/
└── test/
    ├── images/
    └── labels/
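
For the --task test flag to work, your dataset.yaml must declare a test split. Here is a minimal sketch of such a file (the root path, class count, and class names are placeholders):

path: dataset                 # dataset root directory
train: train/images           # training images, relative to path
val: val/images               # validation images
test: test/images             # test images, read by val.py --task test
nc: 2                         # number of classes (placeholder)
names: ['class0', 'class1']   # class names (placeholders)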

After training, you can run:

python val.py --data dataset.yaml --weights best.pt --task test

This will evaluate your trained model on the test set.

Next Steps

If you haven't already, please ensure you are using the latest versions of torch and YOLOv5 from our repository. If you encounter any issues, providing a minimum reproducible example will help us investigate further. You can find more details on creating one here.

Thank you for your engagement and contributions to the YOLO community! If you have any more questions or need further assistance, feel free to ask.

@tomhoq
Copy link

tomhoq commented Jun 29, 2024

@glenn-jocher thank you for the quick answer!

So the model won't learn from the validation set, correct? This means that, in theory, evaluating the performance with the validation set or with the test set, imagining they have similar distributions, would yield similar results?

glenn-jocher (Member) commented

Hi @tomhoq,

You're absolutely right! The model does not learn from the validation set. Instead, the validation set is used to monitor the model's performance during training by computing metrics at the end of each epoch. This helps in detecting issues like overfitting and underfitting without influencing the model's weights.

Validation Set vs. Test Set

  • Validation Set: Used during training to provide feedback on the model's performance. It helps in tuning hyperparameters and making decisions such as early stopping. However, it does not directly affect the training process or update the model's weights.

  • Test Set: Used after the model has been fully trained to evaluate its performance on completely unseen data. It provides an unbiased assessment of the model's generalization capabilities.

Similar Distributions

If the validation and test sets have similar distributions, you can expect the performance metrics (such as accuracy, precision, recall, and mAP) to be similar. However, it's always a good practice to evaluate your model on a separate test set to ensure that the performance metrics are not biased by any specific characteristics of the validation set.
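
One common way to obtain similarly distributed val and test sets is a seeded random split of the full image list, as in this minimal sketch (the directory name and split ratios are hypothetical):

import random
from pathlib import Path

images = sorted(Path("dataset/images").glob("*.jpg"))  # hypothetical image directory
random.seed(0)           # fixed seed so the split is reproducible
random.shuffle(images)

n = len(images)
train = images[:int(0.8 * n)]              # 80% train
val = images[int(0.8 * n):int(0.9 * n)]    # 10% val
test = images[int(0.9 * n):]               # 10% test
print(len(train), len(val), len(test))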

Example Workflow

  1. Training with Validation Set:

    python train.py --img 640 --epochs 50 --data dataset.yaml --weights yolov5s.pt
  2. Evaluating with Test Set:

    python val.py --data dataset.yaml --weights best.pt --task test

This way, you ensure that your model's performance is robust and generalizes well to new, unseen data.

Next Steps

If you encounter any issues or have further questions, please ensure you are using the latest versions of torch and YOLOv5 from our repository. If the issue persists, providing a minimum reproducible example will help us investigate further. You can find more details on creating one here.

Thank you for your engagement and contributions to the YOLO community! If you have any more questions or need further assistance, feel free to ask. 😊
