mAP always 0 #9045

Closed
sam-va opened this issue Aug 20, 2022 · 9 comments · Fixed by #9068

Labels
question Further information is requested

Comments

sam-va commented Aug 20, 2022

Search before asking

Question

I have been trying to custom-train a YOLOv5 model with a single label, and I have made the appropriate changes to the YAML file. However, the mAP value is still 0 no matter how many epochs I train for. I have been struggling with this issue for many days.
Can someone suggest a solution?
(Screenshot attached: Screenshot 2022-08-20 141130)

Additional

No response

@sam-va added the question (Further information is requested) label on Aug 20, 2022

github-actions bot commented Aug 20, 2022

👋 Hello @sam-va, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
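
To quickly verify the installation afterwards (a minimal check, not part of the official instructions; run it in the environment you will train in):

import torch
print(torch.__version__)          # expect >= 1.7
print(torch.cuda.is_available())  # True if CUDA and cuDNN are visible to PyTorch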

Environments

YOLOv5 may be run in any of several up-to-date verified environments with all dependencies (including CUDA/CUDNN, Python and PyTorch) preinstalled, such as the Colab and Kaggle notebooks or the ultralytics/yolov5 Docker image.

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@semihahishali

This usually means either that the number of labelled images is insufficient, or that you have chosen one of the larger YOLOv5 weight files (https://github.com/ultralytics/yolov5#pretrained-checkpoints) and should try the smaller ones. If you have enough images, wait a while and you should see these numbers increase.

@triple-Mu
Contributor

Maybe that's not the case.
I found that during training the Conv modules' BatchNorm layers have NaN running mean and variance.
If I do something like printing the BatchNorm's mean and variance, it runs correctly.
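
One way to check for this symptom (a sketch; in the issue this would be run on the model object inside train.py during training, here a Hub checkpoint stands in):

import torch
import torch.nn as nn

# Load a model to inspect (any nn.Module containing BatchNorm layers works)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Flag any BatchNorm layer whose tracked statistics contain NaN
for name, m in model.named_modules():
    if isinstance(m, nn.BatchNorm2d):
        if torch.isnan(m.running_mean).any() or torch.isnan(m.running_var).any():
            print(f'NaN running stats in {name}')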


glenn-jocher commented Aug 20, 2022

@sam-va @triple-Mu @semihahishali good news 😃! Your original zero-mAP issue may now be fixed ✅ in PR #9037.

To receive this update:

  • Git – run git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks on Colab or Kaggle
  • Docker – run sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@glenn-jocher
Member

@sam-va @triple-Mu @semihahishali good news 😃! Your original issue may now be fixed ✅ in PR #9068 by @0zppd. @pourmand1376 tracked down the problem using git bisect, running each training 10x since the bug was not reproducible on every training but showed up in roughly 1/3 of trainings.

The original issue was that I had replaced torch.zeros() with torch.empty() in some ops like warmup and profiling to try to get slight speed improvements, and one op in particular ran a torch.empty() tensor through the model while it was in .train() mode, causing the batchnorm layers to add those values to their tracked statistics. Since torch.empty() is not initialized, it can take on extremely large or small values, leading some batchnorm layers to randomly output NaN.
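
To illustrate the mechanism (a minimal, self-contained sketch of the failure mode, not the actual YOLOv5 code):

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)
bn.train()  # in train mode, running statistics are updated on every forward pass

x = torch.empty(1, 3, 64, 64)  # uninitialized memory, may contain extreme values
bn(x)                          # the garbage batch statistics are folded into running_mean / running_var

print(bn.running_mean, bn.running_var)  # can show huge, inf, or NaN values
# Using torch.zeros() for warmup tensors, or running warmup under model.eval(),
# keeps the tracked statistics clean.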

The PR has been extensively tested on 10x Colab trainings and all 10 came back good now:
(Screenshot attached: Screenshot 2022-08-21 at 15 40 00)

To receive this update:

  • Git – run git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True) (see the sketch after this list)
  • Notebooks – view the updated notebooks on Colab or Kaggle
  • Docker – run sudo docker pull ultralytics/yolov5:latest to update your image
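
For example, after force-reloading you can run a quick smoke test (a sketch; the image URL is the example image used in the YOLOv5 README):

import torch

# Force a fresh download of the latest YOLOv5 code, bypassing the local Hub cache
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)

# Quick inference check on the standard example image
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()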

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!


mr-mainak commented Sep 25, 2023

Having the same problem. I installed CUDA 11.3 with cuDNN 8.2.0 and PyTorch 1.12.1+cu113 on Ubuntu 20.04. Any suggestion on what might be the reason?

@glenn-jocher
Member

@mr-mainak it seems like you have the correct versions of CUDA, cuDNN, PyTorch, and Ubuntu installed. However, there could be other factors contributing to the issue you're experiencing.

To help troubleshoot, please provide more information about the specific problem you're facing. Did you encounter any error messages or unexpected behavior? It would be helpful if you could share the relevant portion of your code or any error logs.

In the meantime, you can try the following troubleshooting steps:

  1. Double-check that all dependencies and packages are installed correctly by following the installation instructions in the YOLOv5 repository.
  2. Ensure that you have the required dataset and labeled images in the correct format for training.
  3. Check if any specific modifications or customizations you made to the code might be causing the issue.
  4. Consider trying different YOLOv5 model architectures or pretrained weights to see if the issue persists.

By providing more details, we'll be able to assist you further in resolving the problem.
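
For step 4, for example, you can point training at a smaller pretrained checkpoint when launching (a sketch; custom.yaml, the image size, batch size, and epoch count are placeholders for your own settings):

python train.py --data custom.yaml --weights yolov5s.pt --img 640 --epochs 100 --batch 16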

@mr-mainak

@glenn-jocher I just upgraded PyTorch to version 2.0 and everything is working fine. However, I am still not sure why there was a problem with the previous PyTorch version.

@glenn-jocher
Member

@mr-mainak glad to hear that upgrading to PyTorch 2.0 resolved the issue for you! It's possible that there were compatibility or bug-related issues with the previous version of PyTorch that were causing the problem.

While it's difficult to pinpoint the exact cause without more information, it's not uncommon for different versions of libraries and frameworks to have compatibility issues or bug fixes that can affect the functionality of certain applications or models.

If you encounter any further issues or have any other questions, feel free to reach out. We're here to help!
