
losses are nan #4084

Closed
PoYuHan opened this issue Jul 20, 2021 · 16 comments
Labels
bug (Something isn't working) · Stale

Comments

@PoYuHan

PoYuHan commented Jul 20, 2021

Hi, recently I trained on my own data with the new train.py, and all the losses I got were nan; the confusion matrix shows every image predicted as FN. I also tried the coco128 dataset and the same thing happens. Training on another PC with the CPU seems fine, and the older version of train.py training on the GPU works fine too.

OS: Windows 10
torch: 1.9.0 (CUDA 11.1)

And sorry for my broken English...

PoYuHan added the bug label Jul 20, 2021
@github-actions
Contributor

github-actions bot commented Jul 20, 2021

👋 Hello @PoYuHan, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt
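
Once installed, a quick sanity check (a minimal sketch, assuming a CUDA-enabled PyTorch build; not part of the official template) confirms the GPU is visible:

import torch

print(torch.__version__)                  # e.g. 1.9.0+cu111
print(torch.cuda.is_available())          # should print True for GPU training
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the detected GPU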

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

glenn-jocher commented Jul 20, 2021

@PoYuHan sometimes Windows and/or Anaconda environments suffer from CUDA problems.

It appears you may have environment problems. Please ensure you meet all dependency requirements if you are attempting to run YOLOv5 locally. If in doubt, create a new virtual Python 3.8 environment, clone the latest repo (code changes daily), and pip install -r requirements.txt again. We also highly recommend using one of the verified environments listed above.


@PoYuHan
Author

PoYuHan commented Jul 20, 2021

I used Anaconda to create a Python 3.8 env and followed all the steps in the install section of the Quick Start Examples just 3 hours ago, but it still does not work.

@glenn-jocher
Member

@PoYuHan we recommend pip envs and pip installs due to problems like this with Anaconda.

@PoYuHan
Author

PoYuHan commented Jul 21, 2021

I've tried using pipenv and installed all the packages from requirements.txt. The losses were calculated correctly, but it was using the CPU, so I manually reinstalled pytorch, torchvision, and torchaudio with the CUDA 11.1 versions following the PyTorch official website. After that it used the GPU, but all the losses turned to nan again.
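
One way to separate a bad CUDA build from bad data (a hedged diagnostic sketch, not part of YOLOv5 itself) is to run the same tiny forward/backward pass on CPU and GPU and compare:

import torch

# If the loss is finite on CPU but nan on CUDA with random data,
# the CUDA/PyTorch build is the problem, not the dataset.
devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
for device in devices:
    conv = torch.nn.Conv2d(3, 16, 3).to(device)
    x = torch.randn(8, 3, 64, 64, device=device)
    loss = conv(x).mean()
    loss.backward()
    print(device, "loss:", loss.item(), "nan:", torch.isnan(loss).item())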

@glenn-jocher
Member

glenn-jocher commented Jul 21, 2021

@PoYuHan then your data is causing training instabilities and you should check it for errors.

If you have a reproducible issue on a common dataset like COCO128 please raise a new issue. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

  • Minimal – Use as little code as possible that still produces the same problem
  • Complete – Provide all parts someone else needs to reproduce your problem in the question itself
  • Reproducible – Test the code you're about to provide to make sure it reproduces the problem

In addition to the above requirements, for Ultralytics to provide assistance your code should be:

  • Current – Verify that your code is up-to-date with current GitHub master, and if necessary git pull or git clone a new copy to ensure your problem has not already been resolved by previous commits.
  • Unmodified – Your problem must be reproducible without any modifications to the codebase in this repository. Ultralytics does not provide support for custom code ⚠️.

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

@PoYuHan
Author

PoYuHan commented Jul 22, 2021

@glenn-jocher I've tried uninstalling Anaconda and reinstalling Python, then used pipenv to train, and everything works great!! Thanks!!

@glenn-jocher
Member

@PoYuHan oh good! So the problem in the end was Anaconda then?

Maybe we should add a warning to avoid Anaconda installs.

@PoYuHan
Author

PoYuHan commented Jul 22, 2021

@glenn-jocher I think I was wrong... I just opened my pipenv and ran the training again, and the error still appears. I tried rebuilding the virtual env, but it did not help. I've also tried another PC, but the same error happened.

@PoYuHan
Author

PoYuHan commented Jul 23, 2021

@glenn-jocher I tried installing CUDA 10.2 and pytorch 1.9.0+cu102, and the problem is solved!! I also tried Anaconda and pipenv; both of them worked perfectly with the CUDA 10.2 version. Thank you so much for your help!!
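
For anyone comparing versions: PyTorch reports the CUDA toolkit it was built against, which can differ from the driver version shown by nvidia-smi (a small sketch using standard torch attributes):

import torch

print("torch:", torch.__version__)        # e.g. 1.9.0+cu102
print("CUDA build:", torch.version.cuda)  # toolkit this torch build was compiled with
print("cuDNN:", torch.backends.cudnn.version())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))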

@github-actions
Contributor

github-actions bot commented Aug 23, 2021

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

@mengmeng0406

@PoYuHan
I encountered the same problem. How did you solve it? Just by changing the CUDA version to 10.2? The CUDA version of my virtual environment is 10.2, but the system CUDA version is 11.x.

@shubhambagwari

I forked the latest version of YOLOv5, but I am still facing this error.
Earlier I trained this model with 2000 images and faced the same issue.

Roboflow was used to label the custom dataset.
I have read the earlier discussions, but they did not help.

[screenshot attached]

@glenn-jocher
Member

@shubhambagwari train 300 epochs.

@rtoddsullivan

I'm running on WSL2 using a Python env configured with requirements.txt. I find that when I use a large batch size, train_loss and test_loss are nan.
For me a batch size of 16 works, but 128 shows nan.
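
If a large effective batch is the goal, gradient accumulation reaches it with small micro-batches (a generic sketch with hypothetical stand-ins for the model, data, and optimizer; not YOLOv5's own training loop):

import torch
from torch import nn

# Hypothetical stand-ins for the real model, data, and optimizer.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
loader = [(torch.randn(16, 3, 64, 64), torch.randint(0, 10, (16,))) for _ in range(16)]

accumulate = 128 // 16  # 8 micro-batches of 16 -> effective batch size 128
optimizer.zero_grad()
for i, (imgs, targets) in enumerate(loader):
    loss = criterion(model(imgs), targets) / accumulate  # scale so gradients average
    loss.backward()                                      # gradients accumulate across micro-batches
    if (i + 1) % accumulate == 0:
        optimizer.step()                                 # one step per effective batch
        optimizer.zero_grad()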

@glenn-jocher
Member

@rtoddsullivan This issue might be related to numerical instabilities when using large batch sizes. Using mixed precision training (torch.cuda.amp) can sometimes alleviate this problem without sacrificing performance.

You can also try reducing the learning rate and adjusting the network architecture, if possible, to stabilize training with larger batch sizes.
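
For reference, mixed precision in PyTorch uses torch.cuda.amp; its GradScaler skips any optimizer step whose gradients contain inf/nan rather than applying them, which is one reason it can stabilize training (a generic sketch with hypothetical stand-ins, not YOLOv5's exact loop):

import torch
from torch import nn

# Hypothetical stand-ins; the point is the autocast/GradScaler pattern.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(32, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for _ in range(4):
    x, y = torch.randn(128, 32, device=device), torch.randn(128, 1, device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)  # the step is skipped if scaled gradients contain inf/nan
    scaler.update()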

Thank you for your contribution!
