
Precision and Recall are always 0 #9038

Closed
2 tasks done
mimi37 opened this issue Aug 19, 2022 · 9 comments · Fixed by #9068
Labels
bug Something isn't working

Comments

@mimi37

mimi37 commented Aug 19, 2022

Search before asking

  • I have searched the YOLOv5 issues and found no similar bug report.

YOLOv5 Component

Training

Bug

I ran the command from the website, "python train.py --data coco128.yaml --weights yolov5s.pt --img 640", but the precision and recall have always been 0. I am sure I pulled the newest version, so my train.py includes the latest changes.
[Screenshot 2022-08-19 23:34:41]

Environment

Exactly the same as requirements.txt.

Minimal Reproducible Example

no

Additional

no

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@mimi37 added the bug label Aug 19, 2022
@github-actions
Contributor

github-actions bot commented Aug 19, 2022

👋 Hello @mimi37, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@li221199

I have the same problem; the precision and recall have always been 0.
What is the cause, and how can it be solved?

@elvecinodelquinto

Same problem here. Seems it is not calculating anything for the validation subset.

@triple-Mu
Contributor

It seems that something is wrong with batchnorm (BN) during training.
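One quick way to check for this (a sketch under assumptions: the path is hypothetical, the 'model' key follows YOLOv5's checkpoint convention, and it should be run from inside the yolov5/ directory so the pickled model classes resolve) is to scan a checkpoint's batchnorm buffers for NaN:

import torch

ckpt = torch.load('runs/train/exp/weights/last.pt', map_location='cpu')  # hypothetical path
model = ckpt['model'].float()  # YOLOv5 checkpoints store the model object under 'model'

# BatchNorm keeps running_mean/running_var buffers; if garbage values were
# ever folded into them during training, they may contain NaN.
for name, m in model.named_modules():
    if isinstance(m, torch.nn.BatchNorm2d):
        if torch.isnan(m.running_mean).any() or torch.isnan(m.running_var).any():
            print(f'NaN running stats in {name}')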

@robotaiguy

I've been chasing this problem all day. I did a git pull today because it said I was behind, and since then P, R, and mAP have shown either 0 or NaN, depending on the chart, no matter what I do. I've removed my virtual environment, reinstalled, and tried different versions of PyTorch, etc.
[Screenshot from 2022-08-20 02:38:17]

@Th1nhNg0

I got the same problem. I have been training for 2 weeks; everything was OK until yesterday.

@triple-Mu
Contributor

> I got the same problem. I have been training for 2 weeks; everything was OK until yesterday.

For now I have rolled back to v6.1. It seems more stable.

@glenn-jocher
Member

@Th1nhNg0 @li221199 @mimi37 @robotwhispering @triple-Mu good news 😃! Your original issue may now be fixed ✅ in PR #9037. This PR resolves a zero-mAP bug.

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks in Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@glenn-jocher
Member

@Th1nhNg0 @triple-Mu @robotwhispering @mimi37 @li221199 @elvecinodelquinto good news 😃! Your original issue may now be fixed ✅ in PR #9068 by @0zppd. @pourmand1376 tracked down the problem using git bisect, running each training 10 times, since the bug was not reproducible on every training but showed up in roughly 1/3 of the trainings.

The original issue was that I had replaced torch.zeros() with torch.empty() in some ops like warmup and profiling to try to get slight speed improvements, and one op in particular ran a torch.empty() tensor through the model while it was in .train() mode, leading the batchnorm layers to add those values to their tracked statistics. Since torch.empty() is not initialized, it can take on extremely high or low values, causing some batchnorm layers to randomly output NaN.
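To illustrate the mechanism, here is a minimal standalone sketch (not YOLOv5 code) of how an uninitialized tensor can pollute BatchNorm running statistics in train mode:

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3).train()  # train mode: running stats update on every forward pass
x = torch.empty(1, 3, 64, 64)   # uninitialized memory, may contain arbitrarily large garbage
bn(x)                           # running_mean/running_var absorb whatever was in that memory
print(bn.running_mean, bn.running_var)  # may show huge values or NaN, depending on the memory contents

Replacing torch.empty() with torch.zeros() keeps the tracked statistics sane; running such throwaway tensors with the model in .eval() mode would likewise leave the statistics untouched.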

The PR has been extensively tested with 10 Colab trainings, and all 10 came back good:
[Screenshot 2022-08-21 at 15:40:00]

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks in Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
