
Broken pipe #1859

Closed
ryan994 opened this issue Jan 7, 2021 · 7 comments
Labels
bug Something isn't working

Comments

ryan994 commented Jan 7, 2021

Hello, when I try to run v4.0 I run into an issue, maybe a bug?

🐛 Bug

BrokenPipeError: [Errno 32] Broken pipe

To Reproduce

I did not change anything; the command I ran is:
python train.py --batch 20 --epochs 300 --data ./data/coco128.yaml --weights ./weights/yolov5s.pt --name test123

Output:

Traceback (most recent call last):
  File "train.py", line 519, in <module>
    train(hyp, opt, device, tb_writer, wandb)
  File "train.py", line 202, in train
    rank=-1, world_size=opt.world_size, workers=opt.workers, pad=0.5)[0]
  File "D:\yolov5_v4.0\utils\datasets.py", line 83, in create_dataloader
    collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn)
  File "D:\yolov5_v4.0\utils\datasets.py", line 96, in __init__
    self.iterator = super().__iter__()
  File "D:\Anaconda3\envs\pytorch17\lib\site-packages\torch\utils\data\dataloader.py", line 352, in __iter__
    return self._get_iterator()
  File "D:\Anaconda3\envs\pytorch17\lib\site-packages\torch\utils\data\dataloader.py", line 294, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "D:\Anaconda3\envs\pytorch17\lib\site-packages\torch\utils\data\dataloader.py", line 801, in __init__
    w.start()
  File "D:\Anaconda3\envs\pytorch17\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda3\envs\pytorch17\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda3\envs\pytorch17\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda3\envs\pytorch17\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\Anaconda3\envs\pytorch17\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

Environment

  • OS: Win10
  • GPU: torch 1.7.1+cu101, CUDA:0 (GeForce GTX 1070, 8192.0MB)
  • Memory: 16 GB

Additional context

I googled this issue and it seems related to the dataloader worker processes. When I change the number of workers to 6 or below, it works successfully. Is this my computer's issue or a bug? Does anyone have an idea? Thanks!!!
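
For reference, this is roughly how I lowered the worker count from the command line (assuming --workers is the train.py option behind opt.workers shown in the traceback above):

python train.py --batch 20 --epochs 300 --data ./data/coco128.yaml --weights ./weights/yolov5s.pt --name test123 --workers 6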

ryan994 added the bug label Jan 7, 2021

github-actions bot commented Jan 7, 2021

👋 Hello @ryan994, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented Jan 7, 2021

@ryan994 yes this may be associated with multiple dataloader workers. I don't think there's any relation to the recent release. If you can reproduce this in a Colab notebook please advise, otherwise your solution should work locally.
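
For context, a minimal sketch (not YOLOv5 code, just the general PyTorch-on-Windows pattern) of why the worker count matters here: Windows spawns each DataLoader worker as a fresh process, so iteration has to happen under the __main__ guard, and a worker failing to start surfaces in the parent as BrokenPipeError.

# Minimal sketch, not YOLOv5 code: the general DataLoader pattern on Windows.
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    ds = TensorDataset(torch.arange(100, dtype=torch.float32))
    # Fewer workers means fewer processes to spawn, which is why lowering
    # the count (as reported above) can avoid the broken pipe.
    dl = DataLoader(ds, batch_size=10, num_workers=2)
    for (batch,) in dl:
        pass  # a training step would go here

if __name__ == '__main__':  # required on Windows, where workers are spawned
    main()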


ryan994 commented Jan 7, 2021

@glenn-jocher I ran v3.0 before and it worked perfectly with 8 workers; however, it does not work in v4.0 in the same environment.

glenn-jocher commented Jan 7, 2021

@ryan994 if you can supply a reproducible example in a common environment (one of the 4 above), we can take a look.


glenn-jocher commented Jan 7, 2021

@ryan994 also just an FYI, we are currently training several models on the v4.0 release with default worker counts, with no known issues other than #1852, which is unrelated.


ryan994 commented Jan 7, 2021

@glenn-jocher ok, I will try to run it on another PC and check whether it is my PC's problem.


glenn-jocher commented Jan 7, 2021

@ryan994 ok! The Docker image is also a good solution for local environment issues:
https://hub.docker.com/r/ultralytics/yolov5
https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/
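
A rough sketch of the usual pull-and-run steps from that quickstart (image tag and GPU flags assumed here, adjust to your setup):

$ docker pull ultralytics/yolov5:latest
$ docker run --ipc=host --gpus all -it ultralytics/yolov5:latest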
