RuntimeError: DataLoader worker #1908

Closed
dcboy opened this issue Jan 12, 2021 · 7 comments
Labels: bug, Stale

Comments

dcboy commented Jan 12, 2021


🐛 Bug

Traceback (most recent call last):
  File "train.py", line 519, in <module>
    train(hyp, opt, device, tb_writer, wandb)
  File "train.py", line 292, in train
    pred = model(imgs)  # forward
  File "/home/dcboy/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/dcboy/work/project/yolov5/models/yolo.py", line 119, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/dcboy/work/project/yolov5/models/yolo.py", line 135, in forward_once
    x = m(x)  # run
  File "/home/dcboy/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/dcboy/work/project/yolov5/models/common.py", line 86, in forward
    return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
  File "/home/dcboy/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/dcboy/.local/lib/python3.6/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/home/dcboy/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/dcboy/work/project/yolov5/models/common.py", line 52, in forward
    return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
  File "/home/dcboy/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/dcboy/work/project/yolov5/models/common.py", line 36, in forward
    return self.act(self.bn(self.conv(x)))
  File "/home/dcboy/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/dcboy/.local/lib/python3.6/site-packages/torch/nn/modules/activation.py", line 394, in forward
    return F.silu(input, inplace=self.inplace)
  File "/home/dcboy/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1741, in silu
    return torch._C._nn.silu(input)
  File "/home/dcboy/.local/lib/python3.6/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 10497) is killed by signal: Killed.

To Reproduce (REQUIRED)

Input:

python train.py --name 6970399920439 --img 640 --batch 16 --epochs 300 --data ../6970399920439/dataset.yaml --cfg ../6970399920439/model.yaml --weight yolov5l.pt

Output:

(Same traceback as shown under 🐛 Bug above, ending in: RuntimeError: DataLoader worker (pid 10497) is killed by signal: Killed.)

Expected behavior

Training runs to completion without DataLoader workers being killed.

Environment

  • OS: Linux (L4T, 4.9.201-tegra)
  • GPU: Jetson TX2


dcboy added the bug label Jan 12, 2021
github-actions bot (Contributor) commented Jan 12, 2021

👋 Hello @dcboy, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

  • Google Colab and Kaggle notebooks with free GPU
  • Google Cloud Deep Learning VM (see GCP Quickstart Guide)
  • Amazon Deep Learning AMI (see AWS Quickstart Guide)
  • Docker Image (see Docker Quickstart Guide)

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher (Member) commented
@dcboy this can occur if your system resources are overwhelmed. You should simply restart your machine and retry, and possibly reduce --workers when training.
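
For example, a lighter-weight run of the same training command might look like this (a sketch, not a guaranteed fix; the --batch and --workers values are illustrative, and the paths are taken from the reproduce command above):

$ python train.py --name 6970399920439 --img 640 --batch 8 --workers 2 --epochs 300 --data ../6970399920439/dataset.yaml --cfg ../6970399920439/model.yaml --weights yolov5l.pt

Fewer workers means fewer dataloader copies resident in RAM, and a smaller --batch reduces peak memory per step.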

dcboy (Author) commented Feb 2, 2021

> @dcboy this can occur if your system resources are overwhelmed. You should simply restart your machine and retry, and possibly reduce --workers when training.

I have set --workers to 1, but the issue still occurs.

dcboy (Author) commented Feb 3, 2021

Analyzing anchors... anchors/target = 5.68, Best Possible Recall (BPR) = 1.0000
Image sizes 640 train, 640 test
Using 6 dataloader workers
Logging results to runs/train/69211685092565
Starting training for 300 epochs...

     Epoch   gpu_mem       box       obj       cls     total   targets  img_size

  0%| | 0/73 [00:35<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 519, in <module>
    train(hyp, opt, device, tb_writer, wandb)
  File "train.py", line 292, in train
    pred = model(imgs)  # forward
  File "/home/my/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/my/Work/project/yolov5/models/yolo.py", line 119, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/my/Work/project/yolov5/models/yolo.py", line 135, in forward_once
    x = m(x)  # run
  File "/home/my/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/my/Work/project/yolov5/models/common.py", line 86, in forward
    return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
  File "/home/my/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/my/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/home/my/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/my/Work/project/yolov5/models/common.py", line 52, in forward
    return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
  File "/home/my/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/my/Work/project/yolov5/models/common.py", line 36, in forward
    return self.act(self.bn(self.conv(x)))
  File "/home/my/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/my/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 423, in forward
    return self._conv_forward(input, self.weight)
  File "/home/my/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 419, in _conv_forward
    return F.conv2d(input, weight, self.bias, self.stride,
  File "/home/my/.local/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 30853) is killed by signal: Killed.

Environment:
  • PyTorch 1.7.1
  • Linux desktop 4.9.201-tegra #1 SMP PREEMPT Fri Jan 15 14:54:23 PST 2021 aarch64 aarch64 aarch64 GNU/Linux
  • CUDA 10.2
  • Jetson TX2
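
On a Jetson TX2 the CPU and GPU share the same physical RAM, so host-side memory pressure is a likely culprit. A rough way to watch memory while training (a sketch; tegrastats ships with JetPack/L4T, and the --interval flag is assumed from its usual usage):

$ sudo tegrastats --interval 1000    # print RAM/swap/GPU usage once per second while train.py runs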

glenn-jocher (Member) commented
@dcboy killed workers can be a symptom of hardware strain (e.g. out of memory, too many threads, etc.)
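
One way to confirm an out-of-memory kill after the fact (a sketch; the exact kernel log wording varies by distribution and kernel version):

$ dmesg | grep -iE 'out of memory|killed process'    # OOM-killer entries, if any
$ free -h                                            # current RAM and swap headroom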

github-actions bot (Contributor) commented Mar 6, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

youngfreeFJS commented

> @dcboy this can occur if your system resources are overwhelmed. You should simply restart your machine and retry, and possibly reduce --workers when training.

> i have set workers as 1 , but also have issues

Setting --workers 0 works for me (MacBook Pro M1).
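
For example (illustrative; reusing the paths from the original reproduce command):

$ python train.py --img 640 --batch 16 --epochs 300 --data ../6970399920439/dataset.yaml --cfg ../6970399920439/model.yaml --weights yolov5l.pt --workers 0

With --workers 0 the data is loaded in the main process, so there is no worker subprocess left to be killed, at some cost in throughput.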
