AssertionError: Image Not Found #2130

Closed · mitunkantipaul opened this issue Feb 4, 2021 · 6 comments

@mitunkantipaul

Hi,
I am trying to train on Colab using the wheat dataset.
All images are in JPG format, but I am getting the following error:

github: up to date with https://github.com/ultralytics/yolov5
YOLOv5 v4.0-63-g73a0669 torch 1.7.0+cu101 CUDA:0 (Tesla T4, 15079.75MB)

Namespace(adam=False, batch_size=8, bucket='', cache_images=False, cfg='models/yolov5s.yaml', data='wheat.yaml', device='', epochs=3, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[1024, 1024], local_rank=-1, log_artifacts=False, log_imgs=16, multi_scale=False, name='wd', noautoanchor=False, nosave=False, notest=False, project='runs/train', quad=False, rect=False, resume=False, save_dir='runs/train/wd4', single_cls=False, sync_bn=False, total_batch_size=8, weights='yolov5s.pt', workers=8, world_size=1)
wandb: Install Weights & Biases for YOLOv5 logging with 'pip install wandb' (recommended)
Start Tensorboard with "tensorboard --logdir runs/train", view at http://localhost:6006/
2021-02-04 17:39:45.923389: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
hyperparameters: lr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0

             from  n    params  module                                  arguments                     

0 -1 1 3520 models.common.Focus [3, 32, 3]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 1 156928 models.common.C3 [128, 128, 3]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 1 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 656896 models.common.SPP [512, 512, [5, 9, 13]]
9 -1 1 1182720 models.common.C3 [512, 512, 1, False]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 90880 models.common.C3 [256, 128, 1, False]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 16182 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 283 layers, 7063542 parameters, 7063542 gradients, 16.4 GFLOPS

Transferred 354/362 items from yolov5s.pt
Scaled weight_decay = 0.0005
Optimizer groups: 62 .bias, 62 conv.weight, 59 other
train: Scanning 'wheat_data/labels/train.cache' for images and labels... 3035 found, 0 missing, 0 empty, 0 corrupted: 100% 3035/3035 [00:00<00:00, 30972536.84it/s]
val: Scanning 'wheat_data/labels/validation.cache' for images and labels... 338 found, 0 missing, 0 empty, 0 corrupted: 100% 338/338 [00:00<00:00, 2522552.94it/s]
Plotting labels...

autoanchor: Analyzing anchors... anchors/target = 5.72, Best Possible Recall (BPR) = 0.9992
Image sizes 1024 train, 1024 test
Using 2 dataloader workers
Logging results to runs/train/wd4
Starting training for 3 epochs...

 Epoch   gpu_mem       box       obj       cls     total   targets  img_size

0% 0/380 [00:00<?, ?it/s]Traceback (most recent call last):
  File "train.py", line 522, in <module>
    train(hyp, opt, device, tb_writer, wandb)
  File "train.py", line 264, in train
    for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------
  File "/usr/local/lib/python3.6/dist-packages/tqdm/std.py", line 1104, in __iter__
    for obj in iterable:
  File "/content/yolov5/utils/datasets.py", line 103, in __iter__
    yield next(self.iterator)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 428, in reraise
    raise self.exc_type(msg)
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/yolov5/utils/datasets.py", line 504, in __getitem__
    img, labels = load_mosaic(self, index)
  File "/content/yolov5/utils/datasets.py", line 659, in load_mosaic
    img, _, (h, w) = load_image(self, index)
  File "/content/yolov5/utils/datasets.py", line 614, in load_image
    assert img is not None, 'Image Not Found ' + path
AssertionError: Image Not Found wheat_data\images\train\00333207f.jpg

0% 0/380 [00:00<?, ?it/s]
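
A quick way to narrow this down before training is to check that every label file has a matching image on disk. The sketch below is not from this thread; the wheat_data/images/train and wheat_data/labels/train layout is an assumption based on the log above, so adjust the paths to your own dataset.

# Sketch: list label files whose companion .jpg image is missing.
# Directory names are assumptions taken from the training log.
from pathlib import Path

img_dir = Path("wheat_data/images/train")
label_dir = Path("wheat_data/labels/train")

missing = [img_dir / (lbl.stem + ".jpg")
           for lbl in label_dir.glob("*.txt")
           if not (img_dir / (lbl.stem + ".jpg")).is_file()]

print(f"{len(missing)} referenced images not found")
for p in missing[:10]:
    print(p)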

@github-actions
Contributor

github-actions bot commented Feb 4, 2021

👋 Hello @mitunkantipaul, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

@mitunkantipaul network connection problems may cause some remote images not to be found. You should always train from a local dataset, never from 'mounted' Google Drives or other remote sources.
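
If the dataset currently sits on a mounted Google Drive, one way to follow this advice in Colab is to copy it onto the VM's local disk first and point the data yaml at the local copy. This is only a sketch; the /content/drive mount point and the wheat_data folder name are assumptions, not taken from this thread.

# Sketch: copy a Drive-hosted dataset onto the Colab VM's local disk before training.
import shutil
from pathlib import Path

src = Path("/content/drive/MyDrive/wheat_data")  # assumed location on the mounted Drive
dst = Path("/content/wheat_data")                # fast local disk inside the Colab VM

if not dst.exists():
    shutil.copytree(src, dst)

wheat.yaml should then reference /content/wheat_data/... rather than the Drive path.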

@GioFic95

Hi, I have the same problem on a Google Cloud VM with local storage.
I noticed that the path of the missing image contains backslashes instead of forward slashes, so I implemented this workaround:
after each occurrence of path = self.img_files[index] or path = self.files[self.count] in datasets.py, I added the following piece of code:

if "\\" in path:
    path = path.replace("\\", "/")
    path = path.replace("\t", "/t")

I know this isn't a solution, but since I have no idea why the problem arises, I just made it work.
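
For comparison, roughly the same normalization can be done with pathlib, which avoids the manual "\t" special case. This is only a sketch of the idea, not a tested patch to datasets.py:

# Sketch: convert a Windows-style path string to POSIX separators.
from pathlib import PureWindowsPath

def to_posix(path: str) -> str:
    # Only rewrite strings that actually contain backslashes.
    return PureWindowsPath(path).as_posix() if "\\" in path else path

print(to_posix(r"wheat_data\images\train\00333207f.jpg"))
# wheat_data/images/train/00333207f.jpg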

@glenn-jocher
Member

@GioFic95 that's odd. We use pathlib as much as possible in the dataloader to avoid path issues across different OSes. Backslashes are used on Windows and forward slashes on Ubuntu and macOS. If this issue is reproducible, please provide a link to a Colab notebook we can run to try to isolate the issue. Thanks!
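
For what it's worth, the failure mode is easy to reproduce in isolation: on Linux, pathlib's Path treats a backslash as an ordinary character, so a Windows-style string becomes a single (nonexistent) filename, cv2.imread returns None, and the assertion in load_image fires. A small illustration (not code from the repo):

from pathlib import Path, PureWindowsPath

p = Path(r"wheat_data\images\train\00333207f.jpg")
print(p.parts)                          # on Linux: a single path component
print(PureWindowsPath(str(p)).parts)    # ('wheat_data', 'images', 'train', '00333207f.jpg')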

@GioFic95

@glenn-jocher I didn't have this issue in a Colab notebook, but on a Debian Google Cloud VM instance with the configuration described in this file, by running this command:

python train.py --img 600 --batch 64 --epochs 300 --data shape_ds.yaml --weights weights/yolov5s.pt --wandb shape_ds

both using Chrome Remote Desktop and the browser-based SSH console.

At this link on GitHub you can find the dataset configuration file I used and the datasets.py file with my workaround applied; I uploaded my dataset to Google Drive.

@github-actions
Contributor

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
