
labels4.append(labels) UnboundLocalError: local variable 'labels' referenced before assignment #548

Closed
Samjith888 opened this issue Oct 11, 2019 · 17 comments · Fixed by #1660
Labels
bug Something isn't working

Comments

@Samjith888

Samjith888 commented Oct 11, 2019

I have replaced the COCO dataset with my own dataset, which has only one class ('person'). While training, I got the following error.

(base) C:\Users\samjith.cp\Desktop\yolov3>python train.py --data coco.data --cfg cfg/yolov3.cfg
Namespace(accumulate=2, adam=False, arc='defaultpw', batch_size=32, bucket='', cache_images=False, cfg='cfg/yolov3.cfg', data='coco.data', device='', epochs=273, evolve=False, img_size=416, img_weights=False, multi_scale=False, name='', nosave=False, notest=False, prebias=False, rect=False, resume=False, transfer=False, var=None, weights='')
Using CPU

WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
Reading labels (357 found, 0 missing, 4 empty for 361 images): 100%|███████████████| 361/361 [00:00<00:00, 6489.34it/s]
Model Summary: 222 layers, 6.19491e+07 parameters, 6.19491e+07 gradients
Starting training for 273 epochs...

 Epoch   gpu_mem      GIoU       obj       cls     total   targets  img_size

Corrupt JPEG data: 2 extraneous bytes before marker 0xd9
0%| | 0/12 [00:00<?, ?it/s]Corrupt JPEG data: 2 extraneous bytes before marker 0xd9
Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
Corrupt JPEG data: 2 extraneous bytes before marker 0xd9
Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
Corrupt JPEG data: 2 extraneous bytes before marker 0xd9
Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
Corrupt JPEG data: 2 extraneous bytes before marker 0xd9
Corrupt JPEG data: 2 extraneous bytes before marker 0xd9
Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
Corrupt JPEG data: 2 extraneous bytes before marker 0xd9
Traceback (most recent call last):
  File "train.py", line 426, in <module>
    train()  # train normally
  File "train.py", line 235, in train
    for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------
  File "C:\Users\samjith.cp\AppData\Local\Continuum\anaconda3\lib\site-packages\tqdm\_tqdm.py", line 1005, in __iter__
    for obj in iterable:
  File "C:\Users\samjith.cp\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __next__
    return self._process_data(data)
  File "C:\Users\samjith.cp\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 846, in _process_data
    data.reraise()
  File "C:\Users\samjith.cp\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\_utils.py", line 369, in reraise
    raise self.exc_type(msg)
UnboundLocalError: Caught UnboundLocalError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "C:\Users\samjith.cp\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\_utils\worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "C:\Users\samjith.cp\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\samjith.cp\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\samjith.cp\Desktop\yolov3\utils\datasets.py", line 416, in __getitem__
    img, labels = load_mosaic(self, index)
  File "C:\Users\samjith.cp\Desktop\yolov3\utils\datasets.py", line 590, in load_mosaic
    labels4.append(labels)
UnboundLocalError: local variable 'labels' referenced before assignment

@Samjith888 Samjith888 added the bug Something isn't working label Oct 11, 2019
@Belinda-great

I have the same bug:

 Epoch   gpu_mem      GIoU       obj       cls     total   targets  img_size
 0/272      3.7G      2.09      4.58         0      6.67       132       416:   0%|▏                                                        | 5/1990 [00:17<2:19:31,  4.22s/it]
 0/272      3.7G      2.06      4.58         0      6.64       120       416:   0%|▏                                                        | 5/1990 [00:17<2:19:31,  4.22s/it]
 0/272      3.7G      2.06      4.58         0      6.64       120       416:   0%|▏                                                        | 6/1990 [00:17<1:41:41,  3.08s/it]
 0/272      3.7G      2.05      4.77         0      6.82       171       416:   0%|▏                                                        | 6/1990 [00:18<1:41:41,  3.08s/it]
 0/272      3.7G      2.05      4.77         0      6.82       171       416:   0%|▏                                                        | 7/1990 [00:18<1:15:18,  2.28s/it]
 0/272      3.7G      2.04       4.7         0      6.73       109       416:   0%|▏                                                        | 7/1990 [00:18<1:15:18,  2.28s/it]
 0/272      3.7G      2.04       4.7         0      6.73       109       416:   0%|▏                                                          | 8/1990 [00:18<56:38,  1.71s/it]
 0/272      3.7G      2.03      4.67         0       6.7       131       416:   0%|▏                                                          | 8/1990 [00:24<56:38,  1.71s/it]
 0/272      3.7G      2.03      4.67         0       6.7       131       416:   0%|▎                                                        | 9/1990 [00:24<1:40:54,  3.06s/it]

Traceback (most recent call last):
  File "train.py", line 433, in <module>
    train()  # train normally
  File "train.py", line 242, in train
    for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------
  File "E:\soft\python3.6\lib\site-packages\tqdm\_tqdm.py", line 1017, in __iter__
    for obj in iterable:
  File "E:\soft\python3.6\lib\site-packages\torch\utils\data\dataloader.py", line 568, in __next__
    return self._process_next_batch(batch)
  File "E:\soft\python3.6\lib\site-packages\torch\utils\data\dataloader.py", line 608, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
UnboundLocalError: Traceback (most recent call last):
  File "E:\soft\python3.6\lib\site-packages\torch\utils\data\_utils\worker.py", line 99, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "E:\soft\python3.6\lib\site-packages\torch\utils\data\_utils\worker.py", line 99, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "Y:\train\yolov3-master\utils\datasets.py", line 416, in __getitem__
    img, labels = load_mosaic(self, index)
  File "Y:\train\yolov3-master\utils\datasets.py", line 590, in load_mosaic
    labels4.append(labels)
UnboundLocalError: local variable 'labels' referenced before assignment

@Belinda-great

I have only one class.

@rms0329

rms0329 commented Oct 12, 2019

In my case, it was because the train dataset contained an empty label file.

@xiaotian3

In my case, it was because the train dataset contained an empty label file.
How did you solve it?

@rms0329

rms0329 commented Oct 13, 2019

I examined all of the label files, found those of size 0, and deleted each empty label file together with its corresponding image. Then I recreated the txt file containing the image paths:

import os
from glob import glob

# adjust these three paths to your own dataset layout
label_paths = glob('path/to/labels/*.txt')  # all label files
img_root = 'path/to/images'                 # directory containing the images
txtfile = 'path/to/train.txt'               # image-path list referenced by your .data file

# delete every empty label file together with its corresponding image
for label_path in label_paths:
    if os.stat(label_path).st_size == 0:
        img_path = label_path.replace('labels', 'images').replace('.txt', '.jpg')
        os.remove(label_path)
        os.remove(img_path)

# create a txt file containing the image paths again
if os.path.exists(txtfile):
    os.remove(txtfile)

with open(txtfile, 'w+') as f:
    for img_name in os.listdir(img_root):
        img_path = os.path.join(img_root, img_name)
        f.write(img_path + '\n')

@willsroberts

@Belinda-great I'm seeing the same error as you (I have 24 classes instead of 1), but I have no empty files and no mismatch between label and image files. Did you have any luck resolving it?

(py3) user$ python3 train.py --data data/coco.data --cfg cfg/yolov3.cfg
Namespace(accumulate=2, adam=False, arc='default', batch_size=32, bucket='', cache_images=False, cfg='cfg/yolov3.cfg', data='data/coco.data', device='', epochs=273, evolve=False, img_size=416, img_weights=False, multi_scale=False,
name='', nosave=False, notest=False, prebias=False, rect=False, resume=False, transfer=False, var=None, weights='')
Using CPU

Reading labels (4824 found, 16 missing, 0 empty for 4840 images): 100%|████████████████| 4840/4840 [00:01<00:00, 3131.97it/s]
Model Summary: 222 layers, 6.16476e+07 parameters, 6.16476e+07 gradients
Starting training for 273 epochs...

 Epoch   gpu_mem      GIoU       obj       cls     total   targets  img_size
 0/272        0G      2.04      1.35      10.3      13.7        77       416:   7%|███████▏       | 10/152 [36:21<8:31:24, 216.09s/it]

Traceback (most recent call last):
  File "train.py", line 432, in <module>
    train()  # train normally
  File "train.py", line 235, in train
    for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------
  File "//anaconda3/envs/py3/lib/python3.7/site-packages/tqdm/std.py", line 1081, in __iter__
    for obj in iterable:
  File "//anaconda3/envs/py3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 801, in __next__
    return self._process_data(data)
  File "//anaconda3/envs/py3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
    data.reraise()
  File "//anaconda3/envs/py3/lib/python3.7/site-packages/torch/_utils.py", line 385, in reraise
    raise self.exc_type(msg)
UnboundLocalError: Caught UnboundLocalError in DataLoader worker process 10.
Original Traceback (most recent call last):
  File "//anaconda3/envs/py3/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "//anaconda3/envs/py3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "//anaconda3/envs/py3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/Users/user.../yolov3/utils/datasets.py", line 416, in __getitem__
    img, labels = load_mosaic(self, index)
  File "/Users/user.../yolov3/utils/datasets.py", line 590, in load_mosaic
    labels4.append(labels)
UnboundLocalError: local variable 'labels' referenced before assignment

@willsroberts


Following up on my comment above: I fixed this error after uncommenting a line in datasets.py (line 330) to discover the problematic files, and then deleting them. I had enough samples that this wasn't a problem, but for some datasets that might not be the case. I also could not deduce from close inspection what distinguished these labels from the others. Ideally, if this type of data error will prevent the model from training, the script should exit before the process starts.
Note: after deleting your images and label files, also update the list of files you point to in your version of coco.data. A rough pre-training check along these lines is sketched below. The uncommented line was:

nm += 1
print('missing labels for image %s' % self.img_files[i])  # file missing
continue
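
For reference, here is a minimal sketch of such a pre-training check (the paths and the images/labels naming convention are assumptions about a typical setup, not code from this repo); it only reports problems instead of deleting anything:

import os

train_list = 'data/train.txt'  # hypothetical path: the image-path list referenced by your .data file

with open(train_list) as f:
    img_paths = [line.strip() for line in f if line.strip()]

for img_path in img_paths:
    # assumes the usual images/ -> labels/ mirroring with one .txt label per image
    label_path = img_path.replace('images', 'labels').rsplit('.', 1)[0] + '.txt'
    if not os.path.isfile(label_path):
        print('missing label:', label_path)
    elif os.stat(label_path).st_size == 0:
        print('empty label:', label_path)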

@mozpp

mozpp commented Nov 7, 2019

@glenn-jocher, I just added a tab (one more level of indentation) to the `labels4.append(labels)` line; maybe that fixes it?

    if os.path.isfile(label_path):
        x = self.labels[index]
        if x is None:  # labels not preloaded
            with open(label_path, 'r') as f:
                x = np.array([x.split() for x in f.read().splitlines()], dtype=np.float32)

        if x.size > 0:
            # Normalized xywh to pixel xyxy format
            labels = x.copy()
            labels[:, 1] = w * (x[:, 1] - x[:, 3] / 2) + padw
            labels[:, 2] = h * (x[:, 2] - x[:, 4] / 2) + padh
            labels[:, 3] = w * (x[:, 1] + x[:, 3] / 2) + padw
            labels[:, 4] = h * (x[:, 2] + x[:, 4] / 2) + padh

            labels4.append(labels) # add a tab to fix issue #548
if len(labels4):
    labels4 = np.concatenate(labels4, 0)

@glenn-jocher
Member

@mozpp @willsroberts @rms0329 @xiaotian3 @Belinda-great @Samjith888 the latest commit should fix this: aae39ca

The error occurred because some images in your custom dataset lacked labels. The fix predefines an empty labels array for every image, which is replaced by the actual labels when they are present. Can you git pull and try again?
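
In other words, roughly (a simplified sketch of that pattern, with a hypothetical load_labels() helper standing in for the real label-loading code; the actual snippet from the repo is quoted a few comments below):

import numpy as np

def load_labels(index):
    # hypothetical helper: returns an (n, 5) array of boxes, or None when no label file exists
    return np.array([[0, 10, 10, 50, 50]], dtype=np.float32) if index % 2 == 0 else None

labels4 = []
for index in range(4):  # the 4 images that make up a mosaic
    x = load_labels(index)
    if x is not None and x.size > 0:
        labels = x.copy()  # in the real code, normalized xywh is converted to pixel xyxy here
    else:
        labels = np.zeros((0, 5), dtype=np.float32)  # empty placeholder instead of leaving 'labels' undefined
    labels4.append(labels)

labels4 = np.concatenate(labels4, 0)  # works even when some images had no labels
print(labels4.shape)  # (2, 5) in this toy example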

@bchugg

bchugg commented Nov 12, 2019

@glenn-jocher I previously had the same error as those above. After pulling (so that I'm up to date as of this post) and trying again, I run into the following error:

File "/oak/stanford/groups/deho/benny/cafo/yolov3/utils/datasets.py", line 593, in load_mosaic
    labels4 = np.concatenate(labels4, 0)
  File "<__array_function__ internals>", line 6, in concatenate
ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 1 dimension(s) and the array at index 3 has 2 dimension(s)

I'm assuming this had something to do with the fix; the code runs fine if I remove the empty label files.
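
For context, a minimal standalone numpy sketch (the shapes here are made up purely for illustration) that reproduces this dimension mismatch and shows why a (0, 5) placeholder concatenates cleanly:

import numpy as np

real_labels = np.ones((3, 5), dtype=np.float32)  # e.g. 3 boxes with 5 values each

bad_placeholder = np.array([], dtype=np.float32)  # 1-D, shape (0,)
# np.concatenate([bad_placeholder, real_labels], 0) raises
# ValueError: all the input arrays must have same number of dimensions

good_placeholder = np.zeros((0, 5), dtype=np.float32)  # 2-D, shape (0, 5)
merged = np.concatenate([good_placeholder, real_labels], 0)  # works
print(merged.shape)  # (3, 5)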

@glenn-jocher
Member

@bchugg thanks for the feedback. This means it's trying to concatenate the np arrays with the prepopulated [] arrays. We want these to be 0xn np arrays instead, so I've updated the code to this now. Can you try again? Thanks!

yolov3/utils/datasets.py

Lines 582 to 592 in 470ef6b

if x.size > 0:
    # Normalized xywh to pixel xyxy format
    labels = x.copy()
    labels[:, 1] = w * (x[:, 1] - x[:, 3] / 2) + padw
    labels[:, 2] = h * (x[:, 2] - x[:, 4] / 2) + padh
    labels[:, 3] = w * (x[:, 1] + x[:, 3] / 2) + padw
    labels[:, 4] = h * (x[:, 2] + x[:, 4] / 2) + padh
else:
    labels = np.zeros((0, 5), dtype=np.float32)
labels4.append(labels)

@bchugg

bchugg commented Nov 13, 2019

@glenn-jocher all good on my end now! Cheers :).

@glenn-jocher
Member

Great! I'll close this issue for now as the original issue appears to have been resolved, and/or no activity has been seen for some time. Feel free to comment if this is not the case.

@henbucuoshanghai

File "yolov5/utils/general.py", line 75, in check_git_status
    print(s)
UnboundLocalError: local variable 's' referenced before assignment

@henbucuoshanghai

Does this mean the train set must not contain any image that has no objects in it?

@glenn-jocher
Member

@henbucuoshanghai thanks for the bug report. This is related to a recent PR ultralytics/yolov5#1916 (unrelated to your dataset). I will take a look.

@glenn-jocher glenn-jocher reopened this Jan 13, 2021
@glenn-jocher glenn-jocher linked a pull request Jan 13, 2021 that will close this issue
@glenn-jocher
Member

glenn-jocher commented Jan 13, 2021

@henbucuoshanghai we've identified the problem and created and merged a bug fix PR #1660 for this. Please git pull to receive this update and let us know if you spot any other issues!

Thank you for your contributions!
