
TypeError: unsupported operand type(s) for /: 'NoneType' and 'float' #243

Closed
andriy-onufriyenko opened this issue Apr 20, 2022 · 14 comments · Fixed by #249

@andriy-onufriyenko

I want to train CFLOW model on custom dataset.

config.yaml

dataset:
  name: Concrete_Crack #options: [mvtec, btech, folder, concrete_crack]
  format: folder # mvtec
  path: ./datasets/Concrete_Crack/ # ./datasets/MVTec
  normal_dir: 'train/Negative'
  abnormal_dir: 'test/Positive'
  normal_test_dir: 'test/Negative'
  task: segmentation
  mask: ./datasets/Concrete_Crack/ground_truth/
  extensions: '.jpg'
  split_ratio: 0.1
  seed: 0
#  category: bottle
  image_size: 227
  train_batch_size: 8 # 16
  test_batch_size: 8 # 16
  inference_batch_size: 8 # 16
  fiber_batch_size: 64
  num_workers: 8
  transform_config:
    train: null
    val: null
  create_validation_set: false

My dataset has the same structure as MVTec:

[screenshot of the dataset directory structure]

To Reproduce
python3 tools/train.py --model_config_path anomalib/models/cflow/config.yaml

When I start the training, I get an error:

File "/home/Projects/Anomalib/anomalib/anomalib/data/folder.py", line 282, in __getitem__
    mask = cv2.imread(mask_path, flags=0) / 255.0
TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'
@julien-blanchon
Contributor

cv2.imread(mask_path, flags=0) may return None when the file cannot be read. Try using only absolute paths in your config file (in particular for the mask path).

@andriy-onufriyenko
Author

> Try using only absolute path in your config file (and in particular for mask path)

Got:

File "/home/Projects/Anomalib/anomalib/anomalib/data/folder.py", line 80, in _prepare_files_labels
    raise RuntimeError(f"Found 0 {path_type} images in {path}")
RuntimeError: Found 0 normal images in /datasets/Concrete_Crack/train/Negative

@djdameln
Contributor

@andriy-onufriyenko Did you specify the image extension correctly? I can replicate your error by passing extensions: '.jpg' in the config and having .png images in the dataset folder.
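A quick way to check this is to count the files per extension under the dataset root and compare against the config's `extensions` value (a sketch; `extension_counts` is a hypothetical helper, not part of anomalib):

```python
from collections import Counter
from pathlib import Path


def extension_counts(data_dir: str) -> Counter:
    """Count files per (lower-cased) extension under data_dir, recursively.

    Useful for spotting a mismatch such as extensions: '.jpg' in the
    config while the folder actually contains .png files.
    """
    return Counter(
        p.suffix.lower() for p in Path(data_dir).rglob("*") if p.is_file()
    )
```

If the extension named in the config shows a count of zero here, the "Found 0 normal images" error above is expected.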

@djdameln djdameln added the Data label Apr 21, 2022
@andriy-onufriyenko
Author

> Did you specify the image extension correctly?

[screenshot of the config and dataset files]

@samet-akcay
Contributor

@andriy-onufriyenko, the folder format expects the mask names to be the same as the image names. You have the MVTec format, which adds a _mask suffix. That's why FolderDataset cannot find the corresponding mask images for the input images.
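If your masks follow the MVTec naming, one option is to strip the suffix once before training. A minimal sketch (`strip_mask_suffix` is a hypothetical helper, not part of anomalib; it renames files in place, so run it on a copy first):

```python
from pathlib import Path


def strip_mask_suffix(mask_dir: str, suffix: str = "_mask") -> int:
    """Rename MVTec-style masks (e.g. 000_mask.png -> 000.png) so their
    names match the image names, as the folder format expects.

    Returns the number of files renamed.
    """
    renamed = 0
    for path in Path(mask_dir).iterdir():
        if path.is_file() and path.stem.endswith(suffix):
            target = path.with_name(path.stem[: -len(suffix)] + path.suffix)
            path.rename(target)
            renamed += 1
    return renamed
```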

@andriy-onufriyenko
Author

@samet-akcay

Renamed the files. Changed the path. Nothing helped.

[screenshot of the renamed mask files]

dataset:
  name: Concrete_Crack #options: [mvtec, btech, folder, concrete_crack]
  format: folder # mvtec
  path: ./datasets/Concrete_Crack/ # ./datasets/MVTec
  normal_dir: 'train/Negative'
  abnormal_dir: 'test/Positive'
  normal_test_dir: 'test/Negative'
  task: segmentation
  mask: /home/andrey/Projects/Anomalib/anomalib/datasets/Concrete_Crack/ground_truth/Positive
  extensions: '.jpg'
  split_ratio: 0.1
  seed: 0
  image_size: 227
  train_batch_size: 8 # 16
  test_batch_size: 8 # 16
  inference_batch_size: 8 # 16
  fiber_batch_size: 64
  num_workers: 8
  transform_config:
    train: null
    val: null
  create_validation_set: false
Epoch 0:  68%|████████████████ | 23/34 [02:42<01:17,  7.07s/it, v_num=13]
[ WARN:0@167.940] global /io/opencv/modules/imgcodecs/src/loadsave.cpp (239) findDecoder imread_(''): can't open/read file: check file path/integrity
[ WARN:0@167.940] global /io/opencv/modules/imgcodecs/src/loadsave.cpp (239) findDecoder imread_(''): can't open/read file: check file path/integrity
[ WARN:0@167.940] global /io/opencv/modules/imgcodecs/src/loadsave.cpp (239) findDecoder imread_(''): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "tools/train.py", line 71, in <module>
    train()
  File "tools/train.py", line 61, in train
    trainer.fit(model=model, datamodule=datamodule)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 768, in fit
    self._call_and_handle_interrupt(
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 721, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 809, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1234, in _run
    results = self._run_stage()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1321, in _run_stage
    return self._run_train()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1351, in _run_train
    self.fit_loop.run()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 269, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 205, in run
    self.on_advance_end()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 255, in on_advance_end
    self._run_validation()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 309, in _run_validation
    self.val_loop.run()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 153, in advance
    dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 111, in advance
    batch = next(data_fetcher)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/utilities/fetching.py", line 184, in __next__
    return self.fetching_function()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/utilities/fetching.py", line 259, in fetching_function
    self._fetch_next_batch(self.dataloader_iter)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/utilities/fetching.py", line 273, in _fetch_next_batch
    batch = next(iterator)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
    data = self._next_data()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1224, in _next_data
    return self._process_data(data)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1250, in _process_data
    data.reraise()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torch/_utils.py", line 457, in reraise
    raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/andrey/Projects/Anomalib/anomalib/anomalib/data/folder.py", line 282, in __getitem__
    mask = cv2.imread(mask_path, flags=0) / 255.0
TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'

@samet-akcay
Contributor

@andriy-onufriyenko, I've created a PR #249 to address your issue. You could use this branch to test it for now.

I've used the following configuration, and successfully trained the model.

dataset:
  name: Concrete_Crack #options: [mvtec, btech, folder, concrete_crack]
  format: folder # mvtec
  path: ./datasets/Concrete_Crack/ # ./datasets/MVTec
  normal_dir: "train/Negative"
  abnormal_dir: "test/Positive"
  normal_test_dir: "test/Negative"
  task: segmentation
  mask: ./datasets/Concrete_Crack/ground_truth/Positive
  extensions: ".jpg"
  split_ratio: 0.1
  seed: 0
  image_size: 227
  train_batch_size: 8 # 16
  test_batch_size: 8 # 16
  inference_batch_size: 8 # 16
  fiber_batch_size: 64
  num_workers: 8
  transform_config:
    train: null
    val: null
  create_validation_set: false
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

@samet-akcay samet-akcay self-assigned this Apr 21, 2022
@haobo827

> (quotes the configuration and traceback from @andriy-onufriyenko's comment above)

I have the same problem as you. Did you solve it?

Regarding:
TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'

@samet-akcay
Contributor

> (quotes PR #249 and the configuration from the comment above)

@haobo827 if you use the above branch until it's merged, it should work.

@andriy-onufriyenko
Author

> You could use this branch to test it for now.
>
> I've used the following configuration, and successfully trained the model.

@samet-akcay

Epoch 0:  69%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉                                                      | 50/72 [03:25<01:30,  4.11s/it]
/home/andrey/Projects/Anomalib/anomalib/anomalib/models/cflow/anomaly_map.py:54: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  test_norm = torch.tensor(distribution[layer_idx], dtype=torch.double)  # pylint: disable=not-callable
Epoch 0: 100%|██████████| 72/72 [03:32<00:00,  2.95s/it]
Traceback (most recent call last):
  File "tools/train.py", line 83, in <module>
    train()
  File "tools/train.py", line 73, in train
    trainer.fit(model=model, datamodule=datamodule)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 768, in fit
    self._call_and_handle_interrupt(
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 721, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 809, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1234, in _run
    results = self._run_stage()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1321, in _run_stage
    return self._run_train()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1351, in _run_train
    self.fit_loop.run()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 269, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 205, in run
    self.on_advance_end()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 255, in on_advance_end
    self._run_validation()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 309, in _run_validation
    self.val_loop.run()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 211, in run
    output = self.on_run_end()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 187, in on_run_end
    self._evaluation_epoch_end(self._outputs)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 309, in _evaluation_epoch_end
    self.trainer._call_lightning_module_hook("validation_epoch_end", output_or_outputs)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1593, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/home/andrey/Projects/Anomalib/anomalib/anomalib/models/components/base/anomaly_module.py", line 132, in validation_epoch_end
    self._compute_adaptive_threshold(outputs)
  File "/home/andrey/Projects/Anomalib/anomalib/anomalib/models/components/base/anomaly_module.py", line 147, in _compute_adaptive_threshold
    self.image_threshold.compute()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torchmetrics/metric.py", line 440, in wrapped_func
    value = compute(*args, **kwargs)
  File "/home/andrey/Projects/Anomalib/anomalib/anomalib/utils/metrics/adaptive_threshold.py", line 38, in compute
    precision, recall, thresholds = self.precision_recall_curve.compute()
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torchmetrics/metric.py", line 440, in wrapped_func
    value = compute(*args, **kwargs)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torchmetrics/classification/precision_recall_curve.py", line 143, in compute
    return _precision_recall_curve_compute(preds, target, self.num_classes, self.pos_label)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torchmetrics/functional/classification/precision_recall_curve.py", line 259, in _precision_recall_curve_compute
    return _precision_recall_curve_compute_single_class(preds, target, pos_label, sample_weights)
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torchmetrics/functional/classification/precision_recall_curve.py", line 139, in _precision_recall_curve_compute_single_class
    fps, tps, thresholds = _binary_clf_curve(
  File "/home/andrey/Projects/Anomalib/venv/lib/python3.8/site-packages/torchmetrics/functional/classification/precision_recall_curve.py", line 39, in _binary_clf_curve
    target = target[desc_score_indices]
IndexError: index 165 is out of bounds for dimension 0 with size 85
Epoch 0: 100%|██████████| 72/72 [03:33<00:00,  2.97s/it]

@samet-akcay
Contributor

@andriy-onufriyenko, unfortunately, I'm unable to reproduce this issue. Could you perhaps try the mvtec format?

Anyone else having this issue? Any tips to reproduce?

@samet-akcay
Contributor

@andriy-onufriyenko, I'm closing this issue since [TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'](https://github.com/openvinotoolkit/anomalib/issues/243#) has been fixed.

The issue you're having now is due to setting test_batch_size: 8. For some reason, a test batch size greater than 1 is not supported. If you set test_batch_size: 1, it will work. You could refer to #268.

@ifarady

ifarady commented Apr 25, 2022

> @andriy-onufriyenko, I'm closing this issue since [TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'](https://github.com/openvinotoolkit/anomalib/issues/243#) has been fixed.
>
> The issue you're having now is due to setting test_batch_size: 8. For some reason, a test batch size greater than 1 is not supported. If you set test_batch_size: 1, it will work. You could refer to #268.

Hi, I'm still hitting the same error after changing test_batch_size: 8 to test_batch_size: 1. Did I miss something here?

dataset:
  name: abn
  format: folder
  path: ./datasets/abn/cable
  normal_dir: 'train/good' # name of the folder containing normal images.
  abnormal_dir: 'test/bent_wire' # name of the folder containing abnormal images.
  normal_test_dir: 'test/good'
  task: segmentation # classification or segmentation
  mask: ./datasets/abn/cable/ground_truth/ #optional
  extensions: '.png'
  split_ratio: 0.1 # ratio of the normal images that will be used to create a test split
  seed: 0
  image_size: 256
  train_batch_size: 8
  test_batch_size: 1
  num_workers: 8
  transform_config:
    train: null
    val: null
  create_validation_set: false
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

Error:

File "/home/lin70935/anaconda3/envs/anomalib_env/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
File "/media/lin70935/Data/pypro/anomalib/anomalib/data/folder.py", line 283, in __getitem__
    mask = cv2.imread(mask_path, flags=0) / 255.0
TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'

Epoch 0: 28%|██▊ | 28/99 [00:04<00:10, 6.62it/s, loss=nan]

@samet-akcay
Contributor

@ifarady,

File "/home/lin70935/anaconda3/envs/anomalib_env/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
File "/media/lin70935/Data/pypro/anomalib/anomalib/data/folder.py", line 283, in __getitem__
    mask = cv2.imread(mask_path, flags=0) / 255.0
TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'

This error occurs when the masks cannot be read properly. Make sure you provided the right paths for the mask.
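A quick pre-flight check before training can catch this early: with the folder format, every abnormal image should have a same-named file in the mask directory. A sketch (`find_missing_masks` is a hypothetical helper, not part of anomalib):

```python
from pathlib import Path
from typing import List


def find_missing_masks(image_dir: str, mask_dir: str) -> List[str]:
    """Return the names of images in image_dir that have no same-named
    file in mask_dir. Any hit here would make cv2.imread return None
    for that sample during training."""
    mask_names = {p.name for p in Path(mask_dir).iterdir() if p.is_file()}
    return sorted(
        p.name
        for p in Path(image_dir).iterdir()
        if p.is_file() and p.name not in mask_names
    )
```

An empty result means every abnormal image has a matching mask; a non-empty result names exactly the samples that would crash the DataLoader.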
