Improving the result for custom dataset #285

Closed
ZeynepRuveyda opened this issue Apr 26, 2022 · 18 comments

@ZeynepRuveyda

Hi,
I am able to run the code, but when I work on my custom dataset it does not manage to detect the defects. I tried changing the threshold values, the number of epochs, etc. Do you have any advice for getting good results on my custom dataset?
Thanks,

@ZeynepRuveyda
Author

My pixel_AUROC and pixel_F1Score values are always 0. I don't understand where the problem is.

@samet-akcay
Contributor

Hi @ZeynepRuveyda, can you share your config.yaml file here so we can reproduce the issue?

@ZeynepRuveyda
Author

ZeynepRuveyda commented Apr 26, 2022

dataset:
  name: mvtec #options: [mvtec, btech, folder]
  format: mvtec
  path: ./datasets/MVTec
  category: BACK
  task: segmentation
  image_size: 256
  train_batch_size: 64
  test_batch_size: 32
  num_workers: 36
  transform_config:
    train: null
    val: null
  create_validation_set: false
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: padim
  backbone: wide_resnet50_2
  layers:
    - layer1
    - layer2
    - layer3
  normalization_method: min_max # options: [none, min_max, cdf]
  threshold:
    image_default: 3
    pixel_default: 5
    adaptive: true

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC

project:
  seed: 42
  path: ./results
  log_images_to: ["local"]
  logger: false # options: [tensorboard, wandb, csv] or combinations.

optimization:
  openvino:
    apply: false

trainer:
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  accumulate_grad_batches: 1
  amp_backend: native
  auto_lr_find: false
  auto_scale_batch_size: false
  auto_select_gpus: false
  benchmark: false
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  default_root_dir: null
  detect_anomaly: false
  deterministic: false
  enable_checkpointing: true
  enable_model_summary: true
  enable_progress_bar: true
  fast_dev_run: false
  gpus: null # Set automatically
  gradient_clip_val: 0
  ipus: null
  limit_predict_batches: 1.0
  limit_test_batches: 1.0
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  log_every_n_steps: 50
  max_epochs: 10
  max_steps: -1
  max_time: null
  min_epochs: null
  min_steps: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
  num_nodes: 1
  num_processes: 1
  num_sanity_val_steps: 0
  overfit_batches: 0.0
  plugins: null
  precision: 32
  profiler: null
  reload_dataloaders_every_n_epochs: 0
  replace_sampler_ddp: true
  sync_batchnorm: false
  tpu_cores: null
  track_grad_norm: -1
  val_check_interval: 1.0 # Don't validate before extracting features.
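
As a quick sanity check on this config, a minimal sketch (using OmegaConf, which anomalib-style configs build on; the config path below is the one used in the training command later in this thread) can confirm that the dataset paths actually resolve:

# Hypothetical helper, not part of anomalib: load the config and check that the
# category folders it points to exist on disk.
from pathlib import Path
from omegaconf import OmegaConf

config = OmegaConf.load("anomalib/models/padim/config.yaml")
category_root = Path(config.dataset.path) / config.dataset.category  # ./datasets/MVTec/BACK
for subdir in ("train", "test", "ground_truth"):
    print(subdir, "exists:", (category_root / subdir).is_dir())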


@ZeynepRuveyda
Author

Hi @samet-akcay, here is my config file.

My custom images look like the metal_nut images in the MVTec dataset.

@alexriedel1
Contributor

Can you show your folder structure for the training and testing data? It sounds like your ground truth pixel masks aren't in place if you're getting 0 scores only for the pixel metrics. What are your image-wise metric results?

@samet-akcay
Contributor

Can you show your folder structure for the training and testing data? It sounds like your ground truth pixel masks aren't in place if you're getting 0 scores only for the pixel metrics. What are your image-wise metric results?

+1 for this. @ZeynepRuveyda, did you also try other models? If so, did you get 0 for those as well?

@ZeynepRuveyda
Author

Hi @alexriedel1,

image-wise metric results

───────────────────────────────────
image_AUROC 0.9785714149475098
image_F1Score 0.9523809552192688
pixel_AUROC 0.0
pixel_F1Score 0.0

.../zeynep/anomalib/datasets/MVTec/BACK/test/bad/000065.png
.../zeynep/anomalib/datasets/MVTec/BACK/train/good/000088.png
.../zeynep/anomalib/datasets/MVTec/BACK/ground_truth/...

My ground_truth folder is in the same directory as the train and test folders.

@ZeynepRuveyda
Author

Can you show your folder structure for the training and testing data? It sounds like your ground truth pixel masks aren't in place if you're getting 0 scores only for the pixel metrics. What are your image-wise metric results?

+1 for this. @ZeynepRuveyda, did you also try other models? If so, did you get 0 for those as well?

Yes, I tried the patchcore model too, but I got the same result.

@alexriedel1
Contributor

alexriedel1 commented Apr 27, 2022

Hi @alexriedel1,

image-wise metric results

───────────────────────────────────
image_AUROC 0.9785714149475098
image_F1Score 0.9523809552192688
pixel_AUROC 0.0
pixel_F1Score 0.0

.../zeynep/anomalib/datasets/MVTec/BACK/test/bad/000065.png
.../zeynep/anomalib/datasets/MVTec/BACK/train/good/000088.png
.../zeynep/anomalib/datasets/MVTec/BACK/ground_truth/...

My ground_truth folder is in the same directory as the train and test folders.

Unfortunately I cannot say if the ground truth masks are named according to your anomaly samples.
If your bad samples are in MVTec/BACK/test/bad, your masks need to be in MVTec/BACK/ground_truth/bad. The masks have to have the same names as the bad samples.

@samet-akcay maybe the dataloaders need a warning (or info) if no ground truth masks are found.
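
A minimal sketch of such a check (a hypothetical helper, not anomalib code; it assumes the MVTec-style naming where each mask is called `<image_stem>_mask.png`, as in the path shared later in this thread):

# Verify that every anomalous test image has a ground-truth mask where an
# MVTec-style loader would look for it.
from pathlib import Path

root = Path("datasets/MVTec/BACK")  # assumed category root, adjust to your layout
for image_path in sorted((root / "test" / "bad").glob("*.png")):
    mask_path = root / "ground_truth" / "bad" / f"{image_path.stem}_mask.png"
    if not mask_path.exists():
        print(f"Missing mask for {image_path.name}: expected {mask_path}")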

@ZeynepRuveyda
Author

Hi @alexriedel1,

image-wise metric results

───────────────────────────────────
image_AUROC 0.9785714149475098
image_F1Score 0.9523809552192688
pixel_AUROC 0.0
pixel_F1Score 0.0

.../zeynep/anomalib/datasets/MVTec/BACK/test/bad/000065.png
.../zeynep/anomalib/datasets/MVTec/BACK/train/good/000088.png
.../zeynep/anomalib/datasets/MVTec/BACK/ground_truth/...

My ground_truth folder is in the same directory as the train and test folders.

Unfortunately I cannot say if the ground truth masks are named according to your anomaly samples. If your bad samples are in MVTec/BACK/test/bad, your masks need to be in MVTec/BACK/ground_truth/bad. The masks have to have the same names as the bad samples.

@samet-akcay maybe the dataloaders need a warning (or info) if no ground truth masks are found.

I think I just left the full file path out of my directory listing above. I have exactly the ground_truth directory structure you described:
.../zeynep/anomalib/datasets/MVTec/BACK/ground_truth/bad/000065_mask.png

@alexriedel1
Contributor

alexriedel1 commented Apr 27, 2022

Can you post an image of the test inference from ./results/MVTec? And please try setting test_batch_size from 32 to 1 (#268).

@samet-akcay
Contributor

@ZeynepRuveyda, can you share the full training logs printed on the terminal? If there is any peculiarity in the masks, there could be some warnings there.

In addition, I'd also look at the visualized outputs, specifically the anomaly heatmap. If it is produced correctly, the model should be able to perform the segmentation successfully.

@samet-akcay added the Metrics (Metric Component) label on Apr 28, 2022

@ZeynepRuveyda
Author

ZeynepRuveyda commented Apr 28, 2022

@ZeynepRuveyda, can you share the full training logs printed on the terminal? If there is any peculiarity in the masks, there could be some warnings there.
In addition, I'd also look at the visualized outputs, specifically the anomaly heatmap. If it is produced correctly, the model should be able to perform the segmentation successfully.

![000065(1)](https://user-images.githubusercontent.com/72027409/165713039-7495a580-c5a8-40cd-afeb-aa6dcaea85e9.png)
![000065](https://user-images.githubusercontent.com/72027409/165713187-356d8198-70e3-455b-b6d5-33445a280069.png)
(env) marcantoine@pipeline-training:~/zeynep/anomalib$ python tools/train.py --model_config_path anomalib/models/padim/config.yaml
tools/train.py:64: DeprecationWarning: --model_config_path will be deprecated in v0.2.8 and removed in v0.2.9. Use --config instead.
  args = get_args()
2022-04-27 14:54:21,799 - pytorch_lightning.utilities.seed - INFO - Global seed set to 42
2022-04-27 14:54:21,800 - anomalib - INFO - Loading the datamodule
2022-04-27 14:54:21,801 - anomalib - INFO - Loading the model.
2022-04-27 14:54:21,806 - torch.distributed.nn.jit.instantiator - INFO - Created a temporary directory at /tmp/tmp4ybhprn6
2022-04-27 14:54:21,806 - torch.distributed.nn.jit.instantiator - INFO - Writing /tmp/tmp4ybhprn6/_remote_module_non_sriptable.py
/home/marcantoine/zeynep/env/lib/python3.7/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric `PrecisionRecallCurve` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
  warnings.warn(*args, **kwargs)
/home/marcantoine/zeynep/env/lib/python3.7/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric `ROC` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
  warnings.warn(*args, **kwargs)
2022-04-27 14:54:21,829 - anomalib.models.padim.lightning_model - INFO - Initializing Padim Lightning model.
2022-04-27 14:54:22,124 - anomalib - INFO - Loading the experiment logger(s)
2022-04-27 14:54:22,124 - anomalib - INFO - Loading the callbacks
2022-04-27 14:54:22,139 - pytorch_lightning.utilities.rank_zero - INFO - GPU available: True, used: True
2022-04-27 14:54:22,139 - pytorch_lightning.utilities.rank_zero - INFO - TPU available: False, using: 0 TPU cores
2022-04-27 14:54:22,139 - pytorch_lightning.utilities.rank_zero - INFO - IPU available: False, using: 0 IPUs
2022-04-27 14:54:22,139 - pytorch_lightning.utilities.rank_zero - INFO - HPU available: False, using: 0 HPUs
2022-04-27 14:54:22,139 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_train_batches=1.0)` was configured so 100% of the batches per epoch will be used..
2022-04-27 14:54:22,139 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_val_batches=1.0)` was configured so 100% of the batches will be used..
2022-04-27 14:54:22,139 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_test_batches=1.0)` was configured so 100% of the batches will be used..
2022-04-27 14:54:22,139 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_predict_batches=1.0)` was configured so 100% of the batches will be used..
2022-04-27 14:54:22,139 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(val_check_interval=1.0)` was configured so validation will run at the end of the training epoch..
2022-04-27 14:54:22,139 - anomalib - INFO - Training the model.
2022-04-27 14:54:22,144 - anomalib.data.mvtec - INFO - Found the dataset.
2022-04-27 14:54:22,145 - anomalib.data.mvtec - INFO - Setting up train, validation, test and prediction datasets.
/home/marcantoine/zeynep/env/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:608: UserWarning: Checkpoint directory /home/marcantoine/zeynep/anomalib/results/padim/mvtec/BACK/weights exists and is not empty.
  rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.")
2022-04-27 14:54:25,552 - pytorch_lightning.accelerators.gpu - INFO - LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
/home/marcantoine/zeynep/env/lib/python3.7/site-packages/pytorch_lightning/core/optimizer.py:184: UserWarning: `LightningModule.configure_optimizers` returned `None`, this fit will run with no optimizer
  "`LightningModule.configure_optimizers` returned `None`, this fit will run with no optimizer",
2022-04-27 14:54:25,555 - pytorch_lightning.callbacks.model_summary - INFO - 
  | Name                  | Type                     | Params
-------------------------------------------------------------------
0 | image_threshold       | AdaptiveThreshold        | 0     
1 | pixel_threshold       | AdaptiveThreshold        | 0     
2 | training_distribution | AnomalyScoreDistribution | 0     
3 | min_max               | MinMax                   | 0     
4 | image_metrics         | AnomalibMetricCollection | 0     
5 | pixel_metrics         | AnomalibMetricCollection | 0     
6 | model                 | PadimModel               | 11.7 M
-------------------------------------------------------------------
11.7 M    Trainable params
0         Non-trainable params
11.7 M    Total params
46.758    Total estimated model params size (MB)
/home/marcantoine/zeynep/env/lib/python3.7/site-packages/torch/utils/data/dataloader.py:490: UserWarning: This DataLoader will create 36 worker processes in total. Our suggested max number of worker in current system is 8, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  cpuset_checked))
Epoch 0:   0%|                                           | 0/29 [00:00<?, ?it/s]/home/marcantoine/zeynep/env/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py:137: UserWarning: `training_step` returned `None`. If this was on purpose, ignore this warning...
  self.warning_cache.warn("`training_step` returned `None`. If this was on purpose, ignore this warning...")
Epoch 0:   7%|█▋              2022-04-27 14:54:31,469 - anomalib.models.padim.lightning_model - INFO - Aggregating the embedding extracted from the training set.
2022-04-27 14:54:31,550 - anomalib.models.padim.lightning_model - INFO - Fitting a Gaussian to the embedding collected from the training set.
Epoch 0: 100%|████████████████████████| 29/29 [00:32<00:00,  1.13s/it, loss=nan]/home/marcantoine/zeynep/env/lib/python3.7/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: No positive samples in targets, true positive value should be meaningless. Returning zero tensor in true positive score
  warnings.warn(*args, **kwargs)
Epoch 1:   7%|█▋              2022-04-27 14:55:08,411 - anomalib.models.padim.lightning_model - INFO - Aggregating the embedding extracted from the training set.
2022-04-27 14:55:08,570 - anomalib.models.padim.lightning_model - INFO - Fitting a Gaussian to the embedding collected from the training set.
Epoch 2:   7%|█▋              2022-04-27 14:55:49,556 - anomalib.models.padim.lightning_model - INFO - Aggregating the embedding extracted from the training set.
2022-04-27 14:55:49,794 - anomalib.models.padim.lightning_model - INFO - Fitting a Gaussian to the embedding collected from the training set.
Epoch 3:   7%|█▋              2022-04-27 14:56:44,863 - anomalib.models.padim.lightning_model - INFO - Aggregating the embedding extracted from the training set.
2022-04-27 14:56:45,216 - anomalib.models.padim.lightning_model - INFO - Fitting a Gaussian to the embedding collected from the training set.
Epoch 4:   7%|█▋              2022-04-27 14:57:54,489 - anomalib.models.padim.lightning_model - INFO - Aggregating the embedding extracted from the training set.
2022-04-27 14:57:55,263 - anomalib.models.padim.lightning_model - INFO - Fitting a Gaussian to the embedding collected from the training set.
Epoch 4: 100%|████████████████████████| 29/29 [05:07<00:00, 10.59s/it, loss=nan]
2022-04-27 14:59:33,527 - anomalib.utils.callbacks.timer - INFO - Training took 307.97 seconds
2022-04-27 14:59:33,528 - anomalib - INFO - Loading the best model weights.
2022-04-27 14:59:33,528 - anomalib - INFO - Testing the model.
2022-04-27 14:59:33,533 - anomalib.data.mvtec - INFO - Found the dataset.
2022-04-27 14:59:33,533 - anomalib.data.mvtec - INFO - Setting up train, validation, test and prediction datasets.
2022-04-27 14:59:33,924 - pytorch_lightning.accelerators.gpu - INFO - LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
2022-04-27 14:59:33,927 - anomalib.utils.callbacks.model_loader - INFO - Loading the model from /home/marcantoine/zeynep/anomalib/results/padim/mvtec/BACK/weights/model-v11.ckpt
Testing DataLoader 0:   0%|                              | 0/27 [00:02<?, ?it/s]/home/marcantoine/zeynep/anomalib/anomalib/utils/callbacks/visualizer_callback.py:88: UserWarning: local not in the list of supported image loggers.
  warn(f"{log_to} not in the list of supported image loggers.")
Testing DataLoader 0: 100%|█████████████████████| 27/27 [00:20<00:00,  1.31it/s]2022-04-27 14:59:55,723 - anomalib.utils.callbacks.timer - INFO - Testing took 20.82998275756836 seconds
Throughput (batch_size=1) : 1.2962084661443047 FPS
Testing DataLoader 0: 100%|█████████████████████| 27/27 [00:20<00:00,  1.30it/s]
────────────────────────────────────────────────────────────────────────────────
       Test metric             DataLoader 0
────────────────────────────────────────────────────────────────────────────────
       image_AUROC          0.7857142686843872
      image_F1Score         0.8510637879371643
       pixel_AUROC                  0.0
      pixel_F1Score                 0.0
────────────────────────────────────────────────────────────────────────────────
(env) marcantoine@pipeline-training:~/zeynep/anomalib$ 

Yesterday I got some results for the segmentation and the heatmap, but my pixel_AUROC and pixel_F1Score are still 0. Is that a problem?

@alexriedel1
Contributor

Are your ground truth masks binary files containing only 0's and 1's?
`UserWarning: No positive samples in targets, true positive value should be meaningless. Returning zero tensor in true positive score` sounds to me like there is something wrong with your masks...
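
A quick way to check this (a small sketch with PIL/NumPy; the mask path is the illustrative one from earlier in the thread):

# Hypothetical check, not anomalib code: list the unique pixel values in a mask.
import numpy as np
from PIL import Image

mask = np.array(Image.open("datasets/MVTec/BACK/ground_truth/bad/000065_mask.png").convert("L"))
print(mask.dtype, mask.shape, np.unique(mask))
# A usable segmentation mask should contain exactly two values: 0 for background
# and one positive value (e.g. 255) for the defective pixels.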

@djdameln
Contributor

sounds to me like there is something wrong with your masks...

That's what I suspect as well. I can replicate the issue by using a value other than 1.0 for the anomalous regions in the GT masks. Could you provide an example of one of the ground truth mask files of your dataset?
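
If the masks do turn out to contain values other than 0 and a single positive label, a one-off conversion along these lines could fix them (a hedged sketch; the path is an assumption, and the script overwrites the masks in place, so keep a backup):

# Force each mask to be strictly binary: any non-zero pixel becomes foreground (255).
from pathlib import Path

import numpy as np
from PIL import Image

mask_dir = Path("datasets/MVTec/BACK/ground_truth/bad")  # assumed mask location
for mask_path in sorted(mask_dir.glob("*.png")):
    mask = np.array(Image.open(mask_path).convert("L"))
    binary = np.where(mask > 0, 255, 0).astype(np.uint8)
    Image.fromarray(binary).save(mask_path)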

@ZeynepRuveyda
Author

I think my annotations were only made for classification. I double-checked it, and you are right. It can change depending on your mask info.
