Describe the bug

When I use PatchCore to train on MVTec data (the `bottle` category), training stalls at `Validation: 0it [00:00, ?it/s]` and the process cannot continue.
The config I used:

```yaml
dataset:
  name: mvtec # options: [mvtec, btech, folder]
  format: mvtec
  path: D:/PythonProject/anomalib/datasets/MVTec
  task: segmentation
  category: bottle
  image_size: 224
  train_batch_size: 32
  test_batch_size: 1
  num_workers: 8
  transform_config:
    train: null
    val: null
  create_validation_set: false
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: patchcore
  backbone: wide_resnet50_2
  pre_trained: true
  layers:
    - layer2
    - layer3
  coreset_sampling_ratio: 0.1
  num_neighbors: 9
  normalization_method: min_max # options: [null, min_max, cdf]

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    image_default: 0
    pixel_default: 0
    adaptive: true

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]

project:
  seed: 0
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  export_mode: null # options: onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  accumulate_grad_batches: 1
  amp_backend: native
  auto_lr_find: false
  auto_scale_batch_size: false
  auto_select_gpus: false
  benchmark: false
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  default_root_dir: null
  detect_anomaly: false
  deterministic: false
  devices: 1
  enable_checkpointing: true
  enable_model_summary: true
  enable_progress_bar: true
  fast_dev_run: false
  gpus: null # Set automatically
  gradient_clip_val: 0
  ipus: null
  limit_predict_batches: 1.0
  limit_test_batches: 1.0
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  log_every_n_steps: 50
  log_gpu_memory: null
  max_epochs: 1
  max_steps: -1
  max_time: null
  min_epochs: null
  min_steps: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
  num_nodes: 1
  num_processes: null
  num_sanity_val_steps: 0
  overfit_batches: 0.0
  plugins: null
  precision: 32
  profiler: null
  reload_dataloaders_every_n_epochs: 0
  replace_sampler_ddp: true
  strategy: null
  sync_batchnorm: false
  tpu_cores: null
  track_grad_norm: -1
  val_check_interval: 1.0 # Don't validate before extracting features.
```

To Reproduce
Steps to reproduce the behavior:
nothing
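No steps were given, but the log banner shows the entry point `D:/PythonProject/anomalib/tools/MyTest.py`. A minimal sketch of what such a script presumably does, modelled on anomalib's own `tools/train.py` (anomalib 0.3.x API); the script contents and the config path are assumptions:

```python
# Hypothetical reconstruction of tools/MyTest.py, following the pattern of
# anomalib's tools/train.py. The config_path below is an assumption.
from pytorch_lightning import Trainer

from anomalib.config import get_configurable_parameters
from anomalib.data import get_datamodule
from anomalib.models import get_model
from anomalib.utils.callbacks import get_callbacks

config = get_configurable_parameters(config_path="anomalib/models/patchcore/config.yaml")

datamodule = get_datamodule(config)  # MVTec, category "bottle", as configured above
model = get_model(config)            # PatchCore with a wide_resnet50_2 backbone
callbacks = get_callbacks(config)

trainer = Trainer(**config.trainer, callbacks=callbacks)
trainer.fit(model=model, datamodule=datamodule)  # hangs at "Validation: 0it [00:00, ?it/s]"
```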
Expected behavior

```
C:\Users\fx50j.conda\envs\anomalib_env\python.exe D:/PythonProject/anomalib/tools/MyTest.py
1.12.0+cpu
None
None
False
0
Transform configs has not been provided. Images will be normalized using ImageNet statistics.
Transform configs has not been provided. Images will be normalized using ImageNet statistics.
C:\Users\fx50j.conda\envs\anomalib_env\lib\site-packages\torch\utils\data\dataloader.py:557: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 4 (cpuset is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
dict_keys(['image', 'image_path', 'label', 'mask_path', 'mask'])
torch.Size([1, 3, 224, 224])
torch.Size([1, 224, 224])
C:\Users\fx50j.conda\envs\anomalib_env\lib\site-packages\torchmetrics\utilities\prints.py:36: UserWarning: Metric `PrecisionRecallCurve` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
  warnings.warn(*args, **kwargs)
D:\PythonProject\anomalib\anomalib\utils\callbacks\__init__.py:133: UserWarning: Export option: None not found. Defaulting to no model export
  warnings.warn(f"Export option: {config.optimization.export_mode} not found. Defaulting to no model export")
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
`Trainer(limit_train_batches=1.0)` was configured so 100% of the batches per epoch will be used..
`Trainer(limit_val_batches=1.0)` was configured so 100% of the batches will be used..
`Trainer(limit_test_batches=1.0)` was configured so 100% of the batches will be used..
`Trainer(limit_predict_batches=1.0)` was configured so 100% of the batches will be used..
`Trainer(val_check_interval=1.0)` was configured so validation will run at the end of the training epoch..
Missing logger folder: results\patchcore\mvtec\bottle\lightning_logs
C:\Users\fx50j.conda\envs\anomalib_env\lib\site-packages\torchmetrics\utilities\prints.py:36: UserWarning: Metric `ROC` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
  warnings.warn(*args, **kwargs)
C:\Users\fx50j.conda\envs\anomalib_env\lib\site-packages\pytorch_lightning\core\optimizer.py:183: UserWarning: `LightningModule.configure_optimizers` returned `None`, this fit will run with no optimizer
  rank_zero_warn(
  | Name                  | Type                     | Params
-------------------------------------------------------------
0 | image_threshold       | AdaptiveThreshold        | 0
1 | pixel_threshold       | AdaptiveThreshold        | 0
2 | model                 | PatchcoreModel           | 24.9 M
3 | image_metrics         | AnomalibMetricCollection | 0
4 | pixel_metrics         | AnomalibMetricCollection | 0
5 | normalization_metrics | MinMax                   | 0
-------------------------------------------------------------
24.9 M    Trainable params
0         Non-trainable params
24.9 M    Total params
99.450    Total estimated model params size (MB)
C:\Users\fx50j.conda\envs\anomalib_env\lib\site-packages\torch\utils\data\dataloader.py:557: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 4 (cpuset is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
C:\Users\fx50j.conda\envs\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:1933: PossibleUserWarning: The number of training batches (7) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
rank_zero_warn(
Epoch 0:   1%|          | 1/90 [01:07<1:40:34, 67.80s/it, loss=nan, v_num=0]
C:\Users\fx50j.conda\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py:137: UserWarning: `training_step` returned `None`. If this was on purpose, ignore this warning...
  self.warning_cache.warn("`training_step` returned `None`. If this was on purpose, ignore this warning...")
Epoch 0:   8%|▊         | 7/90 [01:51<22:00, 15.91s/it, loss=nan, v_num=0]
Validation: 0it [00:00, ?it/s]
```
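For reference, the five bare lines at the top of the log (`1.12.0+cpu`, `None`, `None`, `False`, `0`) look like environment prints from MyTest.py rather than anomalib output. The exact statements are not shown, but the values are consistent with something like the following (an assumption):

```python
import torch

print(torch.__version__)               # 1.12.0+cpu -> CPU-only PyTorch build
print(torch.version.cuda)              # None: no CUDA runtime compiled in
print(torch.backends.cudnn.version())  # None: no cuDNN available
print(torch.cuda.is_available())       # False
print(torch.cuda.device_count())       # 0
```

Likewise, the `dict_keys([...])` and `torch.Size([...])` lines look like manual prints of a single test batch. Either way, the run is confirmed CPU-only, consistent with `GPU available: False, used: False` later in the log.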
Screenshots
If applicable, add screenshots to help explain your problem.
Hardware and Software Configuration
OS: [Ubuntu, OD]
NVIDIA Driver Version [470.57.02]
CUDA Version [e.g. 11.4]
CUDNN Version [e.g. v11.4.120]
OpenVINO Version [Optional e.g. v2021.4.2]
Additional context
Add any other context about the problem here.
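One detail from the log is worth flagging here: PyTorch itself warns that the DataLoader is creating 8 workers while the suggested maximum on this machine is 4, and that excessive workers can make the DataLoader "slow or even freeze". Since the hang occurs exactly when the validation dataloader starts, lowering `num_workers` is a cheap experiment. A minimal sketch, assuming the script structure sketched above (`config` as returned by `get_configurable_parameters`):

```python
# Hypothetical override, applied before get_datamodule(config) is called;
# equivalently, edit `num_workers:` under `dataset:` in the YAML config above.
config.dataset.num_workers = 0  # 0 loads data in the main process, ruling out worker deadlocks
```

On Windows, DataLoader workers are spawned processes, so a `num_workers=0` run is the quickest way to tell a worker deadlock from a genuine validation-loop problem.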