FileNotFoundError: [Errno 2] No such file or directory: 'results/padim/mvtec/bottle/results/padim/mvtec/bottle/weights/model.ckpt' #406

Closed
billyc97 opened this issue Jul 5, 2022 · 4 comments · Fixed by #422
Labels: Bug (Something isn't working), Inference


billyc97 commented Jul 5, 2022

Before going into this bug: the Inference section of the documentation gives the following sample command:

python tools/inference.py \
    --config anomalib/models/padim/config.yaml \
    --weight_path results/padim/mvtec/bottle/weights/model.ckpt \
    --image_path datasets/MVTec/bottle/test/broken_large/000.png

But there is no such file as tools/inference.py, so I assume it should be tools/inference/lightning.py.

Steps to reproduce the behavior:

  1. pip install anomalib==0.3.3
  2. python tools/train.py
  3. mkdir test_result
  4. python tools/inference/lightning.py \
    --config anomalib/models/padim/config.yaml \
    --weight_path results/padim/mvtec/bottle/weights/model.ckpt \
    --image_path datasets/MVTec/bottle/test/broken_large/000.png \
    --save_path test_result

Error output:

/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called full_state_update that has
not been set for this class (AdaptiveThreshold). The property determines if update by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to False.
We provide an checking function
from torchmetrics.utilities import check_forward_no_full_state
that can be used to check if the full_state_update=True (old and potential slower behaviour,
default for now) or if full_state_update=False can be used safely.

warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric PrecisionRecallCurve will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called full_state_update that has
not been set for this class (AnomalyScoreDistribution). The property determines if update by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to False.
We provide an checking function
from torchmetrics.utilities import check_forward_no_full_state
that can be used to check if the full_state_update=True (old and potential slower behaviour,
default for now) or if full_state_update=False can be used safely.

warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called full_state_update that has
not been set for this class (MinMax). The property determines if update by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to False.
We provide an checking function
from torchmetrics.utilities import check_forward_no_full_state
that can be used to check if the full_state_update=True (old and potential slower behaviour,
default for now) or if full_state_update=False can be used safely.

warnings.warn(*args, **kwargs)
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Trainer(limit_train_batches=1.0) was configured so 100% of the batches per epoch will be used..
Trainer(limit_val_batches=1.0) was configured so 100% of the batches will be used..
Trainer(limit_test_batches=1.0) was configured so 100% of the batches will be used..
Trainer(limit_predict_batches=1.0) was configured so 100% of the batches will be used..
Trainer(val_check_interval=1.0) was configured so validation will run at the end of the training epoch..
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric ROC will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
warnings.warn(*args, **kwargs)
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Traceback (most recent call last):
File "tools/inference/lightning.py", line 72, in
infer()
File "tools/inference/lightning.py", line 68, in infer
trainer.predict(model=model, dataloaders=[dataloader])
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1026, in predict
self._predict_impl, model, dataloaders, datamodule, return_predictions, ckpt_path
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 723, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1072, in _predict_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1236, in _run
results = self._run_stage()
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1322, in _run_stage
return self._run_predict()
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1381, in _run_predict
return self.predict_loop.run()
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/base.py", line 199, in run
self.on_run_start(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/dataloader/prediction_loop.py", line 84, in on_run_start
self._on_predict_start()
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/dataloader/prediction_loop.py", line 123, in _on_predict_start
self.trainer._call_callback_hooks("on_predict_start")
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1636, in _call_callback_hooks
fn(self, self.lightning_module, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/anomalib/utils/callbacks/model_loader.py", line 49, in on_predict_start
pl_module.load_state_dict(torch.load(self.weights_path)["state_dict"])
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 699, in load
with _open_file_like(f, 'rb') as opened_file:
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 231, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 212, in init
super(_open_file, self).init(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'results/padim/mvtec/bottle/results/padim/mvtec/bottle/weights/model.ckpt'

Not sure what is causing the path to be duplicated like that.
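
For what it's worth, the duplicated prefix looks like what you would get if the inference script joined the config's project.path (which already points at results/padim/mvtec/bottle for the padim config) with the --weight_path value, which repeats that prefix. A minimal sketch of that hypothesis (the variable names are illustrative, not anomalib's actual code):

```python
import os

# Assumed: the script resolves --weight_path relative to the config's project.path.
project_path = "results/padim/mvtec/bottle"  # project.path from the padim config.yaml
weight_path = "results/padim/mvtec/bottle/weights/model.ckpt"  # value passed on the CLI

resolved = os.path.join(project_path, weight_path)
print(resolved)
# results/padim/mvtec/bottle/results/padim/mvtec/bottle/weights/model.ckpt
# -> exactly the missing path reported in the FileNotFoundError above
```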

Hardware and Software Configuration

  • The program is run on Google Colab

Liamkk commented Jul 8, 2022

--weight_path weights/model.ckpt

samet-akcay self-assigned this Jul 8, 2022
samet-akcay added the Bug (Something isn't working) and Inference labels Jul 8, 2022

billyc97 commented Jul 8, 2022

@Liamkk

--weight_path weights/model.ckpt

This does temporarily fix the issue, but I am getting another issue after running the code.

To run the code:

python tools/inference/lightning.py \
    --config anomalib/models/padim/config.yaml \
    --weight_path weights/model.ckpt \
    --image_path datasets/MVTec/bottle/test/broken_large/000.png \
    --save_path test_result

Error output:

/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called full_state_update that has
not been set for this class (AdaptiveThreshold). The property determines if update by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to False.
We provide an checking function
from torchmetrics.utilities import check_forward_no_full_state
that can be used to check if the full_state_update=True (old and potential slower behaviour,
default for now) or if full_state_update=False can be used safely.

warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric PrecisionRecallCurve will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called full_state_update that has
not been set for this class (AnomalyScoreDistribution). The property determines if update by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to False.
We provide an checking function
from torchmetrics.utilities import check_forward_no_full_state
that can be used to check if the full_state_update=True (old and potential slower behaviour,
default for now) or if full_state_update=False can be used safely.

warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called full_state_update that has
not been set for this class (MinMax). The property determines if update by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to False.
We provide an checking function
from torchmetrics.utilities import check_forward_no_full_state
that can be used to check if the full_state_update=True (old and potential slower behaviour,
default for now) or if full_state_update=False can be used safely.

warnings.warn(*args, **kwargs)
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Trainer(limit_train_batches=1.0) was configured so 100% of the batches per epoch will be used..
Trainer(limit_val_batches=1.0) was configured so 100% of the batches will be used..
Trainer(limit_test_batches=1.0) was configured so 100% of the batches will be used..
Trainer(limit_predict_batches=1.0) was configured so 100% of the batches will be used..
Trainer(val_check_interval=1.0) was configured so validation will run at the end of the training epoch..
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric ROC will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
warnings.warn(*args, **kwargs)
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Predicting DataLoader 0: 0% 0/1 [00:00<?, ?it/s]qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/usr/local/lib/python3.7/dist-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb.

But the results are saved in the specified "test_result" directory. Not sure if anyone else faces this issue.

samet-akcay (Contributor) commented

@billyc97, are you running anomalib on a server? The lightning script tries to display the output image(s), which is what causes this error. If you don't want to show the output images, you can disable it:

python tools/inference/lightning.py \
    --config anomalib/models/padim/config.yaml \
    --weight_path weights/model.ckpt \
    --image_path datasets/MVTec/bottle/test/broken_large/000.png \
    --save_path test_result \
    --disable_show_images
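
For context, here is a rough, hypothetical sketch (not anomalib's actual visualizer code) of the kind of display guard that avoids this crash on headless machines; --disable_show_images simply skips the display step altogether, which has the same effect:

```python
import os

import cv2
import numpy as np


def show_if_display_available(title: str, image: np.ndarray) -> None:
    """Open an OpenCV preview window only when an X display is present.

    On a headless machine (e.g. Google Colab) DISPLAY is unset, so calling
    cv2.imshow would fail with the Qt "xcb" platform plugin error seen above.
    """
    if os.environ.get("DISPLAY"):
        cv2.imshow(title, image)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
    else:
        print(f"No display found; skipping preview of '{title}'")


# The saved output in test_result/ is written to disk either way, e.g.:
# show_if_display_available("prediction", cv2.imread("test_result/000.png"))
```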

By the way, we just merged a fix for some of the inference code, which also renames lightning.py to lightning_inference.py. This new inference approach is not yet stable and is subject to change. Apologies for the inconvenience.
