
STFPM and CFLOW torch inference error #473

Closed
ke-dev opened this issue Aug 2, 2022 · 1 comment · Fixed by #475

ke-dev commented Aug 2, 2022

Describe the solution you'd like
Hi developers, when I use CFLOW and STFPM for torch inference I encounter the following errors, but PatchCore and PaDiM run normally.

The first command and its error:

python tools/inference/torch_inference.py \
    --config anomalib/models/stfpm/config.yaml \
    --weights results/stfpm/mvtec/carpet/weights/model.ckpt \
    --input datasets/MVTec/carpet/test/defect_01/ \
    --output results/test_img/stfpm/defect_01

Traceback (most recent call last):
  File "tools/inference/torch_inference.py", line 104, in <module>
    infer()
  File "tools/inference/torch_inference.py", line 82, in infer
    predictions = inferencer.predict(image=image)
  File "/anomalib/anomalib/anomalib/anomalib/deploy/inferencers/base_inferencer.py", line 99, in predict
    output = self.post_process(predictions, meta_data=meta_data)
  File "/anomalib/anomalib/anomalib/anomalib/deploy/inferencers/torch_inferencer.py", line 146, in post_process
    anomaly_map = predictions.cpu().numpy()
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
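For context, this RuntimeError comes from converting a tensor that is still attached to the autograd graph. The snippet below is a minimal sketch of the failure and of the usual detach-first workaround; it is illustrative only and not necessarily the change made in PR #475.

```python
import torch

# Minimal reproduction of error 1: calling .numpy() on a tensor that still
# tracks gradients raises the RuntimeError shown in the traceback above.
predictions = torch.rand(1, 256, 256, requires_grad=True)

try:
    anomaly_map = predictions.cpu().numpy()
except RuntimeError as err:
    print(err)  # Can't call numpy() on Tensor that requires grad ...

# Common workaround (illustrative, not necessarily what PR #475 does):
# detach the tensor from the autograd graph before the numpy conversion.
anomaly_map = predictions.detach().cpu().numpy()
```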

The second command and its error:

python tools/inference/torch_inference.py \
    --config anomalib/models/cflow/config.yaml \
    --weights results/cflow/mvtec/carpet/weights/model.ckpt \
    --input datasets/MVTec/carpet/test/defect_01/ \
    --output results/test_img/cflow/defect_01

Traceback (most recent call last):
  File "tools/inference/torch_inference.py", line 104, in <module>
    infer()
  File "tools/inference/torch_inference.py", line 82, in infer
    predictions = inferencer.predict(image=image)
  File "/anomalib/anomalib/anomalib/anomalib/deploy/inferencers/base_inferencer.py", line 99, in predict
    output = self.post_process(predictions, meta_data=meta_data)
  File "/anomalib/anomalib/anomalib/anomalib/deploy/inferencers/torch_inferencer.py", line 174, in post_process
    anomaly_map, pred_score = self._normalize(anomaly_map, pred_score, meta_data)
  File "/anomalib/anomalib/anomalib/anomalib/deploy/inferencers/base_inferencer.py", line 166, in _normalize
    pred_scores = normalize_min_max(
  File "/anomalib/anomalib/anomalib/anomalib/post_processing/normalization/min_max.py", line 39, in normalize
    raise ValueError(f"Targets must be either Tensor or Numpy array. Received {type(targets)}")
ValueError: Targets must be either Tensor or Numpy array. Received <class 'numpy.float64'>
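Similarly, this ValueError is raised because the prediction score reaches the min-max normalization step as a bare numpy.float64 scalar, which is neither a torch.Tensor nor an np.ndarray. The sketch below reproduces that type check and a possible workaround; the function body is an assumption for illustration, not anomalib's actual implementation.

```python
import numpy as np

# Illustrative re-creation of the type check that fails in error 2; this is
# an assumption about the check's shape, not anomalib's actual code.
def normalize_min_max(targets, threshold, min_val, max_val):
    if not isinstance(targets, np.ndarray):  # anomalib also accepts torch.Tensor
        raise ValueError(f"Targets must be either Tensor or Numpy array. Received {type(targets)}")
    normalized = ((targets - threshold) / (max_val - min_val)) + 0.5
    return np.clip(normalized, 0, 1)

pred_score = np.float64(0.73)   # the scalar type the inferencer ends up passing
print(type(pred_score))         # <class 'numpy.float64'> -> rejected by the check

# Possible workaround (hypothetical, not necessarily what PR #475 does):
# wrap the scalar in a 0-d array before normalizing.
normalized = normalize_min_max(np.asarray(pred_score), 0.5, 0.0, 1.0)
print(normalized)
```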

samet-akcay (Contributor) commented

@ke-dev PR #475 fixes the issue.
