Inference Bug - Too many variables to unpack #56

Closed
marvision-ai opened this issue Jan 4, 2022 · 4 comments · Fixed by #60
marvision-ai commented Jan 4, 2022

Hello, thank you for the great repo!

Describe the bug
I cannot run inference.

To Reproduce
Steps to reproduce the behavior:
conda install openvino-ie4py-ubuntu20 -c intel
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge

python tools/inference.py \
    --model_config_path anomalib/models/padim/config.yaml \
    --weight_path results/padim/mvtec/leather/weights/model.ckpt \
    --image_path datasets/MVTec/leather/test/color/000.png
Traceback (most recent call last):
  File "tools/inference.py", line 90, in <module>
    infer()
  File "tools/inference.py", line 78, in infer
    output = inference.predict(image=args.image_path, superimpose=True)
  File "anomalib/anomalib/core/model/inference.py", line 90, in predict
    anomaly_map, pred_score = self.post_process(predictions, meta_data=meta_data)
ValueError: too many values to unpack (expected 2)
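
For context, the error is plain tuple unpacking: predict expects two return values from post_process, so the exception means post_process now returns more than two. A minimal sketch of that mismatch, using hypothetical names rather than the actual anomalib code:

# Hypothetical illustration of the failure mode: the callee grew a third
# return value, but the caller still unpacks exactly two.
def post_process(predictions):
    anomaly_map = predictions["anomaly_map"]
    pred_score = predictions["pred_score"]
    pred_mask = predictions["pred_mask"]  # newly added output
    return anomaly_map, pred_score, pred_mask

predictions = {"anomaly_map": [[0.1, 0.2]], "pred_score": 0.9, "pred_mask": [[0, 1]]}
anomaly_map, pred_score = post_process(predictions)
# ValueError: too many values to unpack (expected 2)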

Hardware and Software Configuration

  • OS: [Ubuntu, 20]
  • NVIDIA Driver Version [470.57.02]
  • CUDA Version [e.g. 11.4]
  • CUDNN Version [e.g. v11.4.120]
  • OpenVINO Version [Optional e.g. v2021.4.2]

Additional context
I also noticed that when I run inference from a .ckpt file it still imports openvino. I expected it not to, since that path should run purely in PyTorch.

@samet-akcay (Contributor)

Thanks for spotting this @marvision-ai! There have been some changes in the OpenVinoInferencer. These changes should also be applied to TorchInferencer's post_process method. Any thoughts @djdameln, @ashwinvaidya17?
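
A minimal sketch of that alignment, assuming post_process should hand back exactly the (anomaly_map, pred_score) pair that predict unpacks; the names and signatures below are illustrative, not the exact anomalib API:

import numpy as np

def post_process(predictions, meta_data=None):
    # Illustrative only: normalize whatever the model emits into exactly the
    # two values the caller expects.
    anomaly_map = np.asarray(predictions).squeeze()
    pred_score = float(anomaly_map.max())
    return anomaly_map, pred_score

def predict(predictions, meta_data=None):
    # Caller side stays unchanged: a two-value unpack now matches the return.
    anomaly_map, pred_score = post_process(predictions, meta_data=meta_data)
    return anomaly_map, pred_score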

@ashwinvaidya17 (Collaborator)

@samet-akcay I have fixed this in PR #17
https://github.com/openvinotoolkit/anomalib/blob/feature/ashwin/benchmarking_tools/anomalib/core/model/inference.py#L185

samet-akcay linked a pull request on Jan 5, 2022 that will close this issue

xxl007 commented Jan 5, 2022

Thanks for the great repo. Reporting the same issue.

ashwinvaidya17 removed a link to a pull request on Jan 6, 2022
ashwinvaidya17 linked a pull request on Jan 6, 2022 that will close this issue
@marvision-ai (Author)

Thank you @samet-akcay and @ashwinvaidya17 for the very fast turnaround!

Is there a reason why loading and running inference from a .ckpt file still needs to import openvino? The documentation states that this path should run purely in PyTorch:

If the specified weight path points to a PyTorch Lightning checkpoint file (.ckpt), inference will run in PyTorch. If the path points to an ONNX graph (.onnx) or OpenVINO IR (.bin or .xml), inference will run in OpenVINO.
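
Given that behaviour, the dispatch on the weight file's extension could defer the openvino import to the branch that actually needs it. A rough sketch under that assumption (module path taken from the traceback above, class names from this thread, neither verified against the current code):

from pathlib import Path

def get_inferencer_class(weight_path):
    # Hypothetical dispatcher: pick the backend from the file extension and
    # only import OpenVINO-backed code when an OpenVINO/ONNX artifact is given.
    suffix = Path(weight_path).suffix.lower()
    if suffix == ".ckpt":
        # Pure PyTorch path: no openvino import should be triggered here.
        from anomalib.core.model.inference import TorchInferencer  # assumed location
        return TorchInferencer
    if suffix in {".onnx", ".bin", ".xml"}:
        from anomalib.core.model.inference import OpenVinoInferencer  # assumed location
        return OpenVinoInferencer
    raise ValueError(f"Unsupported weight file format: {suffix}")

With that layout, running tools/inference.py against a .ckpt would never touch the openvino import mentioned in the report above.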

samet-akcay added the Bug (Something isn't working) label on Jan 7, 2022
samet-akcay added this to To do in Bug fixes via automation on Jan 7, 2022
Bug fixes automation moved this from To do to Done on Jan 10, 2022