
📃 Add documentation for gradio inference #427

Merged: 2 commits, Jul 11, 2022
10 changes: 10 additions & 0 deletions README.md
@@ -195,6 +195,16 @@ python tools/inference/openvino_inference.py \

> Ensure that you provide the path to `meta_data.json` if you want the normalization to be applied correctly.

You can also use Gradio Inference to interact with the trained models using a UI. Refer to our [guide](https://openvinotoolkit.github.io/anomalib/guides/inference.html#gradio-inference) for more details.

A quick example:

```bash
python tools/inference/gradio_inference.py \
--config ./anomalib/models/padim/config.yaml \
--weights ./results/padim/mvtec/bottle/weights/model.ckpt
```

## Hyperparameter Optimization

To run hyperparameter optimization, use the following command:
71 changes: 56 additions & 15 deletions docs/source/guides/inference.rst
@@ -10,7 +10,7 @@ PyTorch (Lightning) Inference
The entrypoint script in ``tools/inference/lightning.py`` can be used to run inference with a trained PyTorch model. The script runs inference by loading a previously trained model into a PyTorch Lightning trainer and running its ``predict`` sequence. The entrypoint script has several command line arguments that can be used to configure inference:

+---------------------+----------+---------------------------------------------------------------------------------+
| Parameter | Required | Description |
+=====================+==========+=================================================================================+
| config | True | Path to the model config file. |
+---------------------+----------+---------------------------------------------------------------------------------+
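
As an example, such an invocation might look like the following. This is a sketch only: apart from ``config``, the flag names are assumptions that mirror the OpenVINO script described below.

.. code-block:: bash

    # Sketch: --weights and --image are assumed flag names, not confirmed above
    python tools/inference/lightning.py \
        --config anomalib/models/padim/config.yaml \
        --weights results/padim/mvtec/bottle/weights/model.ckpt \
        --image path/to/image.png
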
@@ -37,20 +37,20 @@ OpenVINO Inference
==================
To run OpenVINO inference, first make sure that your model has been exported to the OpenVINO IR format. Once the model has been exported, OpenVINO inference can be triggered by running the OpenVINO entrypoint script in ``tools/inference/openvino.py``. The command line arguments are very similar to those of the PyTorch inference entrypoint script:

+-----------+----------+--------------------------------------------------------------------------------------+
| Parameter | Required | Description |
+===========+==========+======================================================================================+
| config | True | Path to the model config file. |
+-----------+----------+--------------------------------------------------------------------------------------+
| weights | True | Path to the OpenVINO IR model file (either ``.xml`` or ``.bin``) |
+-----------+----------+--------------------------------------------------------------------------------------+
| image | True | Path to the image source. This can be a single image or a folder of images. |
+-----------+----------+--------------------------------------------------------------------------------------+
| save_data | False | Path to which the output images should be saved. Leave empty for live visualization. |
+-----------+----------+--------------------------------------------------------------------------------------+
| meta_data | True | Path to the JSON file containing the model's meta data (e.g. normalization |
| | | parameters and anomaly score threshold). |
+-----------+----------+--------------------------------------------------------------------------------------+

For correct inference results, the ``meta_data`` argument should be specified and point to the ``meta_data.json`` file that was generated when exporting the OpenVINO IR model. The file is stored in the same folder as the ``.xml`` and ``.bin`` files of the model.
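
As an illustrative sketch only (the key names and values below are assumptions; the real contents are whatever the export step wrote), such a file might look like:

.. code-block:: json

    {
        "image_threshold": 0.5,
        "pixel_threshold": 0.5,
        "min": 0.0,
        "max": 1.0
    }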

@@ -59,3 +59,44 @@ As an example, OpenVINO inference can be triggered by the following command:
``python tools/inference/openvino.py --config padim.yaml --weights results/openvino/model.xml --input image.png --meta_data results/openvino/meta_data.json``

Similar to PyTorch inference, the visualization results will be displayed on the screen, and optionally saved to the file system location specified by the ``save_data`` parameter.



Gradio Inference
================

Gradio inference is supported for both PyTorch and OpenVINO models.

+-----------+----------+------------------------------------------------------------------+
| Parameter | Required | Description |
+===========+==========+==================================================================+
| config | True | Path to the model config file. |
+-----------+----------+------------------------------------------------------------------+
| weights   | True     | Path to model weights: PyTorch ``.ckpt`` or OpenVINO IR file.    |
+-----------+----------+------------------------------------------------------------------+
| meta_data | False    | Path to the JSON file containing the model's meta data.          |
|           |          | This is needed only for the OpenVINO model.                      |
+-----------+----------+------------------------------------------------------------------+
| threshold | False | Threshold value used for identifying anomalies. Range 1-100. |
+-----------+----------+------------------------------------------------------------------+
| share     | False    | Share the Gradio UI via a public link (``share_url``).           |
+-----------+----------+------------------------------------------------------------------+

To use Gradio with an OpenVINO model, first make sure that the model has been exported to the OpenVINO IR format, and ensure that the ``meta_data`` argument points to the ``meta_data.json`` file that was generated when exporting the OpenVINO IR model. The file is stored in the same folder as the ``.xml`` and ``.bin`` files of the model.

As an example, a PyTorch model can be used with the following command:

.. code-block:: bash

python tools/inference/gradio_inference.py \
--config ./anomalib/models/padim/config.yaml \
--weights ./results/padim/mvtec/bottle/weights/model.ckpt
Comment on lines +91 to +93

Contributor:
Maybe in another PR, but I think we should use the lightning inference to handle the meta data automatically. Otherwise, we need to manually tune the threshold in the UI.

Collaborator Author:
I think this requires a refactor, but we should load the initial value of the threshold from the model/metadata JSON file. With that as the default value, users can then play around with the threshold.


Similarly, an OpenVINO model can be used with the following command:

.. code-block:: bash

python tools/inference/gradio_inference.py \
--config ./anomalib/models/padim/config.yaml \
--weights ./results/padim/mvtec/bottle/openvino/openvino_model.onnx \
--meta_data ./results/padim/mvtec/bottle/openvino/meta_data.json
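
The UI can also be exposed through a public Gradio link via the ``share`` parameter from the table above. A sketch only, assuming the flag takes a boolean value:

.. code-block:: bash

    # Sketch: assumes --share accepts a boolean, per the parameter table
    python tools/inference/gradio_inference.py \
        --config ./anomalib/models/padim/config.yaml \
        --weights ./results/padim/mvtec/bottle/weights/model.ckpt \
        --share True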
2 changes: 1 addition & 1 deletion tools/inference/gradio_inference.py
@@ -98,7 +98,7 @@ def get_inferencer(config_path: Path, weight_path: Path, meta_data_path: Optional

    elif extension in (".onnx", ".bin", ".xml"):
        openvino_inferencer = getattr(module, "OpenVINOInferencer")
-        inferencer = openvino_inferencer(config=config_path, path=weight_path, meta_data_path=meta_data_path)
+        inferencer = openvino_inferencer(config=config, path=weight_path, meta_data_path=meta_data_path)

    else:
        raise ValueError(
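
For context, a minimal, hypothetical sketch of calling this helper; the call site below is not part of the diff, and the paths are placeholders:

```python
from pathlib import Path

# Hypothetical usage: the extension of weight_path selects the backend, and
# after this fix the parsed config object (not the raw path) reaches the
# OpenVINO inferencer.
inferencer = get_inferencer(
    config_path=Path("anomalib/models/padim/config.yaml"),
    weight_path=Path("results/padim/mvtec/bottle/openvino/openvino_model.xml"),
    meta_data_path=Path("results/padim/mvtec/bottle/openvino/meta_data.json"),
)
```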