📃 Add documentation for gradio inference (#427)
* Add documentation for gradio inference

* Minor edits
ashwinvaidya17 committed Jul 11, 2022
1 parent 00492cf commit b094410
Showing 3 changed files with 67 additions and 16 deletions.
10 changes: 10 additions & 0 deletions README.md
@@ -195,6 +195,16 @@ python tools/inference/openvino_inference.py \

> Ensure that you provide the path to `meta_data.json` if you want normalization to be applied correctly.

You can also use Gradio Inference to interact with the trained models using a UI. Refer to our [guide](https://openvinotoolkit.github.io/anomalib/guides/inference.html#gradio-inference) for more details.

A quick example:

```bash
python tools/inference/gradio_inference.py \
--config ./anomalib/models/padim/config.yaml \
--weights ./results/padim/mvtec/bottle/weights/model.ckpt
```

## Hyperparameter Optimization

To run hyperparameter optimization, use the following command:
71 changes: 56 additions & 15 deletions docs/source/guides/inference.rst
@@ -10,7 +10,7 @@ PyTorch (Lightning) Inference
The entrypoint script in ``tools/inference/lightning.py`` can be used to run inference with a trained PyTorch model. The script runs inference by loading a previously trained model into a PyTorch Lightning trainer and running the ``predict`` sequence. The entrypoint script accepts several command line arguments that configure inference:

+---------------------+----------+---------------------------------------------------------------------------------+
| Parameter           | Required | Description                                                                     |
+=====================+==========+=================================================================================+
| config | True | Path to the model config file. |
+---------------------+----------+---------------------------------------------------------------------------------+
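
As a sketch, a Lightning inference run could look like the following; the ``--weights`` and ``--input`` flag names and the sample image path are assumptions, mirroring the OpenVINO entrypoint described below:

.. code-block:: bash

    # Sketch only: flag names and the image path below are assumed, not taken from the script itself.
    python tools/inference/lightning.py \
        --config ./anomalib/models/padim/config.yaml \
        --weights ./results/padim/mvtec/bottle/weights/model.ckpt \
        --input ./datasets/MVTec/bottle/test/broken_large/000.png
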
@@ -37,20 +37,20 @@ OpenVINO Inference
==================
To run OpenVINO inference, first make sure that your model has been exported to the OpenVINO IR format. Once the model has been exported, OpenVINO inference can be triggered by running the OpenVINO entrypoint script in ``tools/inference/openvino.py``. The command line arguments are very similar to those of the PyTorch inference entrypoint script:

+-----------+----------+---------------------------------------------------------------------------------------+
| Parameter | Required | Description                                                                           |
+===========+==========+=======================================================================================+
| config    | True     | Path to the model config file.                                                        |
+-----------+----------+---------------------------------------------------------------------------------------+
| weights   | True     | Path to the OpenVINO IR model file (either ``.xml`` or ``.bin``).                     |
+-----------+----------+---------------------------------------------------------------------------------------+
| image     | True     | Path to the image source. This can be a single image or a folder of images.          |
+-----------+----------+---------------------------------------------------------------------------------------+
| save_data | False    | Path to which the output images should be saved. Leave empty for live visualization. |
+-----------+----------+---------------------------------------------------------------------------------------+
| meta_data | True     | Path to the JSON file containing the model's meta data (e.g. normalization           |
|           |          | parameters and anomaly score threshold).                                              |
+-----------+----------+---------------------------------------------------------------------------------------+

For correct inference results, the ``meta_data`` argument should be specified and point to the ``meta_data.json`` file that was generated when exporting the OpenVINO IR model. The file is stored in the same folder as the ``.xml`` and ``.bin`` files of the model.

@@ -59,3 +59,44 @@ As an example, OpenVINO inference can be triggered by the following command:
``python tools/inference/openvino.py --config padim.yaml --weights results/openvino/model.xml --image image.png --meta_data results/openvino/meta_data.json``

Similar to PyTorch inference, the visualization results will be displayed on the screen, and optionally saved to the file system location specified by the ``save_data`` parameter.



Gradio Inference
================

Gradio inference is supported for both PyTorch and OpenVINO models.

+-----------+----------+-------------------------------------------------------------------+
| Parameter | Required | Description                                                       |
+===========+==========+===================================================================+
| config    | True     | Path to the model config file.                                    |
+-----------+----------+-------------------------------------------------------------------+
| weights   | True     | Path to the OpenVINO IR model file (either ``.xml`` or ``.bin``). |
+-----------+----------+-------------------------------------------------------------------+
| meta_data | False    | Path to the JSON file containing the model's meta data.           |
|           |          | This is needed only for OpenVINO models.                          |
+-----------+----------+-------------------------------------------------------------------+
| threshold | False    | Threshold value used for identifying anomalies. Range 1-100.      |
+-----------+----------+-------------------------------------------------------------------+
| share     | False    | Share the Gradio interface via a public ``share_url``.            |
+-----------+----------+-------------------------------------------------------------------+

To use Gradio with an OpenVINO model, first make sure that the model has been exported to the OpenVINO IR format, and ensure that the ``meta_data`` argument points to the ``meta_data.json`` file that was generated when exporting the OpenVINO IR model. The file is stored in the same folder as the model's ``.xml`` and ``.bin`` files.

As an example, a PyTorch model can be used with the following command:

.. code-block:: bash

    python tools/inference/gradio_inference.py \
        --config ./anomalib/models/padim/config.yaml \
        --weights ./results/padim/mvtec/bottle/weights/model.ckpt

Similarly, an OpenVINO model can be used with the following command:

.. code-block:: bash

    python tools/inference/gradio_inference.py \
        --config ./anomalib/models/padim/config.yaml \
        --weights ./results/padim/mvtec/bottle/openvino/openvino_model.onnx \
        --meta_data ./results/padim/mvtec/bottle/openvino/meta_data.json
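
To expose the interface through a temporary public link, the ``share`` argument from the table above can also be passed. A sketch, assuming the argument takes a boolean value:

.. code-block:: bash

    # Sketch: the "--share True" form is an assumption about how the boolean argument is parsed.
    python tools/inference/gradio_inference.py \
        --config ./anomalib/models/padim/config.yaml \
        --weights ./results/padim/mvtec/bottle/weights/model.ckpt \
        --share True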
2 changes: 1 addition & 1 deletion tools/inference/gradio_inference.py
@@ -98,7 +98,7 @@ def get_inferencer(config_path: Path, weight_path: Path, meta_data_path: Optiona

elif extension in (".onnx", ".bin", ".xml"):
openvino_inferencer = getattr(module, "OpenVINOInferencer")
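# Instantiate the OpenVINO inferencer with the parsed config, the model weights, and the optional meta data path.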
-inferencer = openvino_inferencer(config=config_path, path=weight_path, meta_data_path=meta_data_path)
+inferencer = openvino_inferencer(config=config, path=weight_path, meta_data_path=meta_data_path)

else:
raise ValueError(
