Update export documentation #521

Merged 1 commit on Aug 29, 2022
15 changes: 11 additions & 4 deletions docs/source/guides/export.rst
@@ -1,20 +1,27 @@
Export & Optimization
---------------------
-This page will explain how to export your trained models to OpenVINO format, and how the performance of the exported OpenVINO models can be optimized. For an explanation how the exported models can be deployed, please refer to the inference guide: :ref:`_inference_documentation`.
+This page will explain how to export your trained models to ONNX and OpenVINO formats, and how the performance of the exported OpenVINO models can be optimized. For an explanation of how the exported models can be deployed, please refer to the inference guide: :ref:`inference_documentation`.

Export
=======
-Anomalib models are fully compatible with the OpenVINO framework for accelerating inference on intel hardware. To export a model to OpenVINO format, simply set openvino optimization to ``true`` in the model config as shown below, and trigger a training run. When the training finishes, the trained model weights will be converted to OpenVINO Intermediate Representation (IR) format, and written to the file system in the chosen results folder.
+Anomalib models are fully compatible with the OpenVINO framework for accelerating inference on Intel hardware. To export a model to OpenVINO format, simply set the export mode to ``openvino`` in the model config as shown below, and trigger a training run. When the training finishes, the trained model weights are converted to OpenVINO Intermediate Representation (IR) format and written to the file system in the chosen results folder. Since the OpenVINO model optimizer uses the ONNX format in one of its conversion steps, the ONNX model is written to the file system as well.

.. code-block:: none
   :caption: Add this configuration to your config.yaml file to export your model to OpenVINO IR after training.

   optimization:
-    openvino:
-      apply: true
+    export_mode: openvino

As a prerequisite, make sure that all required packages listed in ``requirements/openvino.txt`` are installed in your environment.
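Installing those prerequisites is a single pip command; a minimal sketch, assuming it is run from the root of the Anomalib repository:

.. code-block:: bash
   :caption: Install the OpenVINO export prerequisites (assumes the repository root as the working directory).

   pip install -r requirements/openvino.txt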

+It is also possible to write only the ONNX model to the file system. This is done by setting the ``export_mode`` parameter to ``onnx``:

+.. code-block:: none
+   :caption: Add this configuration to your config.yaml file to export your model to ONNX format after training.
+
+   optimization:
+     export_mode: onnx
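
As a quick sanity check, the exported model can be run outside of Anomalib with ``onnxruntime``. The sketch below is an illustration only; the file path (``results/model.onnx``) and the input shape are assumptions, so check your results folder and model config for the actual values.

.. code-block:: python
   :caption: Minimal sketch: running the exported ONNX model with onnxruntime (path and input shape are assumptions).

   import numpy as np
   import onnxruntime as ort

   # Hypothetical path; the actual location depends on your results folder.
   session = ort.InferenceSession("results/model.onnx")
   input_name = session.get_inputs()[0].name

   # Dummy batch with an assumed 256x256 RGB input size.
   dummy_batch = np.random.rand(1, 3, 256, 256).astype(np.float32)

   # Returns the raw model outputs, e.g. an anomaly map and/or an image-level score.
   outputs = session.run(None, {input_name: dummy_batch})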

Optimization
=============
Anomalib supports OpenVINO's Neural Network Compression Framework (NNCF) to further improve the performance of the exported OpenVINO models. NNCF optimizes the neural network components of the anomaly models during the training process, and can therefore achieve a better performance-accuracy trade-off than post-training approaches.
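By analogy with the ``export_mode`` settings above, NNCF would presumably be enabled from the same ``optimization`` section of ``config.yaml``. The sketch below is an assumption about the shape of that configuration, not the documented schema; consult the remainder of this guide for the supported options.

.. code-block:: none
   :caption: Hypothetical sketch: enabling NNCF in config.yaml (key names are assumptions).

   optimization:
     nncf:
       apply: true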