From 21630b5b68c24374e4bf518719cd9db15c158466 Mon Sep 17 00:00:00 2001
From: Dick Ameln
Date: Mon, 29 Aug 2022 14:44:57 +0200
Subject: [PATCH] update export documentation

---
 docs/source/guides/export.rst | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/docs/source/guides/export.rst b/docs/source/guides/export.rst
index c699dd0b07..04c4d38a1d 100644
--- a/docs/source/guides/export.rst
+++ b/docs/source/guides/export.rst
@@ -1,20 +1,27 @@
 Export & Optimization
 --------------
-This page will explain how to export your trained models to OpenVINO format, and how the performance of the exported OpenVINO models can be optimized. For an explanation how the exported models can be deployed, please refer to the inference guide: :ref:`_inference_documentation`.
+This page will explain how to export your trained models to ONNX and OpenVINO format, and how the performance of the exported OpenVINO models can be optimized. For an explanation of how the exported models can be deployed, please refer to the inference guide: :ref:`inference_documentation`.
 
 Export
 =======
-Anomalib models are fully compatible with the OpenVINO framework for accelerating inference on intel hardware. To export a model to OpenVINO format, simply set openvino optimization to ``true`` in the model config as shown below, and trigger a training run. When the training finishes, the trained model weights will be converted to OpenVINO Intermediate Representation (IR) format, and written to the file system in the chosen results folder.
+Anomalib models are fully compatible with the OpenVINO framework for accelerating inference on Intel hardware. To export a model to OpenVINO format, simply set the export mode to ``openvino`` in the model config as shown below, and trigger a training run. When the training finishes, the trained model weights will be converted to OpenVINO Intermediate Representation (IR) format and written to the file system in the chosen results folder. Since the OpenVINO model optimizer uses the ONNX format in one of the conversion steps, the ONNX model will be written to the file system as well.
 
 .. code-block:: none
     :caption: Add this configuration to your config.yaml file to export your model to OpenVINO IR after training.
 
     optimization:
-        openvino:
-            apply: true
+        export_mode: openvino
 
 As a prerequisite, make sure that all required packages listed in ``requirements/openvino.txt`` are installed in your environment.
 
+It is also possible to write only the ONNX model to the file system. This is done by setting the ``export_mode`` parameter to ``onnx``:
+
+.. code-block:: none
+    :caption: Add this configuration to your config.yaml file to export your model to ONNX format after training.
+
+    optimization:
+        export_mode: onnx
+
 Optimization
 =============
 Anomalib supports OpenVINO's Neural Network Compression Framework (NNCF) to further improve the performance of the exported OpenVINO models. NNCF optimizes the neural network components of the anomaly models during the training process, and can therefore achieve a better performance-accuracy trade-off than post-training approaches.
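
The final hunk introduces NNCF-based optimization but the patch ends before showing how it is configured. The sketch below is a hypothetical illustration only: the ``nncf`` block name and its placement under ``optimization`` are assumptions, not taken from this patch, while the ``input_info`` and ``compression`` keys follow NNCF's standard compression-configuration schema. Consult the anomalib model configs for the exact layout.

.. code-block:: yaml
    :caption: Hypothetical sketch of an NNCF section in config.yaml (key names assumed, not from this patch).

    optimization:
        export_mode: openvino                  # export the trained model to OpenVINO IR
        nncf:                                  # assumed block name; verify against your model's config
            apply: true                        # enable compression-aware training with NNCF
            input_info:
                sample_size: [1, 3, 256, 256]  # input shape: [batch, channels, height, width]
            compression:
                algorithm: quantization        # standard NNCF compression algorithm name

Because NNCF optimizes the network during training rather than after it, such a section would take effect at training time, in line with the performance-accuracy trade-off described in the Optimization paragraph above.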