[DOCS] 2024.2 version update for 2024.2 (#24747)
Port from #24746

Docs version update to 2024.2
msmykx-intel committed May 29, 2024
1 parent 3af803b commit 00b6063
Showing 18 changed files with 30 additions and 35 deletions.
@@ -155,7 +155,7 @@ Step 2: Install OpenVINO Runtime Using the APT Package Manager
.. code-block:: sh
- sudo apt install openvino-2024.0.0
+ sudo apt install openvino-2024.2.0
.. note::

@@ -228,7 +228,7 @@ To uninstall OpenVINO Runtime via APT, run the following command based on your n

.. code-block:: sh
- sudo apt autoremove openvino-2024.0.0
+ sudo apt autoremove openvino-2024.2.0
What's Next?
@@ -52,7 +52,7 @@ Installing OpenVINO Runtime with Conan Package Manager
.. code-block:: sh
[requires]
- openvino/2024.1.0
+ openvino/2024.2.0
[generators]
CMakeDeps
CMakeToolchain
@@ -64,7 +64,7 @@ Installing OpenVINO Runtime with Anaconda Package Manager

.. code-block:: sh
- conda install -c conda-forge openvino=2024.1.0
+ conda install -c conda-forge openvino=2024.2.0
Congratulations! You've just Installed OpenVINO! For some use cases you may still
need to install additional components. Check the description below, as well as the
@@ -115,7 +115,7 @@ with the proper OpenVINO version number:

.. code-block:: sh
- conda remove openvino=2024.1.0
+ conda remove openvino=2024.2.0
What's Next?
############################################################
@@ -128,7 +128,7 @@ Install OpenVINO Runtime
.. code-block:: sh
- sudo yum install openvino-2024.0.0
+ sudo yum install openvino-2024.2.0
@@ -199,7 +199,7 @@ To uninstall OpenVINO Runtime via YUM, run the following command based on your n
.. code-block:: sh
- sudo yum autoremove openvino-2024.0.0
+ sudo yum autoremove openvino-2024.2.0
@@ -143,7 +143,7 @@ To uninstall OpenVINO Runtime via ZYPPER, run the following command based on you

.. code-block:: sh
- sudo zypper remove *openvino-2024.0.0*
+ sudo zypper remove *openvino-2024.2.0*
@@ -9,7 +9,7 @@ Bert Benchmark Python Sample


This sample demonstrates how to estimate performance of a Bert model using Asynchronous
- Inference Request API. Unlike `demos <https://docs.openvino.ai/nightly/omz_demos.html>`__ this sample does not have
+ Inference Request API. Unlike `demos <https://docs.openvino.ai/2024/omz_demos.html>`__ this sample does not have
configurable command line arguments. Feel free to modify sample's source code to
try out different options.

@@ -264,7 +264,7 @@ You need a model that is specific for your inference task. You can get it from o
Convert the Model
--------------------

- If Your model requires conversion, check the `article <https://docs.openvino.ai/2023.3/openvino_docs_../../get-started_../../get-started_demos.html>`__ for information how to do it.
+ If Your model requires conversion, check the `article <https://docs.openvino.ai/2024/learn-openvino/openvino-samples/get-started-demos.html>`__ for information how to do it.

.. _download-media:

@@ -211,6 +211,6 @@ Additional Resources
- :doc:`Get Started with Samples <get-started-demos>`
- :doc:`Using OpenVINO Samples <../openvino-samples>`
- :doc:`Convert a Model <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>`
- - `API Reference <https://docs.openvino.ai/2023.2/api/api_reference.html>`__
+ - `API Reference <https://docs.openvino.ai/2024/api/api_reference.html>`__
- `Hello NV12 Input Classification C++ Sample on Github <https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/hello_nv12_input_classification/README.md>`__
- `Hello NV12 Input Classification C Sample on Github <https://github.com/openvinotoolkit/openvino/blob/master/samples/c/hello_nv12_input_classification/README.md>`__
@@ -11,7 +11,7 @@ Sync Benchmark Sample
This sample demonstrates how to estimate performance of a model using Synchronous
Inference Request API. It makes sense to use synchronous inference only in latency
oriented scenarios. Models with static input shapes are supported. Unlike
- `demos <https://docs.openvino.ai/nightly/omz_demos.html>`__ this sample does not have other configurable command-line
+ `demos <https://docs.openvino.ai/2024/omz_demos.html>`__ this sample does not have other configurable command-line
arguments. Feel free to modify sample's source code to try out different options.
Before using the sample, refer to the following requirements:
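The latency measurement that the sync benchmark sample automates can be sketched with plain Python timing around a stand-in synchronous call (a minimal illustration only — `infer_sync` is a hypothetical stub, not an OpenVINO API):

```python
import time
from statistics import median

def infer_sync(request):
    """Stand-in for a synchronous inference request on a static-shape model."""
    time.sleep(0.001)  # pretend the device takes ~1 ms per request
    return request

def measure_latency(num_requests=20):
    """Run requests one at a time; report the median wall-clock latency in ms."""
    latencies = []
    for i in range(num_requests):
        start = time.perf_counter()
        infer_sync(i)  # the next request starts only after this one finishes
        latencies.append((time.perf_counter() - start) * 1000)
    return median(latencies)

print(f"median latency: {measure_latency():.2f} ms")
```

Because each request runs to completion before the next begins, the measured time reflects per-request latency directly — which is why synchronous mode only makes sense in latency-oriented scenarios.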

@@ -9,7 +9,7 @@ Throughput Benchmark Sample


This sample demonstrates how to estimate performance of a model using Asynchronous
- Inference Request API in throughput mode. Unlike `demos <https://docs.openvino.ai/nightly/omz_demos.html>`__ this sample
+ Inference Request API in throughput mode. Unlike `demos <https://docs.openvino.ai/2024/omz_demos.html>`__ this sample
does not have other configurable command-line arguments. Feel free to modify sample's
source code to try out different options.

@@ -62,7 +62,7 @@ Below are example-codes for the regular and async-based approaches to compare:


The technique can be generalized to any available parallel slack. For example, you can do inference and simultaneously encode the resulting or previous frames or run further inference, like emotion detection on top of the face detection results.
- Refer to the `Object Detection C++ Demo <https://docs.openvino.ai/2023.3/omz_demos_object_detection_demo_cpp.html>`__ , `Object Detection Python Demo <https://docs.openvino.ai/2023.3/omz_demos_object_detection_demo_python.html>`__ (latency-oriented Async API showcase) and :doc:`Benchmark App Sample <../../../learn-openvino/openvino-samples/benchmark-tool>` for complete examples of the Async API in action.
+ Refer to the `Object Detection C++ Demo <https://docs.openvino.ai/2024/omz_demos_object_detection_demo_cpp.html>`__ , `Object Detection Python Demo <https://docs.openvino.ai/2024/omz_demos_object_detection_demo_python.html>`__ (latency-oriented Async API showcase) and :doc:`Benchmark App Sample <../../../learn-openvino/openvino-samples/benchmark-tool>` for complete examples of the Async API in action.
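The parallel-slack idea in the passage above — start inference for the next frame while encoding the result of the previous one — can be sketched without any OpenVINO dependency. In this illustrative stub, `infer` and `encode` are hypothetical stand-ins for an async inference request and a video encoder, and a two-worker thread pool provides the overlap:

```python
from concurrent.futures import ThreadPoolExecutor

def infer(frame):
    # Stand-in for an asynchronous inference request; returns a fake detection.
    return {"frame": frame, "boxes": [(0, 0, 10, 10)]}

def encode(result):
    # Stand-in for encoding a finished frame while the next inference runs.
    return f"encoded-{result['frame']}"

def pipeline(frames):
    """Overlap inference of frame N+1 with encoding of frame N."""
    encoded = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = pool.submit(infer, frames[0])
        for frame in frames[1:]:
            next_pending = pool.submit(infer, frame)  # kick off next inference
            encoded.append(encode(pending.result()))  # encode while it runs
            pending = next_pending
        encoded.append(encode(pending.result()))
    return encoded

print(pipeline([1, 2, 3]))  # prints ['encoded-1', 'encoded-2', 'encoded-3']
```

The same shape generalizes to any available slack, e.g. running emotion detection on face-detection results while the next frame's inference is in flight.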

.. note::

6 changes: 3 additions & 3 deletions docs/dev/pypi_publish/pypi-openvino-dev.md
@@ -116,10 +116,10 @@ For example, to install and configure the components for working with TensorFlow
**In addition, the openvino-dev package installs the following components by default:**

- | Component | Console Script | Description |
+ | Component | Console Script | Description |
|------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
- | [Legacy Model conversion API](https://docs.openvino.ai/nightly/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model conversion API** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by OpenVINO components. <br>Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, PaddlePaddle\*, and ONNX\*. | |
- | [Model Downloader and other Open Model Zoo tools](https://docs.openvino.ai/nightly/omz_tools_downloader.html)| `omz_downloader` <br> `omz_converter` <br> `omz_quantizer` <br> `omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](@ref omz_models_group_public) and [Intel](@ref omz_models_group_intel)-trained models. These free pre-trained models can be used to speed up the development and production deployment process without training your own models. The tool downloads model files from online sources and, if necessary, patches them to make them more usable with model conversion API. A number of additional tools are also provided to automate the process of working with downloaded models:<br> **Model Converter** is a tool for converting Open Model Zoo models that are stored in an original deep learning framework format into the OpenVINO Intermediate Representation (IR) using model conversion API. <br> **Model Quantizer** is a tool for automatic quantization of full-precision models in the IR format into low-precision versions using the Post-Training Optimization Tool. <br> **Model Information Dumper** is a helper utility for dumping information about the models to a stable, machine-readable format. |
+ | [Legacy Model conversion API](https://docs.openvino.ai/2024/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api.html) | `mo` |**Model conversion API** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by OpenVINO components. <br>Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, PaddlePaddle\*, and ONNX\*. | |
+ | [Model Downloader and other Open Model Zoo tools](https://docs.openvino.ai/2024/omz_tools_downloader.html)| `omz_downloader` <br> `omz_converter` <br> `omz_quantizer` <br> `omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](@ref omz_models_group_public) and [Intel](@ref omz_models_group_intel)-trained models. These free pre-trained models can be used to speed up the development and production deployment process without training your own models. The tool downloads model files from online sources and, if necessary, patches them to make them more usable with model conversion API. A number of additional tools are also provided to automate the process of working with downloaded models:<br> **Model Converter** is a tool for converting Open Model Zoo models that are stored in an original deep learning framework format into the OpenVINO Intermediate Representation (IR) using model conversion API. <br> **Model Quantizer** is a tool for automatic quantization of full-precision models in the IR format into low-precision versions using the Post-Training Optimization Tool. <br> **Model Information Dumper** is a helper utility for dumping information about the models to a stable, machine-readable format. |

## Troubleshooting

2 changes: 1 addition & 1 deletion docs/home.rst
@@ -1,5 +1,5 @@
============================
- OpenVINO 2024.1
+ OpenVINO 2024.2
============================

.. meta::
2 changes: 1 addition & 1 deletion docs/sphinx_setup/conf.py
@@ -18,7 +18,7 @@
author = 'Intel®'

language = 'en'
- version_name = 'nightly'
+ version_name = '2024'

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
10 changes: 4 additions & 6 deletions samples/cpp/benchmark/sync_benchmark/README.md
@@ -8,14 +8,12 @@ For more detailed information on how this sample works, check the dedicated [art

| Options | Values |
| -------------------------------| -------------------------------------------------------------------------------------------------------------------------|
- | Validated Models | [alexnet](https://docs.openvino.ai/nightly/omz_models_model_alexnet.html), |
- | | [googlenet-v1](https://docs.openvino.ai/nightly/omz_models_model_googlenet_v1.html), |
- | | [yolo-v3-tf](https://docs.openvino.ai/nightly/omz_models_model_yolo_v3_tf.html), |
- | | [face-detection-0200](https://docs.openvino.ai/nightly/omz_models_model_face_detection_0200.html) |
+ | Validated Models | [yolo-v3-tf](https://docs.openvino.ai/2024/omz_models_model_yolo_v3_tf.html), |
+ | | [face-detection-0200](https://docs.openvino.ai/2024/omz_models_model_face_detection_0200.html) |
| Model Format | OpenVINO™ toolkit Intermediate Representation |
| | (\*.xml + \*.bin), ONNX (\*.onnx) |
- | Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
- | Other language realization | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/sync-benchmark.html) |
+ | Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
+ | Other language realization | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/sync-benchmark.html) |

The following C++ API is used in the application:

10 changes: 4 additions & 6 deletions samples/cpp/benchmark/throughput_benchmark/README.md
@@ -10,14 +10,12 @@ For more detailed information on how this sample works, check the dedicated [art

| Options | Values |
| ----------------------------| -------------------------------------------------------------------------------------------------------------------------------|
- | Validated Models | [alexnet](https://docs.openvino.ai/nightly/omz_models_model_alexnet.html), |
- | | [googlenet-v1](https://docs.openvino.ai/nightly/omz_models_model_googlenet_v1.html), |
- | | [yolo-v3-tf](https://docs.openvino.ai/nightly/omz_models_model_yolo_v3_tf.html), |
- | | [face-detection-](https://docs.openvino.ai/nightly/omz_models_model_face_detection_0200.html) |
+ | Validated Models | [yolo-v3-tf](https://docs.openvino.ai/2024/omz_models_model_yolo_v3_tf.html), |
+ | | [face-detection-](https://docs.openvino.ai/2024/omz_models_model_face_detection_0200.html) |
| Model Format | OpenVINO™ toolkit Intermediate Representation |
| | (\*.xml + \*.bin), ONNX (\*.onnx) |
- | Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
- | Other language realization | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/throughput-benchmark.html) |
+ | Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
+ | Other language realization | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/throughput-benchmark.html) |

The following C++ API is used in the application:

6 changes: 3 additions & 3 deletions samples/cpp/hello_reshape_ssd/README.md
@@ -9,10 +9,10 @@ For more detailed information on how this sample works, check the dedicated [art

| Options | Values |
| ----------------------------| -----------------------------------------------------------------------------------------------------------------------------------------|
- | Validated Models | [person-detection-retail-0013](https://docs.openvino.ai/nightly/omz_models_model_person_detection_retail_0013.html) |
+ | Validated Models | [person-detection-retail-0013](https://docs.openvino.ai/2024/omz_models_model_person_detection_retail_0013.html) |
| Model Format | OpenVINO™ toolkit Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx) |
- | Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
- | Other language realization | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-reshape-ssd.html) |
+ | Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
+ | Other language realization | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-reshape-ssd.html) |

The following C++ API is used in the application:

1 change: 0 additions & 1 deletion src/frontends/tensorflow/README.md
@@ -31,7 +31,6 @@ flowchart BT
```

The MO tool and model conversion API now use the TensorFlow Frontend as the default path for conversion to IR.
- Known limitations of TF FE are described [here](https://docs.openvino.ai/nightly/openvino_docs_MO_DG_TensorFlow_Frontend.html).

## Key contacts
