Deploying to gh-pages from @ dc08344 🚀

udaij12 committed Jul 3, 2024
1 parent b824ea3 commit fae01ae

Showing 14 changed files with 114 additions and 46 deletions.
13 changes: 9 additions & 4 deletions README.html
@@ -10,7 +10,7 @@

<meta name="viewport" content="width=device-width, initial-scale=1.0">

<title>TorchServe &mdash; PyTorch/Serve master documentation</title>
<title>❗ANNOUNCEMENT: Security Changes❗ &mdash; PyTorch/Serve master documentation</title>



@@ -381,7 +381,7 @@
</li>


<li>TorchServe</li>
<li>❗ANNOUNCEMENT: Security Changes❗</li>


<li class="pytorch-breadcrumbs-aside">
@@ -417,7 +417,11 @@
<div role="main" class="main-content" itemscope="itemscope" itemtype="http://schema.org/Article">
<article itemprop="articleBody" id="pytorch-article" class="pytorch-article">

<section id="torchserve">
<section id="announcement-security-changes">
<h1>❗ANNOUNCEMENT: Security Changes❗<a class="headerlink" href="#announcement-security-changes" title="Permalink to this heading"></a></h1>
<p>TorchServe now enables token authorization and disables model API control by default. These security features are intended to address the concern of unauthorized API calls and to prevent potentially malicious code from being introduced to the model server. Refer to the following documentation for more information: <a class="reference external" href="https://github.com/pytorch/serve/blob/master/docs/token_authorization_api">Token Authorization</a>, <a class="reference external" href="https://github.com/pytorch/serve/blob/master/docs/model_api_control">Model API control</a>.</p>
</section>
<section id="torchserve">
<h1>TorchServe<a class="headerlink" href="#torchserve" title="Permalink to this heading"></a></h1>
<p>TorchServe is a performant, flexible and easy to use tool for serving PyTorch eager mode and torchscripted models.</p>
<section id="basic-features">
@@ -520,7 +524,8 @@ <h2>Advanced Features<a class="headerlink" href="#advanced-features" title="Perm
<div class="pytorch-right-menu" id="pytorch-right-menu">
<div class="pytorch-side-scroll" id="pytorch-side-scroll-right">
<ul>
<li><a class="reference internal" href="#">TorchServe</a><ul>
<li><a class="reference internal" href="#">❗ANNOUNCEMENT: Security Changes❗</a></li>
<li><a class="reference internal" href="#torchserve">TorchServe</a><ul>
<li><a class="reference internal" href="#basic-features">Basic Features</a></li>
<li><a class="reference internal" href="#default-handlers">Default Handlers</a></li>
<li><a class="reference internal" href="#examples">Examples</a></li>
3 changes: 3 additions & 0 deletions _sources/README.md.txt
@@ -1,3 +1,6 @@
# ❗ANNOUNCEMENT: Security Changes❗
TorchServe now enables token authorization and disables model API control by default. These security features are intended to address the concern of unauthorized API calls and to prevent potentially malicious code from being introduced to the model server. Refer to the following documentation for more information: [Token Authorization](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md), [Model API control](https://github.com/pytorch/serve/blob/master/docs/model_api_control.md).
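
A minimal sketch of what these defaults mean at startup (the model-store path is a placeholder; the opt-out flags are described in the linked docs):

```
# Default behavior: token authorization on, model register/delete APIs off
torchserve --start --ncs --model-store model_store

# Alternatively, opt out of both defaults (not recommended for shared deployments)
torchserve --start --ncs --model-store model_store --disable-token-auth --enable-model-api
```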

# TorchServe

TorchServe is a performant, flexible and easy to use tool for serving PyTorch eager mode and torchscripted models.
2 changes: 2 additions & 0 deletions _sources/inference_api.md.txt
@@ -2,6 +2,8 @@

Inference API is listening on port 8080 and only accessible from localhost by default. To change the default setting, see [TorchServe Configuration](configuration.md).

For all Inference API requests, TorchServe requires the correct inference token to be included, or token authorization must be disabled. For more details, see the [token authorization documentation](./token_authorization_api.md).
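
A hedged sketch of an inference call under this default, assuming a registered model named `resnet-18` and a local input file (both placeholders); the token value comes from the key file TorchServe generates at startup:

```
# Include the inference token in the Authorization header
curl http://localhost:8080/predictions/resnet-18 -T kitten.jpg \
  -H "Authorization: Bearer <inference-token>"
```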

The TorchServe server supports the following APIs:

* [API Description](#api-description) - Gets a list of available APIs and options
27 changes: 27 additions & 0 deletions _sources/management_api.md.txt
@@ -8,9 +8,14 @@ TorchServe provides the following APIs that allows you to manage models at runti
4. [Unregister a model](#unregister-a-model)
5. [List registered models](#list-models)
6. [Set default version of a model](#set-default-version)
7. [Refresh tokens for token authorization](#token-authorization-api)

The Management API listens on port 8081 and is only accessible from localhost by default. To change the default setting, see [TorchServe Configuration](./configuration.md).

The Management API for registering and deleting models is disabled by default. Add `--enable-model-api` to the command line when running TorchServe to enable the use of these APIs. For more details and other ways to enable them, see [Model API control](https://github.com/pytorch/serve/blob/master/docs/model_api_control.md).

For all Management API requests, TorchServe requires the correct management token to be included, or token authorization must be disabled. For more details, see the [token authorization documentation](./token_authorization_api.md).
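
As an illustrative sketch (the model-store path and token value are placeholders), enabling the model APIs at startup and issuing an authorized management call might look like:

```
# Start with the register/delete model APIs enabled
torchserve --start --ncs --model-store model_store --enable-model-api

# Management calls carry the management token from the generated key file
curl http://localhost:8081/models -H "Authorization: Bearer <management-token>"
```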

Similar to the [Inference API](inference_api.md), the Management API provides an [API description](#api-description) that describes the management APIs using the OpenAPI 3.0 specification.

Alternatively, if you want to use KServe, TorchServe supports both v1 and v2 API. For more details please look into this [kserve documentation](https://github.com/pytorch/serve/tree/master/kubernetes/kserve)
@@ -19,6 +24,8 @@ Alternatively, if you want to use KServe, TorchServe supports both v1 and v2 API

This API follows the [ManagementAPIsService.RegisterModel](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/resources/proto/management.proto) gRPC API.

To use this API after TorchServe starts, model API control has to be enabled. Add `--enable-model-api` to the command line when starting TorchServe to enable the use of this API. For more details, see [model API control](./model_api_control.md).
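
A hedged sketch of a register call under the new defaults (the archive URL is the one used elsewhere in these docs; the token placeholder stands in for the management token from the generated key file):

```
# Requires TorchServe to have been started with --enable-model-api
curl -X POST "http://localhost:8081/models?url=https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar" \
  -H "Authorization: Bearer <management-token>"
```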

`POST /models`

* `url` - Model archive download url. Supports the following locations:
@@ -441,6 +448,8 @@ print(customizedMetadata)

This API follows the [ManagementAPIsService.UnregisterModel](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/resources/proto/management.proto) gRPC API. It returns the status of a model in the ModelServer.

To use this API after TorchServe starts, model API control has to be enabled. Add `--enable-model-api` to the command line when starting TorchServe to enable the use of this API. For more details, see [model API control](./model_api_control.md).

`DELETE /models/{model_name}/{version}`

Use the Unregister Model API to free up system resources by unregistering a specific version of a model from TorchServe:
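
A hedged sketch of such a call (the model name and version are placeholders; the management token comes from the generated key file):

```
# Unregister version 1.0 of the "noop" model; requires --enable-model-api at startup
curl -X DELETE http://localhost:8081/models/noop/1.0 \
  -H "Authorization: Bearer <management-token>"
```
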
@@ -522,3 +531,21 @@ curl -v -X PUT http://localhost:8081/models/noop/2.0/set-default
```

The output is in OpenAPI 3.0.1 JSON format. You can use it to generate client code; see [swagger codegen](https://swagger.io/swagger-codegen/) for details.

## Token Authorization API

TorchServe now enforces token authorization by default. Check the following documentation for more information: [Token Authorization](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md).

This API is used to generate a new key to replace either the management or the inference key.

Management Example:
```
curl localhost:8081/token?type=management -H "Authorization: Bearer {API Token}"
```
This will replace the current management key in the key_file with a new one and update the expiration time.

Inference example:
```
curl localhost:8081/token?type=inference -H "Authorization: Bearer {API Token}"
```
This will replace the current inference key in the key_file with a new one and update the expiration time.
@@ -1,28 +1,31 @@
# Model Control Mode
# Model API Control

TorchServe now supports model control mode with two settings "none"(default) and "enabled"
TorchServe now disables the model APIs (specifically registering and deleting models) by default. These APIs can be enabled through the command line or the config.properties file.

## Two ways to set Model Control
1. Add `--model-api-enabled` to command line when running TorchServe to switch from none to enabled mode. Command line cannot be used to set mode to none, can only be used to set to enabled
2. Add `model_api_enabled=false` or `model_api_enabled=true` to config.properties file
* `model_api_enabled=false` is default and prevents users from registering or deleting models once TorchServe is running
* `model_api_enabled=true` is not default and allows users to register and delete models using the TorchServe model load APIs
By default, TorchServe disables the ability to register and delete models through API calls once it is running. This is a security feature that addresses the concern of unintended registration and deletion of models after TorchServe has started, for example a user uploading malicious code to the model server in the form of a model, or deleting a model that is in use. Model API control can be enabled to allow users to register and delete models using the TorchServe model load and delete APIs.

Priority between cmd and config file follows the following [TorchServer standard](https://github.com/pytorch/serve/blob/c74a29e8144bc12b84196775076b0e8cf3c5a6fc/docs/configuration.md#advanced-configuration)
## Three ways to set Model API Control
1. Environment variable: set `TS_ENABLE_MODEL_API` to `true` to enable or `false` to disable model API use (see the sketch after this list). Note that `enable_envvars_config=true` must be set in config.properties for environment-variable configuration to take effect
2. Add `--enable-model-api` to the command line when starting TorchServe to switch from disabled to enabled. The command line can only be used to enable, not to disable
3. Add `enable_model_api=false` or `enable_model_api=true` to the config.properties file
* `enable_model_api=false` is the default and prevents users from registering or deleting models once TorchServe is running
* `enable_model_api=true` is not the default and allows users to register and delete models using the TorchServe model APIs
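
A minimal sketch of the environment-variable route (item 1 above); the config.properties path is an assumption, and the other flags mirror the examples below:

```
# config.properties must opt in to environment-variable configuration
echo "enable_envvars_config=true" >> config.properties

# Enable the register/delete model APIs through the environment
export TS_ENABLE_MODEL_API=true
torchserve --start --ncs --model-store model_store --ts-config config.properties
```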

Priority follows the [TorchServe standard](https://github.com/pytorch/serve/blob/c74a29e8144bc12b84196775076b0e8cf3c5a6fc/docs/configuration.md#advanced-configuration)
* Example 1:
* Config file: `model_api_enabled=false`
* Config file: `enable_model_api=false`

cmd line: `torchserve --start --ncs --model-store model_store --model-api-enabled`
cmd line: `torchserve --start --ncs --model-store model_store --enable-model-api`

Result: Model api mode enabled
* Example 2:
* Config file: `model_api_enabled=true`
* Config file: `enable_model_api=true`

cmd line: `torchserve --start --ncs --model-store model_store`

Result: Mode is enabled (no way to disable api mode through cmd)

## Model Control Mode Default
## Model API Control Default
At startup TorchServe loads only those models specified explicitly with the `--models` command-line option. After startup users will be unable to register or delete models in this mode.

### Example default
@@ -40,11 +43,11 @@ ubuntu@ip-172-31-11-32:~/serve$ curl -X POST "http://localhost:8081/models?url=
```

## Model Control API Enabled
Setting model control to `enabled` allows users to load and unload models using the model load APIs.
Setting model API to `enabled` allows users to load and unload models using the model load APIs.

### Example using cmd line to set mode to enabled
```
ubuntu@ip-172-31-11-32:~/serve$ torchserve --start --ncs --model-store model_store --models resnet-18=resnet-18.mar --ts-config config.properties --model-api-enabled
ubuntu@ip-172-31-11-32:~/serve$ torchserve --start --ncs --model-store model_store --models resnet-18=resnet-18.mar --ts-config config.properties --enable-model-api

ubuntu@ip-172-31-11-32:~/serve$ curl -X POST "http://localhost:8081/models?url=https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar"
{
5 changes: 3 additions & 2 deletions _sources/token_authorization_api.md.txt
@@ -2,18 +2,19 @@

TorchServe now enforces token authorization by default

TorchServe enforces token authorization by default, which requires the correct token to be provided when calling an API. This is a security feature that addresses the concern of unauthorized API calls, for example an unauthorized user trying to access a running TorchServe instance. When enabled (the default), TorchServe creates a key file with the appropriate tokens to be used for API calls. Users can disable this feature so that token authorization is not required for API calls ([how to disable](#how-to-set-and-disable-token-authorization)); however, be warned that this opens up TorchServe to potentially unauthorized API calls.

## How to set and disable Token Authorization
* Global environment variable: set `TS_DISABLE_TOKEN_AUTHORIZATION` to `true` to disable or `false` to enable token authorization (see the sketch after this list). Note that `enable_envvars_config=true` must be set in config.properties for global environment variables to be used
* Command line: Command line can only be used to disable token authorization by adding the `--disable-token` flag.
* Command line: Command line can only be used to disable token authorization by adding the `--disable-token-auth` flag.
* Config properties file: use `disable_token_authorization` and set to `true` to disable and `false` to enable token authorization.
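
A minimal sketch of the environment-variable route (the config.properties path is an assumption):

```
# config.properties must opt in to environment-variable configuration
echo "enable_envvars_config=true" >> config.properties

# Disable token authorization through the environment (opens the APIs to unauthenticated calls)
export TS_DISABLE_TOKEN_AUTHORIZATION=true
torchserve --start --ncs --model-store model_store --ts-config config.properties
```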

Priority between environment variables, the command line, and the config file follows the [TorchServe standard](https://github.com/pytorch/serve/blob/master/docs/configuration.md)

* Example 1:
* Config file: `disable_token_authorization=false`

cmd line: `torchserve --start --ncs --model-store model_store --disable-token`
cmd line: `torchserve --start --ncs --model-store model_store --disable-token-auth`

Result: Token authorization disabled through command line but enabled through config file, resulting in token authorization being disabled. Command line takes precedence
* Example 2:
1 change: 1 addition & 0 deletions apis.html
@@ -460,6 +460,7 @@
<li class="toctree-l2"><a class="reference internal" href="management_api.html#list-models">List models</a></li>
<li class="toctree-l2"><a class="reference internal" href="management_api.html#api-description">API Description</a></li>
<li class="toctree-l2"><a class="reference internal" href="management_api.html#set-default-version">Set Default Version</a></li>
<li class="toctree-l2"><a class="reference internal" href="management_api.html#token-authorization-api">Token Authorization API</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="metrics_api.html">Metrics API</a><ul>
1 change: 1 addition & 0 deletions contents.html
@@ -486,6 +486,7 @@
<li class="toctree-l2"><a class="reference internal" href="management_api.html#list-models">List models</a></li>
<li class="toctree-l2"><a class="reference internal" href="management_api.html#api-description">API Description</a></li>
<li class="toctree-l2"><a class="reference internal" href="management_api.html#set-default-version">Set Default Version</a></li>
<li class="toctree-l2"><a class="reference internal" href="management_api.html#token-authorization-api">Token Authorization API</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="metrics_api.html">Metrics API</a><ul>
1 change: 1 addition & 0 deletions inference_api.html
@@ -424,6 +424,7 @@
<section id="inference-api">
<h1><a class="reference external" href="#inference-api">Inference API</a><a class="headerlink" href="#inference-api" title="Permalink to this heading"></a></h1>
<p>Inference API is listening on port 8080 and only accessible from localhost by default. To change the default setting, see <a class="reference internal" href="configuration.html"><span class="doc">TorchServe Configuration</span></a>.</p>
<p>For all Inference API requests, TorchServe requires the correct inference token to be included, or token authorization must be disabled. For more details, see the <a class="reference internal" href="token_authorization_api.html"><span class="doc">token authorization documentation</span></a>.</p>
<p>The TorchServe server supports the following APIs:</p>
<ul class="simple">
<li><p><a class="reference external" href="#api-description">API Description</a> - Gets a list of available APIs and options</p></li>