Security documentation update #3183

Merged: 41 commits, Jul 3, 2024

Commits
9af48d0
adding security update
udaij12 Jun 7, 2024
c20abda
adding security update
udaij12 Jun 7, 2024
33afd93
doc updates
udaij12 Jun 13, 2024
1670681
Merge branch 'master' into doc_update
udaij12 Jun 13, 2024
91af975
changes to model mode
udaij12 Jun 13, 2024
4565b13
Merge branch 'master' into doc_update
udaij12 Jun 14, 2024
395990c
adding more update
udaij12 Jun 14, 2024
43b25b3
adding token auth paragraph
udaij12 Jun 14, 2024
12ab826
doc changes and logging addition
udaij12 Jun 25, 2024
85590c9
doc changes and logging addition
udaij12 Jun 25, 2024
e2859bb
fix formatting
udaij12 Jun 25, 2024
938f21a
Merge branch 'master' into doc_update
mreso Jun 26, 2024
649f4dd
changing flag name and adding env for model api
udaij12 Jun 26, 2024
e97dff9
Merge branch 'doc_update' of https://github.com/pytorch/serve into do…
udaij12 Jun 26, 2024
e432115
flag fixes
udaij12 Jun 26, 2024
e5940be
fixing doc
udaij12 Jun 26, 2024
8715a0f
changing misaligned name
udaij12 Jun 26, 2024
67c6df6
change to variable name
udaij12 Jun 26, 2024
d966144
change name
udaij12 Jun 26, 2024
bf2f9b9
changing config name
udaij12 Jun 27, 2024
fe8b7fe
Merge branch 'master' into doc_update
udaij12 Jun 27, 2024
573f646
Merge branch 'master' into doc_update
udaij12 Jun 27, 2024
bdcdaae
spellcheck test
udaij12 Jun 27, 2024
c2f29a1
testing docker change
udaij12 Jun 28, 2024
e5af37c
test
udaij12 Jun 28, 2024
1c055ab
Merge branch 'master' into doc_update
udaij12 Jun 28, 2024
57ea245
fixing test_util
udaij12 Jun 28, 2024
7b9ffbd
changes to llm
udaij12 Jun 28, 2024
f49a12b
adding model api flag
udaij12 Jun 28, 2024
5355a9f
fixes to llm update
udaij12 Jul 2, 2024
dbfc91e
Merge branch 'master' into doc_update
udaij12 Jul 2, 2024
2762be2
launcher fix
udaij12 Jul 2, 2024
62078c5
testing token
udaij12 Jul 2, 2024
7dbce37
testing token
udaij12 Jul 2, 2024
b445f8f
fixing docker
udaij12 Jul 2, 2024
e685330
change branch name'
udaij12 Jul 2, 2024
37dfcfc
change branch name'
udaij12 Jul 2, 2024
595e5e8
final changes
udaij12 Jul 2, 2024
5a0a12c
Doc changes
udaij12 Jul 2, 2024
305fb2e
adding key name
udaij12 Jul 2, 2024
9560adc
Merge branch 'doc_update' of https://github.com/pytorch/serve into do…
udaij12 Jul 2, 2024
Files changed (showing changes from 3 commits)
4 changes: 2 additions & 2 deletions benchmarks/utils/system_under_test.py
@@ -116,7 +116,7 @@ def start(self):
click.secho("*Starting local Torchserve instance...", fg="green")

ts_cmd = (
f"torchserve --start --model-store {self.execution_params['tmp_dir']}/model_store --enabled_model_api --disable-token-auth "
f"torchserve --start --model-store {self.execution_params['tmp_dir']}/model_store --enable-model-api --disable-token-auth "
f"--workflow-store {self.execution_params['tmp_dir']}/wf_store "
f"--ts-config {self.execution_params['tmp_dir']}/benchmark/conf/{self.execution_params['config_properties_name']} "
f" > {self.execution_params['tmp_dir']}/benchmark/logs/model_metrics.log"
@@ -195,7 +195,7 @@ def start(self):
f"docker run {self.execution_params['docker_runtime']} {backend_profiling} --name ts --user root -p "
f"127.0.0.1:{inference_port}:{inference_port} -p 127.0.0.1:{management_port}:{management_port} "
f"-v {self.execution_params['tmp_dir']}:/tmp {enable_gpu} -itd {docker_image} "
f'"torchserve --start --model-store /home/model-server/model-store --enabled_model_api --disable-token-auth '
f'"torchserve --start --model-store /home/model-server/model-store --enable-model-api --disable-token-auth '
f"\--workflow-store /home/model-server/wf-store "
f"--ts-config /tmp/benchmark/conf/{self.execution_params['config_properties_name']} > "
f'/tmp/benchmark/logs/model_metrics.log"'
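For readability, roughly the shell command the updated harness assembles for a local run (a sketch only; `tmp_dir` and the config file name are illustrative, not taken from this PR):

```python
# Sketch of the command string built by the benchmark harness above,
# with the renamed --enable-model-api flag (illustrative paths).
tmp_dir = "/tmp/ts_benchmark"
ts_cmd = (
    f"torchserve --start --model-store {tmp_dir}/model_store "
    "--enable-model-api --disable-token-auth "
    f"--workflow-store {tmp_dir}/wf_store "
    f"--ts-config {tmp_dir}/benchmark/conf/config.properties "
    f"> {tmp_dir}/benchmark/logs/model_metrics.log"
)
print(ts_cmd)
```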
6 changes: 3 additions & 3 deletions docs/management_api.md
@@ -12,7 +12,7 @@ TorchServe provides the following APIs that allows you to manage models at runti

The Management API listens on port 8081 and is only accessible from localhost by default. To change the default setting, see [TorchServe Configuration](./configuration.md).

Management API for registering and deleting models is disabled by default. Add `--enabled_model_api` to command line when running TorchServe to enable the use of these APIs. For more details and ways to enable see [Model API control](https://github.com/pytorch/serve/blob/master/docs/model_api_control.md)
Management API for registering and deleting models is disabled by default. Add `--enable-model-api` to command line when running TorchServe to enable the use of these APIs. For more details and ways to enable see [Model API control](https://github.com/pytorch/serve/blob/master/docs/model_api_control.md)

For all Management API requests, TorchServe requires the correct Management token to be included or token authorization must be disabled. For more details see [token authorization documentation](./token_authorization_api.md)

@@ -24,7 +24,7 @@ Alternatively, if you want to use KServe, TorchServe supports both v1 and v2 API

This API follows the [ManagementAPIsService.RegisterModel](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/resources/proto/management.proto) gRPC API.

To use this API after TorchServe starts, model API control has to be enabled. Add `--enabled_model_api` to command line when running TorchServe to enable the use of this API. For more details see [model API control](./model_api_control.md)
To use this API after TorchServe starts, model API control has to be enabled. Add `--enable-model-api` to command line when running TorchServe to enable the use of this API. For more details see [model API control](./model_api_control.md)

`POST /models`

@@ -448,7 +448,7 @@ print(customizedMetadata)

This API follows the [ManagementAPIsService.UnregisterModel](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/resources/proto/management.proto) gRPC API. It returns the status of a model in the ModelServer.

To use this API after TorchServe starts, model API control has to be enabled. Add `--enabled_model_api` to command line when running TorchServe to enable the use of this API. For more details see [model API control](./model_api_control.md)
To use this API after TorchServe starts, model API control has to be enabled. Add `--enable-model-api` to command line when running TorchServe to enable the use of this API. For more details see [model API control](./model_api_control.md)

`DELETE /models/{model_name}/{version}`

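For orientation, a minimal sketch of exercising the register and unregister endpoints described above once TorchServe is started with `--enable-model-api` (it assumes token authorization was disabled with `--disable-token-auth` and reuses the squeezenet1_1 archive URL from the docs; the snippet itself is not part of the PR's changes):

```python
import requests

MANAGEMENT = "http://localhost:8081"  # default Management API address

# Register a model; this only succeeds when TorchServe was started with
# --enable-model-api (the token header is omitted because this sketch
# assumes --disable-token-auth).
resp = requests.post(
    f"{MANAGEMENT}/models",
    params={"url": "https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar"},
)
print(resp.status_code, resp.text)

# Unregister it again via the DELETE endpoint shown above.
resp = requests.delete(f"{MANAGEMENT}/models/squeezenet1_1")
print(resp.status_code, resp.text)
```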
16 changes: 8 additions & 8 deletions docs/model_api_control.md
@@ -6,20 +6,20 @@ TorchServe disables the ability to register and delete models using API calls on

## Two ways to set Model Control
1. Global environment variable: use `TS_ENABLE_MODEL_API` and set to `true` to enable and `false` to disable model API use. Note that `enable_envvars_config=true` must be set in config.properties for global environment variables to be used
2. Add `--enabled_model_api` to command line when running TorchServe to switch from disabled to enabled. Command line cannot be used to disabled, can only be used to enabled
3. Add `enabled_model_api=false` or `enabled_model_api=true` to config.properties file
* `enabled_model_api=false` is default and prevents users from registering or deleting models once TorchServe is running
* `enabled_model_api=true` is not default and allows users to register and delete models using the TorchServe model load APIs
2. Add `--enable-model-api` to command line when running TorchServe to switch from disabled to enabled. Command line cannot be used to disabled, can only be used to enabled
3. Add `enable-model-api=false` or `enable-model-api=true` to config.properties file
* `enable-model-api=false` is default and prevents users from registering or deleting models once TorchServe is running
* `enable-model-api=true` is not default and allows users to register and delete models using the TorchServe model load APIs

Priority follows the following [TorchServer standard](https://github.com/pytorch/serve/blob/c74a29e8144bc12b84196775076b0e8cf3c5a6fc/docs/configuration.md#advanced-configuration)
* Example 1:
* Config file: `enabled_model_api=false`
* Config file: `enable-model-api=false`

cmd line: `torchserve --start --ncs --model-store model_store --enabled_model_api`
cmd line: `torchserve --start --ncs --model-store model_store --enable-model-api`

Result: Model api mode enabled
* Example 2:
* Config file: `enabled_model_api=true`
* Config file: `enable-model-api=true`

cmd line: `torchserve --start --ncs --model-store model_store`

@@ -47,7 +47,7 @@ Setting model API to `enabled` allows users to load and unload models using the

### Example using cmd line to set mode to enabled
```
ubuntu@ip-172-31-11-32:~/serve$ torchserve --start --ncs --model-store model_store --models resnet-18=resnet-18.mar --ts-config config.properties --enabled_model_api
ubuntu@ip-172-31-11-32:~/serve$ torchserve --start --ncs --model-store model_store --models resnet-18=resnet-18.mar --ts-config config.properties --enable-model-api

ubuntu@ip-172-31-11-32:~/serve$ curl -X POST "http://localhost:8081/models?url=https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar"
{
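As a rough illustration of the environment-variable route described in this file (a sketch only; it assumes `torchserve` is on PATH and that `enable_envvars_config=true` is already present in config.properties):

```python
import os
import subprocess

# Sketch: enable the model API through the global TS_ENABLE_MODEL_API
# environment variable instead of the --enable-model-api flag.
env = os.environ.copy()
env["TS_ENABLE_MODEL_API"] = "true"

subprocess.run(
    ["torchserve", "--start", "--ncs", "--model-store", "model_store"],
    env=env,
    check=True,
)
```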
@@ -125,7 +125,7 @@ public final class ConfigManager {
private static final String TS_HEADER_KEY_SEQUENCE_START = "ts_header_key_sequence_start";
private static final String TS_HEADER_KEY_SEQUENCE_END = "ts_header_key_sequence_end";
private static final String TS_DISABLE_TOKEN_AUTHORIZATION = "disable_token_authorization";
private static final String TS_ENABLE_MODEL_API = "enabled_model_api";
private static final String TS_ENABLE_MODEL_API = "enable-model-api";

// Configuration which are not documented or enabled through environment variables
private static final String USE_NATIVE_IO = "use_native_io";
2 changes: 1 addition & 1 deletion frontend/server/src/test/resources/config.properties
@@ -47,4 +47,4 @@ models={\
# enable_metrics_api=false
workflow_store=../archive/src/test/resources/workflows
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
@@ -34,4 +34,4 @@ enable_envvars_config=true
workflow_store=../archive/src/test/resources/workflows
metrics_config=src/test/resources/metrics_default.yaml
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
@@ -53,4 +53,4 @@ models={\
}\
}
metrics_config=src/test/resources/metrics_default.yaml
enabled_model_api=true
enable-model-api=true
@@ -53,4 +53,4 @@ models={\
}\
}
metrics_config=src/test/resources/metrics_default.yaml
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion frontend/server/src/test/resources/snapshots/snapshot1.cfg
@@ -20,4 +20,4 @@ metrics_address=https\://127.0.0.1\:8445
workflow_store=../archive/src/test/resources/workflows
metrics_config=src/test/resources/metrics_default.yaml
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion frontend/server/src/test/resources/snapshots/snapshot2.cfg
@@ -20,4 +20,4 @@ metrics_address=https\://127.0.0.1\:8445
workflow_store=../archive/src/test/resources/workflows
metrics_config=src/test/resources/metrics_default.yaml
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion frontend/server/src/test/resources/snapshots/snapshot3.cfg
@@ -20,4 +20,4 @@ metrics_address=https\://127.0.0.1\:8445
workflow_store=../archive/src/test/resources/workflows
metrics_config=src/test/resources/metrics_default.yaml
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion frontend/server/src/test/resources/snapshots/snapshot4.cfg
@@ -20,4 +20,4 @@ metrics_address=https\://127.0.0.1\:8445
workflow_store=../archive/src/test/resources/workflows
metrics_config=src/test/resources/metrics_default.yaml
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion frontend/server/src/test/resources/snapshots/snapshot5.cfg
@@ -20,4 +20,4 @@ metrics_address=https\://127.0.0.1\:8445
workflow_store=../archive/src/test/resources/workflows
metrics_config=src/test/resources/metrics_default.yaml
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion frontend/server/src/test/resources/snapshots/snapshot6.cfg
@@ -20,4 +20,4 @@ metrics_address=https\://127.0.0.1\:8445
workflow_store=../archive/src/test/resources/workflows
metrics_config=src/test/resources/metrics_default.yaml
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion frontend/server/src/test/resources/snapshots/snapshot7.cfg
@@ -20,4 +20,4 @@ metrics_address=https\://127.0.0.1\:8445
workflow_store=../archive/src/test/resources/workflows
metrics_config=src/test/resources/metrics_default.yaml
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion frontend/server/src/test/resources/snapshots/snapshot8.cfg
@@ -20,4 +20,4 @@ metrics_address=https\://127.0.0.1\:8445
workflow_store=../archive/src/test/resources/workflows
metrics_config=src/test/resources/metrics_default.yaml
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion frontend/server/src/test/resources/snapshots/snapshot9.cfg
@@ -20,4 +20,4 @@ metrics_address=https\://127.0.0.1\:8445
workflow_store=../archive/src/test/resources/workflows
metrics_config=src/test/resources/metrics_default.yaml
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion kubernetes/kserve/tests/configs/mnist_v1_cpu.yaml
@@ -16,4 +16,4 @@ spec:
memory: 256Mi
args:
- --disable-token-auth
- --enabled_model_api
- --enable-model-api
2 changes: 1 addition & 1 deletion kubernetes/kserve/tests/configs/mnist_v2_cpu.yaml
@@ -17,4 +17,4 @@ spec:
memory: 256Mi
args:
- --disable-token-auth
- --enabled_model_api
- --enable-model-api
2 changes: 1 addition & 1 deletion kubernetes/tests/scripts/test_mnist.sh
@@ -22,7 +22,7 @@ function start_minikube_cluster() {

function build_docker_image() {
eval $(minikube docker-env)
echo "enabled_model_api=true" >> $ROOT_DIR/$EXAMPLE_DIR/../docker/config.properties
echo "enable-model-api=true" >> $ROOT_DIR/$EXAMPLE_DIR/../docker/config.properties
echo "disable_token_authorization=true" >> $ROOT_DIR/$EXAMPLE_DIR/../docker/config.properties
docker system prune -f
docker build -t $DOCKER_IMAGE --file $ROOT_DIR/$EXAMPLE_DIR/../docker/Dockerfile --build-arg EXAMPLE_DIR="${EXAMPLE_DIR}" .
4 changes: 2 additions & 2 deletions test/pytest/test_model_control_mode.py
@@ -23,7 +23,7 @@ def setup_torchserve():
Path(test_utils.MODEL_STORE).mkdir(parents=True, exist_ok=True)

test_utils.start_torchserve(
no_config_snapshots=True, models="mnist=mnist.mar", model_api_enabled=False
no_config_snapshots=True, models="mnist=mnist.mar", enable_model_api=False
)

yield "test"
@@ -144,7 +144,7 @@ def test_priority_env(monkeypatch):
test_utils.start_torchserve(
snapshot_file=config_file_priority,
no_config_snapshots=True,
model_api_enabled=False,
enable_model_api=False,
)

params = (
4 changes: 2 additions & 2 deletions test/pytest/test_utils.py
@@ -61,7 +61,7 @@ def start_torchserve(
plugin_folder=None,
disable_token=True,
models=None,
model_api_enabled=True,
enable_model_api=True,
):
stop_torchserve()
crate_mar_file_table()
@@ -80,7 +80,7 @@
cmd.extend(["--disable-token-auth"])
if models:
cmd.extend(["--models", models])
if model_api_enabled:
if enable_model_api:
cmd.extend(["--enable-model-api"])
print(cmd)

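As a hypothetical usage sketch of the renamed keyword argument in a test (model name and the elided assertions are illustrative, not part of this PR):

```python
import test_utils

# Hypothetical sketch: start TorchServe from a test with the model API
# explicitly disabled, using the renamed enable_model_api argument.
test_utils.start_torchserve(
    no_config_snapshots=True,
    models="mnist=mnist.mar",
    enable_model_api=False,
)
try:
    pass  # register/unregister calls against port 8081 should be rejected here
finally:
    test_utils.stop_torchserve()
```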
2 changes: 1 addition & 1 deletion test/resources/config.properties
@@ -5,4 +5,4 @@ private_key_file=resources/key.pem
certificate_file=resources/certs.pem
install_py_dep_per_model=true
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion test/resources/config_kf.properties
@@ -5,4 +5,4 @@ private_key_file=resources/key.pem
certificate_file=resources/certs.pem
service_envelope=kserve
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion test/resources/config_kfv2.properties
@@ -5,4 +5,4 @@ private_key_file=resources/key.pem
certificate_file=resources/certs.pem
service_envelope=kservev2
disable_token_authorization=true
enabled_model_api=true
enable-model-api=true
2 changes: 1 addition & 1 deletion test/resources/config_model_mode.properties
@@ -1,2 +1,2 @@
enabled_model_api=false
enable-model-api=false
enable_envvars_config=true
14 changes: 7 additions & 7 deletions ts_scripts/api_utils.py
@@ -118,7 +118,7 @@ def trigger_management_tests():
"""Return exit code of newman execution of management collection"""
config_file = open("config.properties", "w")
config_file.write("disable_token_authorization=true\n")
config_file.write("enabled_model_api=true")
config_file.write("enable-model-api=true")
config_file.close()

ts.start_torchserve(
@@ -141,7 +141,7 @@ def trigger_inference_tests():
config_file = open("config.properties", "w")
config_file.write("metrics_mode=prometheus\n")
config_file.write("disable_token_authorization=true\n")
config_file.write("enabled_model_api=true")
config_file.write("enable-model-api=true")
config_file.close()

ts.start_torchserve(
@@ -209,7 +209,7 @@ def trigger_explanation_tests():
config_file = open("config.properties", "w")
config_file.write("metrics_mode=prometheus\n")
config_file.write("disable_token_authorization=true\n")
config_file.write("enabled_model_api=true")
config_file.write("enable-model-api=true")
config_file.close()

ts.start_torchserve(
@@ -236,7 +236,7 @@ def trigger_incr_timeout_inference_tests():
config_file.write("default_response_timeout=300\n")
config_file.write("metrics_mode=prometheus\n")
config_file.write("disable_token_authorization=true\n")
config_file.write("enabled_model_api=true")
config_file.write("enable-model-api=true")
config_file.close()

ts.start_torchserve(
@@ -279,7 +279,7 @@ def trigger_management_tests_kf():

config_file = open("config.properties", "w")
config_file.write("disable_token_authorization=true\n")
config_file.write("enabled_model_api=true\n")
config_file.write("enable-model-api=true\n")
config_file.write("service_envelope=kserve")
config_file.close()

@@ -306,7 +306,7 @@ def trigger_inference_tests_kf():
config_file.write("service_envelope=kserve\n")
config_file.write("metrics_mode=prometheus\n")
config_file.write("disable_token_authorization=true\n")
config_file.write("enabled_model_api=true\n")
config_file.write("enable-model-api=true\n")
config_file.close()

ts.start_torchserve(
@@ -349,7 +349,7 @@ def trigger_inference_tests_kfv2():
config_file.write("service_envelope=kservev2\n")
config_file.write("metrics_mode=prometheus\n")
config_file.write("disable_token_authorization=true\n")
config_file.write("enabled_model_api=true\n")
config_file.write("enable-model-api=true\n")
config_file.close()

ts.start_torchserve(
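The repeated edits above boil down to writing the renamed key into config.properties before each run; condensed into one sketch (file name and keys exactly as in the diff):

```python
# Condensed sketch of the config these helpers now write before
# calling ts.start_torchserve(...).
with open("config.properties", "w") as config_file:
    config_file.write("disable_token_authorization=true\n")
    config_file.write("enable-model-api=true\n")
```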
4 changes: 2 additions & 2 deletions ts_scripts/tsutils.py
@@ -49,7 +49,7 @@ def start_torchserve(
log_file="",
gen_mar=True,
disable_token=True,
model_api_enabled=True,
enable_model_api=True,
):
if gen_mar:
mg.gen_mar(model_store)
@@ -67,7 +67,7 @@
cmd.append("--disable-token-auth")
if config_file:
cmd.append(f"--ts-config={config_file}")
if model_api_enabled:
if enable_model_api:
cmd.extend(["--enable-model-api"])
if log_file:
print(f"## Console logs redirected to file: {log_file}")