[Serving] Add a serving example of TTS #384

Merged · 7 commits · Oct 19, 2022
77 changes: 77 additions & 0 deletions examples/audio/pp-tts/serving/README.md
@@ -0,0 +1,77 @@
([简体中文](./README_cn.md)|English)

# PP-TTS Streaming Text-to-Speech Serving

## Introduction
This demo shows how to start a streaming speech synthesis (TTS) service and how to send requests to it.

The `Server` must be started inside the Docker container, while the `Client` does not have to run inside Docker.

**The `streaming_pp_tts` directory under the current path (`$PWD`) contains the model configuration and code, and must be mounted into the Docker container before use.**

## Usage
### 1. Server
#### 1.1 Docker

```bash
docker pull registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
docker run -dit --net=host --name fastdeploy --shm-size="1g" -v $PWD:/models registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
docker exec -it -u root fastdeploy bash
```

#### 1.2 Installation (inside the Docker container)
```bash
apt-get install build-essential python3-dev libssl-dev libffi-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev libsndfile1 language-pack-zh-hans wget zip
pip3 install paddlespeech
export LC_ALL="zh_CN.UTF-8"
export LANG="zh_CN.UTF-8"
export LANGUAGE="zh_CN:zh:en_US:en"
```

#### 1.3 Download models (inside the Docker container)
```bash
cd /models/streaming_pp_tts/1
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/mb_melgan/mb_melgan_csmsc_onnx_0.2.0.zip
unzip fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
unzip mb_melgan_csmsc_onnx_0.2.0.zip
```
**For convenience, we recommend mapping `$PWD` (the `streaming_pp_tts` directory with its model configuration and code) to the `/models` path inside the container with the `-v` option of `docker run`, as shown in step 1.1. You can use other methods, but whichever you choose, the final model directory and structure inside the container should match the listing below.**

```
/models
└───streaming_pp_tts #Directory of the entire service model
│ config.pbtxt #Configuration file of service model
│ stream_client.py #Code of Client
└───1 #Model version number
│ model.py #Code to start the model
└───fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0 #Model file required by code
└───mb_melgan_csmsc_onnx_0.2.0 #Model file required by code

```
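
The `model.py` shown above is the entry point loaded by the serving backend. For orientation only (this is not the actual PaddleSpeech implementation shipped in `streaming_pp_tts/1/model.py`), a decoupled, streaming Python-backend model generally follows the skeleton below; the tensor names `INPUT_TEXT`/`OUTPUT_AUDIO` and the `synthesize_chunks` helper are placeholders.

```python
# Illustrative skeleton of a decoupled (streaming) Python-backend model.
# Tensor names and synthesize_chunks() are placeholders, not the code that
# actually ships in streaming_pp_tts/1/model.py.
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Load the acoustic model and vocoder (the ONNX files in this directory) here.
        pass

    def execute(self, requests):
        for request in requests:
            text = pb_utils.get_input_tensor_by_name(request, "INPUT_TEXT").as_numpy()
            sender = request.get_response_sender()
            # Stream each audio chunk back as soon as it is synthesized.
            for chunk in self.synthesize_chunks(text):
                out = pb_utils.Tensor("OUTPUT_AUDIO", chunk.astype(np.float32))
                sender.send(pb_utils.InferenceResponse(output_tensors=[out]))
            # Signal that no more responses will follow for this request.
            sender.send(flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL)
        # Decoupled models return None; responses go through the response sender.
        return None

    def synthesize_chunks(self, text):
        # Placeholder generator: yields silent chunks instead of real TTS output.
        for _ in range(3):
            yield np.zeros(1024, dtype=np.float32)

    def finalize(self):
        pass
```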

#### 1.4 Start the server (inside the Docker container)

```bash
fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=streaming_pp_tts
```
Arguments:
- `model-repository`(required): Path of the model repository.
- `model-control-mode`(required): The model loading mode. Use `explicit` for now.
- `load-model`(required): Name of the model to be loaded.
- `http-port`(optional): Port for the HTTP service. Default: `8000`. Not used in this example.
- `grpc-port`(optional): Port for the gRPC service. Default: `8001`.
- `metrics-port`(optional): Port for the metrics service. Default: `8002`. Not used in this example.
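
Before sending synthesis requests, you can check that the server is up and the model is loaded. A minimal readiness check using `tritonclient` (installed in step 2.1 below), assuming the default gRPC address `localhost:8001`:

```python
# Quick readiness check over gRPC; assumes the server listens on localhost:8001.
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")
print("server ready:", client.is_server_ready())
print("model ready:", client.is_model_ready("streaming_pp_tts"))
```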

### 2. Client
#### 2.1 Installation
```bash
pip3 install tritonclient[all]
```

#### 2.2 Send request
```bash
python3 /models/streaming_pp_tts/stream_client.py
```
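
`stream_client.py` ships with the example and already knows the model's tensor names and how to assemble the streamed audio. For orientation only, a stripped-down streaming gRPC client built on `tritonclient` typically looks like the sketch below; the tensor names `INPUT_TEXT`/`OUTPUT_AUDIO` are placeholders, and the real names are defined in `config.pbtxt`.

```python
# Minimal sketch of a streaming gRPC client. Tensor names are placeholders;
# see config.pbtxt and stream_client.py for the real interface.
from functools import partial
import queue

import numpy as np
import tritonclient.grpc as grpcclient


def callback(result_queue, result, error):
    # Each streamed response (one audio chunk) or error arrives here.
    result_queue.put(error if error is not None else result)


def main():
    results = queue.Queue()
    client = grpcclient.InferenceServerClient(url="localhost:8001")

    text = np.array([["Hello, this is a streaming TTS test."]], dtype=object)
    inputs = [grpcclient.InferInput("INPUT_TEXT", text.shape, "BYTES")]
    inputs[0].set_data_from_numpy(text)
    outputs = [grpcclient.InferRequestedOutput("OUTPUT_AUDIO")]

    client.start_stream(callback=partial(callback, results))
    client.async_stream_infer(model_name="streaming_pp_tts", inputs=inputs, outputs=outputs)
    client.stop_stream()  # closes the stream and waits for the response handler to finish

    while not results.empty():
        item = results.get()
        if isinstance(item, Exception):
            raise item
        chunk = item.as_numpy("OUTPUT_AUDIO")
        if chunk is not None:
            print("received audio chunk with shape", chunk.shape)


if __name__ == "__main__":
    main()
```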
76 changes: 76 additions & 0 deletions examples/audio/pp-tts/serving/README_cn.md
@@ -0,0 +1,76 @@
(简体中文|[English](./README.md))

# PP-TTS Streaming Text-to-Speech Serving Deployment

## Introduction
This document describes how to set up a streaming speech synthesis service with FastDeploy.

The server must be started inside the Docker container, while the client does not have to run inside Docker.

**The `streaming_pp_tts` directory under the current path (`$PWD`) contains the model configuration and code (the server loads them to start the service), and must be mounted into the Docker container before use.**

## Usage
### 1. Server
#### 1.1 Docker
```bash
docker pull registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
docker run -dit --net=host --name fastdeploy --shm-size="1g" -v $PWD:/models registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
docker exec -it -u root fastdeploy bash
```

#### 1.2 Installation (inside the Docker container)
```bash
apt-get install build-essential python3-dev libssl-dev libffi-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev libsndfile1 language-pack-zh-hans wget zip
pip3 install paddlespeech
export LC_ALL="zh_CN.UTF-8"
export LANG="zh_CN.UTF-8"
export LANGUAGE="zh_CN:zh:en_US:en"
```

#### 1.3 Download models (inside the Docker container)
```bash
cd /models/streaming_pp_tts/1
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/mb_melgan/mb_melgan_csmsc_onnx_0.2.0.zip
unzip fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
unzip mb_melgan_csmsc_onnx_0.2.0.zip
```
**For convenience, we recommend mapping `$PWD` (the `streaming_pp_tts` directory with its model configuration and code) to the `/models` path inside the container with the `-v` option of `docker run` from step 1.1. You can use other methods, but whichever you choose, the final model directory and structure inside the container should match the listing below.**

```
/models
└───streaming_pp_tts #Directory of the entire service model
│ config.pbtxt #Configuration file of the service model
│ stream_client.py #Client code
└───1 #Model version number, 1 in this case
│ model.py #Code to start the model
└───fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0 #Model files required by the code
└───mb_melgan_csmsc_onnx_0.2.0 #Model files required by the code

```

#### 1.4 Start the server (inside the Docker container)
```bash
fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=streaming_pp_tts
```

Arguments:
- `model-repository`(required): Path where the whole streaming_pp_tts model is stored.
- `model-control-mode`(required): The model loading mode. Use `explicit` for now.
- `load-model`(required): Name of the model to be loaded.
- `http-port`(optional): Port for the HTTP service. Default: `8000`. Not used in this example.
- `grpc-port`(optional): Port for the gRPC service. Default: `8001`.
- `metrics-port`(optional): Port for the metrics service. Default: `8002`. Not used in this example.

### 2. Client
#### 2.1 Installation
```bash
pip3 install tritonclient[all]
```

#### 2.2 Send request
```bash
python3 /models/streaming_pp_tts/stream_client.py
```