updated openvino prediction pipeline
dtrawins committed Feb 5, 2019
1 parent 20df587 commit fde3aa7
Showing 21 changed files with 569 additions and 577 deletions.
Binary file removed examples/models/openvino_imagenet_ensemble/car.png
3 changes: 3 additions & 0 deletions examples/models/openvino_imagenet_ensemble/input_images.txt
@@ -0,0 +1,3 @@
dog.jpeg 248
zebra.jpeg 340
pelican.jpeg 144

43 changes: 0 additions & 43 deletions examples/models/openvino_imagenet_ensemble/pvc.json

This file was deleted.

@@ -1,10 +1,11 @@
 import logging
+import numpy as np
 logger = logging.getLogger(__name__)
 
 class ImageNetCombiner(object):
 
     def aggregate(self, Xs, features_names):
         print("ImageNet Combiner aggregate called")
-        logger.info(Xs)
-        return (Xs[0]+Xs[1])/2.0
+        logger.debug(Xs)
+        return (np.reshape(Xs[0],(1,-1)) + np.reshape(Xs[1], (1,-1)))/2.0

@@ -1,4 +1,4 @@

 build:
-    s2i build -E environment_grpc . seldonio/seldon-core-s2i-python36:0.4-SNAPSHOT seldonio/imagenet_combiner:0.1
+    s2i build -E environment_grpc . seldon_openvino_base:latest seldonio/imagenet_combiner:0.1

@@ -0,0 +1,8 @@
# Combiner component for two models' results


### Building
```bash
s2i build -E environment_grpc . seldon_openvino_base:latest seldonio/imagenet_combiner:0.1

```
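For reference, a minimal sketch of how the combiner behaves when Seldon calls `aggregate` with the two models' outputs (assuming the class above is saved as `ImageNetCombiner.py`; module name and shapes are illustrative, not part of this commit):

```python
import numpy as np
from ImageNetCombiner import ImageNetCombiner  # hypothetical module name for the class above

# Illustrative outputs from the two ensemble members (1000-class score vectors).
model_a_scores = np.random.rand(1, 1000)
model_b_scores = np.random.rand(1, 1000)

combiner = ImageNetCombiner()
# aggregate reshapes each input to (1, -1) and returns their element-wise average.
ensemble_scores = combiner.aggregate([model_a_scores, model_b_scores], features_names=None)
print(ensemble_scores.shape)  # (1, 1000)
```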
@@ -0,0 +1,100 @@
import numpy as np
import logging
import datetime
import os
import sys
from urllib.parse import urlparse
import boto3
from google.cloud import storage
from openvino.inference_engine import IENetwork, IEPlugin


def get_logger(name):
    logger = logging.getLogger(name)
    log_formatter = logging.Formatter("%(asctime)s - %(name)s - "
                                      "%(levelname)s - %(message)s")
    logger.setLevel('DEBUG')

    console_handler = logging.StreamHandler()
    console_handler.setFormatter(log_formatter)
    logger.addHandler(console_handler)

    return logger

logger = get_logger(__name__)


def gs_download_file(path):
    if path is None:
        return None
    parsed_path = urlparse(path)
    bucket_name = parsed_path.netloc
    file_path = parsed_path.path[1:]
    gs_client = storage.Client()
    bucket = gs_client.get_bucket(bucket_name)
    blob = bucket.blob(file_path)
    tmp_path = os.path.join('/tmp', file_path.split(os.sep)[-1])
    blob.download_to_filename(tmp_path)
    return tmp_path


def s3_download_file(path):
    if path is None:
        return None
    s3_endpoint = os.getenv('S3_ENDPOINT')
    s3_client = boto3.client('s3', endpoint_url=s3_endpoint)
    parsed_path = urlparse(path)
    bucket_name = parsed_path.netloc
    file_path = parsed_path.path[1:]
    tmp_path = os.path.join('/tmp', file_path.split(os.sep)[-1])
    s3_transfer = boto3.s3.transfer.S3Transfer(s3_client)
    s3_transfer.download_file(bucket_name, file_path, tmp_path)
    return tmp_path


def GetLocalPath(requested_path):
    parsed_path = urlparse(requested_path)
    if parsed_path.scheme == '':
        return requested_path
    elif parsed_path.scheme == 'gs':
        return gs_download_file(path=requested_path)
    elif parsed_path.scheme == 's3':
        return s3_download_file(path=requested_path)


class Prediction(object):
    def __init__(self):
        try:
            xml_path = os.environ["XML_PATH"]
            bin_path = os.environ["BIN_PATH"]

        except KeyError:
            print("Please set the environment variables XML_PATH, BIN_PATH")
            sys.exit(1)

        xml_local_path = GetLocalPath(xml_path)
        bin_local_path = GetLocalPath(bin_path)
        print('path object', xml_local_path)

        CPU_EXTENSION = os.getenv('CPU_EXTENSION', "/usr/local/lib/libcpu_extension.so")

        plugin = IEPlugin(device='CPU', plugin_dirs=None)
        if CPU_EXTENSION:
            plugin.add_cpu_extension(CPU_EXTENSION)
        net = IENetwork(model=xml_local_path, weights=bin_local_path)
        self.input_blob = next(iter(net.inputs))
        self.out_blob = next(iter(net.outputs))
        self.batch_size = net.inputs[self.input_blob].shape[0]
        self.inputs = net.inputs
        self.outputs = net.outputs
        self.exec_net = plugin.load(network=net, num_requests=self.batch_size)

    def predict(self, X, feature_names):
        start_time = datetime.datetime.now()
        results = self.exec_net.infer(inputs={self.input_blob: X})
        predictions = results[self.out_blob]
        end_time = datetime.datetime.now()
        duration = (end_time - start_time).total_seconds() * 1000
        logger.debug("Processing time: {:.2f} ms".format(duration))
        return predictions.astype(np.float64)

@@ -0,0 +1,41 @@
# OpenVINO prediction component

Model configuration is set through environment variables:

`XML_PATH` - s3, gs or local path to the OpenVINO model .xml file (network topology)

`BIN_PATH` - s3, gs or local path to the OpenVINO model .bin file (weights)

When using GCS, make sure you also set the `GOOGLE_APPLICATION_CREDENTIALS` variable and mount the corresponding token file.

For S3 or Minio storage, add the appropriate environment variables with the credentials, for example:
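A minimal sketch of the typical settings (all values below are placeholders; `S3_ENDPOINT` is read by this component, while the AWS keys are picked up by boto3):

```bash
# GCS: mount the service account token and point GOOGLE_APPLICATION_CREDENTIALS at it
export GOOGLE_APPLICATION_CREDENTIALS=/etc/gcp.json

# S3 / Minio: standard AWS credentials plus an optional custom endpoint
export AWS_ACCESS_KEY_ID=<access key>
export AWS_SECRET_ACCESS_KEY=<secret key>
export S3_ENDPOINT=http://minio-service:9000   # placeholder Minio endpoint
```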

The component executes the inference operation. Processing time is reported in the component's debug logs.

Model input and output tensors are determined automatically; a single input tensor and a single output tensor are assumed.

### Building example:

```bash
s2i build -E environment_grpc . seldon_openvino_base:latest seldon-openvino-prediction:0.1
```
The base image `seldon_openvino_base:latest` should be created according to this [procedure](../../../../../wrappers/s2i/python_openvino)


### Local testing example:

```bash
docker run -it -v $GOOGLE_APPLICATION_CREDENTIALS:/etc/gcp.json -e GOOGLE_APPLICATION_CREDENTIALS=/etc/gcp.json \
-e XML_PATH=gs://inference-eu/models_zoo/resnet_V1_50/resnet_V1_50.xml \
-e BIN_PATH=gs://inference-eu/models_zoo/resnet_V1_50/resnet_V1_50.bin \
seldon-openvino-prediction:0.1

starting microservice
2019-02-05 11:13:32,045 - seldon_core.microservice:main:261 - INFO: Starting microservice.py:main
2019-02-05 11:13:32,047 - seldon_core.microservice:main:292 - INFO: Annotations: {}
path object /tmp/resnet_V1_50.xml
net = IENetwork(model=xml_local_path, weights=bin_local_path)
2019-02-05 11:14:19,870 - seldon_core.microservice:main:354 - INFO: Starting servers
2019-02-05 11:14:19,906 - seldon_core.microservice:grpc_prediction_server:333 - INFO: GRPC microservice Running on port 5000
```
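Once the microservice is running, it can be queried over gRPC on port 5000. A minimal client sketch, assuming the `seldon_core` Python package with its generated gRPC stubs is installed and the model expects a 1x3x224x224 input (shape and port are assumptions for this example):

```python
import grpc
import numpy as np
import tensorflow as tf
from seldon_core.proto import prediction_pb2, prediction_pb2_grpc

# Dummy input in NCHW layout; the shape is an assumption for resnet_V1_50.
X = np.random.rand(1, 3, 224, 224).astype(np.float32)
datadef = prediction_pb2.DefaultData(names=['x'], tftensor=tf.make_tensor_proto(X))
request = prediction_pb2.SeldonMessage(data=datadef)

channel = grpc.insecure_channel('localhost:5000')
stub = prediction_pb2_grpc.ModelStub(channel)
response = stub.Predict(request)
print(tf.make_ndarray(response.data.tftensor).shape)
```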



@@ -0,0 +1,5 @@
MODEL_NAME=Prediction
API_TYPE=GRPC
SERVICE_TYPE=MODEL
PERSISTENCE=0

@@ -0,0 +1,2 @@
google-cloud-storage==1.13.0
boto3==1.9.34
@@ -1,11 +1,10 @@
 import numpy as np
-from keras.applications.imagenet_utils import preprocess_input, decode_predictions
-from keras.preprocessing import image
 from seldon_core.proto import prediction_pb2
 import tensorflow as tf
 import logging
 import sys
-import io
+import datetime
+import cv2
+import os
 
 logger = logging.getLogger(__name__)

@@ -14,26 +13,48 @@ def __init__(self, metrics_ok=True):
         print("Init called")
         f = open('imagenet_classes.json')
         self.cnames = eval(f.read())
 
+        self.size = os.getenv('SIZE', 224)
+        self.dtype = os.getenv('DTYPE', 'float')
+        self.classes = os.getenv('CLASSES', 1000)
+
+    def crop_resize(self, img, cropx, cropy):
+        y, x, c = img.shape
+        if y < cropy:
+            img = cv2.resize(img, (x, cropy))
+            y = cropy
+        if x < cropx:
+            img = cv2.resize(img, (cropx, y))
+            x = cropx
+        startx = x//2-(cropx//2)
+        starty = y//2-(cropy//2)
+        return img[starty:starty+cropy,startx:startx+cropx,:]
+
     def transform_input_grpc(self, request):
-        logger.debug("Transform called")
-        b = io.BytesIO(request.binData)
-        img = image.load_img(b, target_size=(227, 227))
-        X = image.img_to_array(img)
-        X = np.expand_dims(X, axis=0)
-        X = preprocess_input(X)
-        X = X.transpose((0,3,1,2))
+        logger.info("Transform called")
+        start_time = datetime.datetime.now()
+        X = np.frombuffer(request.binData, dtype=np.uint8)
+        X = cv2.imdecode(X, cv2.IMREAD_COLOR)  # BGR format
+        X = self.crop_resize(X, self.size, self.size)
+        X = X.astype(self.dtype)
+        X = X.transpose(2,0,1).reshape(1,3,self.size,self.size)
+        logger.info("Shape: %s; Dtype: %s; Min: %s; Max: %s",X.shape,X.dtype,np.amin(X),np.amax(X))
+        jpeg_time = datetime.datetime.now()
+        jpeg_duration = (jpeg_time - start_time).total_seconds() * 1000
+        logger.info("jpeg preprocessing: %s ms", jpeg_duration)
         datadef = prediction_pb2.DefaultData(
             names = 'x',
             tftensor = tf.make_tensor_proto(X)
         )
+        end_time = datetime.datetime.now()
+        duration = (end_time - start_time).total_seconds() * 1000
+        logger.info("Total transformation: %s ms", duration)
         request = prediction_pb2.SeldonMessage(data = datadef)
         return request
 
     def transform_output_grpc(self, request):
-        logger.debug("Transform output called")
+        logger.info("Transform output called")
         result = tf.make_ndarray(request.data.tftensor)
-        result = result.reshape(1,1000)
+        result = result.reshape(1,self.classes)
 
         single_result = result[[0],...]
         ma = np.argmax(single_result)
@@ -1,3 +1,3 @@

 build:
-    s2i build -E environment_grpc . seldonio/seldon-core-s2i-python36:0.5-SNAPSHOT seldonio/imagenet_transformer:0.1
+    s2i build -E environment_grpc . seldon_openvino_base:latest seldonio/imagenet_transformer:0.1
@@ -0,0 +1,24 @@
## Transformer component

Example implementation of data transformation tasks.

The input transformation function accepts the binary representation of JPEG content as input.
It performs the following operations (a minimal sketch of these steps is shown below):
- convert the compressed JPEG content to a numpy array (BGR format)
- crop/resize the image to the square shape set in the environment variable `SIZE` (224 by default)
- transpose the data to NCHW
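A minimal standalone sketch of these steps (simplified: it only center-crops and assumes the source image is at least `SIZE` pixels in each dimension, whereas the component's `crop_resize` also upscales smaller images):

```python
import cv2
import numpy as np

SIZE = 224  # mirrors the SIZE environment variable default

def preprocess(jpeg_bytes, size=SIZE):
    # Decode compressed JPEG bytes into an HWC uint8 array (BGR channel order).
    img = cv2.imdecode(np.frombuffer(jpeg_bytes, dtype=np.uint8), cv2.IMREAD_COLOR)
    # Center-crop to a square of side `size`.
    h, w, _ = img.shape
    y0, x0 = (h - size) // 2, (w - size) // 2
    img = img[y0:y0 + size, x0:x0 + size, :]
    # HWC -> NCHW float batch of one.
    return img.transpose(2, 0, 1).reshape(1, 3, size, size).astype('float')

# dog.jpeg is one of the sample images listed in input_images.txt
with open('dog.jpeg', 'rb') as f:
    batch = preprocess(f.read())
print(batch.shape)  # (1, 3, 224, 224)
```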


The output transformation function consumes the results of ImageNet classification models.
It converts the array of per-class probabilities into a class name and returns a human-readable string with the most likely class name, as sketched below.
The function uses the `CLASSES` environment variable to define the expected number of classes in the model output.
Depending on the model, it could be 1000 (the default value) or 1001.
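A minimal sketch of that mapping (the class-name lookup below is an illustrative subset; the component loads the full mapping from `imagenet_classes.json`, whose exact format may differ):

```python
import numpy as np

# Illustrative class-index-to-name entries, matching the sample images
# listed in input_images.txt.
cnames = {144: 'pelican', 248: 'Eskimo dog, husky', 340: 'zebra'}

def top_class_name(model_output, classes=1000):
    # model_output holds one score per ImageNet class.
    result = np.asarray(model_output).reshape(1, classes)
    idx = int(np.argmax(result[0]))
    return cnames.get(idx, 'class {}'.format(idx))
```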


### Building example:
```bash
s2i build -E environment_grpc . seldon_openvino_base:latest seldonio/imagenet_transformer:0.1
```

The base image `seldon_openvino_base:latest` should be created according to this [procedure](../../../../../wrappers/s2i/python_openvino)
@@ -2,3 +2,4 @@ MODEL_NAME=ImageNetTransformer
API_TYPE=GRPC
SERVICE_TYPE=TRANSFORMER
PERSISTENCE=0

@@ -1,3 +0,0 @@
-numpy>=1.8.2
-keras
-pillow
