Seldon Core

Build status badges are available for the master and release-0.1 branches.

Seldon Core is an open source platform for deploying machine learning models on Kubernetes.

Goals

Machine learning deployment has many challenges, and Seldon Core intends to help with them. Its high-level goals are:

  • Allow data scientists to create models using any machine learning toolkit or programming language. We initially plan to cover the tools/languages below:
    • Python-based models, including:
      • TensorFlow models
      • scikit-learn models
    • Spark models
    • H2O models
    • R models
  • Automatically expose machine learning models via REST and gRPC when deployed, for easy integration into business apps that need predictions.
  • Allow complex runtime inference graphs to be deployed as microservices. These graphs can be composed of the following (a minimal sketch follows this list):
    • Models - runtime inference executables for machine learning models
    • Routers - route API requests to sub-graphs. Examples: A/B tests, multi-armed bandits.
    • Combiners - combine the responses from sub-graphs. Example: ensembles of models.
    • Transformers - transform requests or responses. Example: transform feature requests.
  • Handle full lifecycle management of the deployed model:
    • Updating the runtime graph with no downtime
    • Scaling
    • Monitoring
    • Security
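
To make these component contracts concrete, here is a minimal Python sketch of a model and a router as they might be written for the Python wrapper. The class names are invented for illustration, and the predict/route signatures follow the wrapper convention described in the Seldon docs; treat them as assumptions rather than a definitive interface.

import random

import numpy as np


class ExampleModel(object):
    # Minimal MODEL component: returns a prediction for each incoming request.
    def predict(self, X, features_names):
        # X is the feature array sent by the caller; a real model would run
        # inference here instead of summing the features.
        return np.sum(X, axis=1, keepdims=True)


class ExampleRouter(object):
    # Minimal ROUTER component: picks which child sub-graph handles a request,
    # here a simple 90/10 A/B split between two children.
    def route(self, features, feature_names):
        return 0 if random.random() < 0.9 else 1

Combiners and transformers follow the same pattern, each exposing its own entry-point method.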

Prerequisites

A Kubernetes cluster.
Kubernetes can be deployed in many environments, both in the cloud and on-premise.

Quick Start

Advanced Tutorials

  • Advanced graphs showing the various types of runtime prediction graphs that can be built.

Example Components

Seldon Core allows various types of components to be built and plugged into the runtime prediction graph, including routers, transformers and combiners. Several example components are available as part of the project.

Integrations

  • seldon-core can be installed as part of the Kubeflow project. A detailed end-to-end example provides a complete workflow for training various models and deploying them using seldon-core.

Install

Official releases can be installed via helm from the repository https://storage.googleapis.com/seldon-charts.

To install seldon-core:

helm install seldon-core-crd --name seldon-core-crd --repo https://storage.googleapis.com/seldon-charts
helm install seldon-core --name seldon-core --repo https://storage.googleapis.com/seldon-charts

To install the optional analytics components, including Prometheus and Grafana with a built-in dashboard for monitoring the running ML deployments, run:

helm install seldon-core-analytics --name seldon-core-analytics \
    --set grafana_prom_admin_password=password \
    --set persistence.enabled=false \
    --repo https://storage.googleapis.com/seldon-charts

Deployment Guide

Three steps:

  1. Wrap your runtime prediction model.
  2. Define your runtime inference graph in a SeldonDeployment custom resource (a sketch follows this list).
  3. Deploy the graph.
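
As a rough illustration of steps 2 and 3, the sketch below assembles a minimal SeldonDeployment manifest as a Python dict and writes it to a JSON file that could then be applied to the cluster. The field names follow the layout used in later Seldon documentation and should be treated as assumptions (consult the reference docs for the authoritative schema); the image name and file name are hypothetical.

import json

# Hypothetical single-model inference graph: one predictor wrapping one container.
seldon_deployment = {
    "apiVersion": "machinelearning.seldon.io/v1alpha2",  # assumed CRD version
    "kind": "SeldonDeployment",
    "metadata": {"name": "example-model"},
    "spec": {
        "name": "example-model",
        "predictors": [
            {
                "name": "default",
                "replicas": 1,
                "componentSpecs": [
                    {
                        "spec": {
                            "containers": [
                                # Image built from the wrapped model of step 1 (hypothetical name).
                                {"name": "classifier", "image": "example-model:0.1"}
                            ]
                        }
                    }
                ],
                # The runtime inference graph: a single MODEL node served over REST.
                "graph": {
                    "name": "classifier",  # must match the container name above
                    "type": "MODEL",
                    "endpoint": {"type": "REST"},
                },
            }
        ],
    },
}

# kubectl accepts JSON as well as YAML, so the manifest can be written out and
# applied with: kubectl apply -f example-model-deployment.json
with open("example-model-deployment.json", "w") as f:
    json.dump(seldon_deployment, f, indent=2)

Applying the manifest creates the deployment (step 3); updating the resource and re-applying it rolls out changes to the graph, in line with the lifecycle goals above.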

Reference

Testing

Community

Developer
