Deploying machine learning models using 10+ different deployment tools
Updated May 30, 2022 - Python
A demo to accompany our blog post "Scalable Machine Learning with Kafka Streams and KServe"
Client/server system for performing distributed inference on high-load systems.
An end-to-end machine learning prediction pipeline for the Rossmann store sales problem
My repo for the Machine Learning Engineering bootcamp 2022 by DataTalks.Club
KServe Inference Graph Example
Hands-on labs on deploying machine learning models with tf-serving and KServe
AWS SageMaker, SeldonCore, KServe, Kubeflow & MLflow, VectorDB
Everything to get industrial kubeflow applications running in production
A scalable RAG-based Wikipedia Chat Assistant that leverages the Llama-2-7b-chat LLM, served for inference with KServe
🪐 1-click Kubeflow using ArgoCD
KServe TrustyAI explainer
Carbon Limiting Auto Tuning for Kubernetes
Kubeflow examples - Notebooks, Pipelines, Models, Model tuning and more
TeiaCareInferenceClient is a C++ inference client library that implements KServe protocol.
Collection of best practices, reference architectures, examples, and utilities for foundation model development and deployment on AWS.
Hopsworks - Data-Intensive AI platform with a Feature Store
Standardized Serverless ML Inference Platform on Kubernetes
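Several of the projects above (the C++ inference client, the serverless inference platform, the KServe labs) speak KServe's v2 REST API, also known as the Open Inference Protocol. As a rough sketch of what a request body looks like under that protocol — the model name, tensor name, host, and data below are hypothetical examples, not taken from any repository listed here:

```python
import json

# Build a KServe v2 / Open Inference Protocol request body.
# Tensor name, shape, and values are illustrative placeholders.
payload = {
    "inputs": [
        {
            "name": "input-0",            # tensor name the model expects
            "shape": [1, 4],              # batch of one, four features
            "datatype": "FP32",           # v2 protocol datatype string
            "data": [5.1, 3.5, 1.4, 0.2],
        }
    ]
}

body = json.dumps(payload)
# A client would POST this JSON to an endpoint of the form:
#   http://<host>/v2/models/<model-name>/infer
print(body)
```

The server replies with an analogous `outputs` array of named tensors, which is what makes clients in different languages (Python, C++, Java) interoperable against the same serving runtime.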