Working on Model Inference Servers @aws
- Amazon Web Services
- San Francisco, CA
- https://www.linkedin.com/in/aaquib/
Pinned
- pytorch/serve: Serve, optimize and scale PyTorch models in production
- deepjavalibrary/djl-serving: A universal scalable machine learning model deployment solution
- triton-inference-server/server: The Triton Inference Server provides an optimized cloud and edge inferencing solution
- awslabs/multi-model-server: Multi Model Server is a tool for serving neural net models for inference
- aws/deep-learning-containers: AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet
- aws/sagemaker-inference-toolkit: Serve machine learning models within a 🐳 Docker container using 🧠 Amazon SageMaker