FastText container for Amazon SageMaker
Updated Aug 7, 2018 - Python
A PyTorch RNN model for Sentiment Analysis deployed with AWS SageMaker
Simple guide to use tf.estimator and deploy to AWS SageMaker (after training with your GPU)
Build a Custom Object Detection Model from Scratch with Amazon SageMaker and Deploy it at the Edge with AWS DeepLens. This workshop explains how you can leverage DeepLens to capture data at the edge and build a training data set with Amazon SageMaker Ground Truth. Then, train an object detection model with Amazon SageMaker and deploy it to AWS D…
Machine Learning on AWS using various methods/examples
Example of how to use word embeddings with the BlazingText algorithm in Amazon SageMaker on the entire contents of Wikipedia for a foreign language (Hebrew).
Jupyter notebooks to help team members utilize AWS SageMaker tools
The repository contains projects and tutorials completed as a part of Udacity Machine Learning Engineer Nanodegree
A small collection of custom kernels for running SageMaker Notebooks and Training Jobs
Snowflake Guide: Building a Recommendation Engine Using Snowflake & Amazon SageMaker
Set up an end-to-end demo architecture for predicting fraud events with Machine Learning using Amazon SageMaker
Sample code to run an Amazon SageMaker endpoint for inference with a pretrained model from TensorFlow Hub
Fun project to train and deploy a text classification model over Women's e-commerce Clothing reviews
Goal: develop a Machine Learning application in a distributed environment using AWS services with Spark.
An end-to-end example of a serverless machine learning pipeline for multiclass classification on AWS with SageMaker Pipelines, Data Wrangler, Athena and XGBoost.
Amazon SageMaker DeepAR Spanish Workshop
SageMaker Experiments and DVC
This solution shows how to deliver reusable and self-contained custom components to Amazon SageMaker environment using AWS Service Catalog, AWS CloudFormation, SageMaker Projects and SageMaker Pipelines.
Run Multiple Models on the Same GPU with Amazon SageMaker Multi-Model Endpoints Powered by NVIDIA Triton Inference Server. A Java client is also provided.
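Many of the repositories above serve predictions through SageMaker real-time endpoints. A minimal sketch of what invoking such an endpoint looks like, assuming a JSON request/response contract (the endpoint name, payload shape, and helper functions here are illustrative assumptions, not part of any listed repo):

```python
import json


def build_payload(features):
    """Serialize one feature vector into a JSON request body.
    The {"instances": [...]} shape is a common convention for
    TensorFlow Serving-based endpoints, assumed here."""
    return json.dumps({"instances": [features]})


def parse_response(body):
    """Extract the predictions list from a JSON response body."""
    return json.loads(body)["predictions"]


# Calling the endpoint itself needs AWS credentials and a deployed
# model, so it is shown commented out; with boto3 it would look like:
#
#   import boto3
#   runtime = boto3.client("sagemaker-runtime")
#   resp = runtime.invoke_endpoint(
#       EndpointName="my-endpoint",        # hypothetical name
#       ContentType="application/json",
#       Body=build_payload([1.0, 2.0, 3.0]),
#   )
#   predictions = parse_response(resp["Body"].read())

if __name__ == "__main__":
    # Round-trip the helpers locally with a fake response body.
    print(build_payload([1.0, 2.0]))
    print(parse_response('{"predictions": [[0.1, 0.9]]}'))
```

The serialization helpers are kept separate from the (commented) network call so the request/response contract can be exercised without AWS access.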