
Deep-Learning-Explained

This repository contains the lab files for Microsoft course DAT236x: Deep Learning Explained

For each of Modules 2-7, you must complete the lab and then answer the evaluation questions.

Module 2, Lab

Getting Started with Keras

This lab gives a first look at using the Keras package to define, train, and evaluate deep learning models. By the end of this lesson you will be able to work with basic multi-layer feedforward neural networks.

The details of feedforward models and regularization will be introduced in other lessons. For now, focus on becoming comfortable with using Keras so you are prepared for the rest of the labs in this course.
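As a taste of the workflow, here is a minimal sketch of defining, training, and evaluating a feedforward model in Keras. It uses randomly generated stand-in data rather than the lab's dataset, and the exact import paths may differ slightly depending on whether you use standalone Keras or tensorflow.keras.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Randomly generated stand-in data: 1000 samples, 20 features, binary labels.
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

# Define a small multi-layer feedforward network.
model = Sequential([
    Dense(32, activation='relu', input_shape=(20,)),
    Dense(16, activation='relu'),
    Dense(1, activation='sigmoid'),
])

# Compile, train, and evaluate.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
print('accuracy: %.3f' % acc)
```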

Module 3, Lab

Introduction to Deep Neural Networks

This lab introduces you to the basics of neural network architecture in the form of deep feedforward networks, the quintessential deep neural net architecture. In this lab you will learn the following:

  • Why deep learning is important and how it relates to representation, learning, and inference.
  • How a basic perceptron works.
  • How to apply different types of loss functions.
  • Why nonlinear activation functions are important and why rectified linear units are a good choice.
  • How backpropagation works, and how the chain rule of calculus is applied to compute gradients (see the sketch after this list).
  • The architectural trade-off between depth and width in deep networks.
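As a rough illustration of these ideas (not the lab's own code), the sketch below trains a tiny one-hidden-layer network by hand in NumPy: a perceptron-style forward pass with a ReLU hidden layer, a mean squared error loss, and gradients computed layer by layer with the chain rule. The toy data and layer sizes are arbitrary choices for the example.

```python
import numpy as np

# Toy data: learn y = x1 + x2 from random inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 2))
y = X.sum(axis=1, keepdims=True)

W1, b1 = rng.normal(size=(2, 8)) * 0.5, np.zeros(8)   # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)   # output layer weights
lr = 0.05

for step in range(500):
    # Forward pass.
    z1 = X @ W1 + b1
    h = np.maximum(z1, 0.0)           # ReLU activation
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)  # mean squared error loss

    # Backward pass: apply the chain rule layer by layer.
    d_yhat = 2.0 * (y_hat - y) / len(X)   # dL/dy_hat
    dW2 = h.T @ d_yhat                    # dL/dW2
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T                   # dL/dh
    d_z1 = d_h * (z1 > 0)                 # ReLU derivative gates the gradient
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print('final MSE: %.4f' % loss)
```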

Module 4, Lab

Introduction to Regularization for Deep Neural Nets

This lab introduces you to the principles of regularization required to successfully train deep neural networks. In this lesson you will:

  1. Understand the need for regularization of complex machine learning models, particularly deep NNs.
  2. Know how to apply constraint-based regularization using the L1 and L2 norms.
  3. Understand and apply the concept of data augmentation.
  4. Know how to apply dropout regularization.
  5. Understand and apply early stopping.
  6. Understand the advantages of the various regularization methods and know when and how to apply them in combination (see the sketch after this list).
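To make these techniques concrete, here is a minimal sketch (not the lab code) that combines three of them in Keras: L2 weight penalties, dropout between layers, and an early stopping callback. The data is a randomly generated stand-in, and the hyperparameter values are illustrative only.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.regularizers import l2
from keras.callbacks import EarlyStopping

# Randomly generated stand-in data.
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

# L2 penalties constrain the weights; dropout randomly silences units during training.
model = Sequential([
    Dense(64, activation='relu', input_shape=(20,), kernel_regularizer=l2(1e-3)),
    Dropout(0.5),
    Dense(32, activation='relu', kernel_regularizer=l2(1e-3)),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Early stopping halts training once the validation loss stops improving.
stopper = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
model.fit(X, y, epochs=100, batch_size=32, validation_split=0.2,
          callbacks=[stopper], verbose=0)
```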

Module 5, Lab

Optimization for Neural Network Training

Deep neural networks are trained by learning a set of weights. The optimal weights are learned by minimizing the loss function for the neural network. This minimization is performed using an optimization algorithm. Thus, optimization algorithms are an essential component of your neural network toolbox.

In this lab you will become familiar with the basic optimization algorithms used to train deep neural networks, along with their pitfalls. The nonlinear nature of neural networks produces loss surfaces where local gradient information can be misleading, for example near local minima and saddle points.
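As a small illustration, the sketch below compiles and trains a model with stochastic gradient descent plus momentum, one of the basic optimizers covered in this module. The data is a synthetic stand-in, and in older Keras versions the learning-rate argument is spelled lr rather than learning_rate.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# Synthetic stand-in data.
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

model = Sequential([
    Dense(32, activation='relu', input_shape=(20,)),
    Dense(1, activation='sigmoid'),
])

# Momentum helps the optimizer roll through small local irregularities and
# damps oscillation in narrow valleys of the loss surface.
opt = SGD(learning_rate=0.01, momentum=0.9)
model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
```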

Module 6, Lab

Introduction to Convolutional Neural Networks

This lesson introduces you to a powerful neural network architecture known as the convolutional neural network. Convolutional neural networks operate by learning a set of filters, or convolution kernels. Using a process known as convolution, these filters extract feature maps from the raw data. A feature map is a lower-dimensional representation of the raw input features.
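The sketch below (not the lab code) stacks two convolutional layers in Keras to show how learned filters produce feature maps of shrinking spatial size. The input shape assumes 28x28 grayscale images, which is an arbitrary choice for the example.

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# A minimal convolutional network for 28x28 grayscale images with 10 classes.
model = Sequential([
    # 16 filters (convolution kernels), each 3x3, producing 16 feature maps.
    Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    # Pooling reduces the spatial size of each feature map.
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    # Flatten the feature maps and classify with dense layers.
    Flatten(),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```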

Module 7, Lab

Introduction to Recurrent Neural Networks

In this lab you will explore recurrent neural networks (RNNs). Recurrent neural networks use a distinctive architecture that is well suited to sequence data. Sequence data includes human speech, natural language, and numerical time series. Natural language applications include machine translation and question answering systems. RNNs can also be applied to multi-dimensional data; for example, RNNs are used to caption images.
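As a minimal sketch of the idea (not the lab code), the model below embeds sequences of token ids and feeds them to an LSTM, a common recurrent layer. The vocabulary size, sequence length, and labels are synthetic stand-ins chosen for the example.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

# Synthetic stand-in for sequence data: 500 sequences of 50 token ids each,
# with one binary label per sequence (e.g. sentiment).
X = np.random.randint(1, 10000, size=(500, 50))
y = np.random.randint(0, 2, size=(500,))

model = Sequential([
    Embedding(input_dim=10000, output_dim=32),
    # The LSTM reads each sequence one step at a time, carrying hidden state
    # forward so earlier tokens can influence later predictions.
    LSTM(64),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```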

MIT License

Copyright (c) 2018 Microsoft

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
