Face Detector

Detect faces in images using Transfer Learning.

  • Use Deep Learning techniques: CNNs, DNNs, and Transfer Learning.
  • Graph training and validation accuracy and loss to detect and prevent overfitting.
  • Test facial verification with Euclidean and Cosine distance metrics that score photo similarity (a minimal sketch follows below).
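
A minimal sketch of the two verification metrics, comparing a pair of face embeddings; the 128-dimension random vectors here are placeholders for real embeddings:

```python
import numpy as np

def euclidean_distance(a, b):
    # Straight-line distance between two embedding vectors.
    return np.linalg.norm(a - b)

def cosine_distance(a, b):
    # 1 - cosine similarity; 0 means the vectors point the same way.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 128-dimension embeddings for two photos.
emb_a, emb_b = np.random.rand(128), np.random.rand(128)
print(euclidean_distance(emb_a, emb_b), cosine_distance(emb_a, emb_b))
```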

Technologies used: Densely Connected Networks, Convolutional Neural Networks, Transfer Learning. Pretrained models: MobileNet, InceptionNet, VGG16

Obtain

Import images:
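
A minimal sketch of the import step, assuming the photos sit in a hypothetical images/ folder:

```python
from pathlib import Path
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Collect every JPEG under the (assumed) images/ folder.
image_paths = sorted(Path("images").glob("*.jpg"))
images = [img_to_array(load_img(p)) for p in image_paths]
print(f"Loaded {len(images)} images")
```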

Scrub and Clean the Dataset

  • Rescale images to the same dimensions with the keras.preprocessing.image library.
  • Split the images into training and validation sets for our supervised classification models (see the sketch below).
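
A sketch of both cleaning steps using keras.preprocessing.image; the data/ folder layout (one sub-folder per class) and the 224x224 target size are assumptions:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,      # normalize pixel values to [0, 1]
    validation_split=0.2,   # hold out 20% of images for validation
)

# flow_from_directory resizes every image to the same dimensions on load.
train_gen = datagen.flow_from_directory(
    "data", target_size=(224, 224), batch_size=32,
    class_mode="binary", subset="training")

val_gen = datagen.flow_from_directory(
    "data", target_size=(224, 224), batch_size=32,
    class_mode="binary", subset="validation")
```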

Training Models

Densely Connected Network (DCN)

  • Add two hidden layers, testing different numbers of nodes and activation functions.
  • Optimize with SGD and binary_crossentropy for thirty epochs, which yielded 99.86% accuracy (a sketch of this baseline follows). Tools used: TensorFlow, SGD, binary_crossentropy
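
A sketch of this baseline, assuming 224x224 RGB inputs and illustrative layer widths (the repo's exact node counts may differ); train_gen and val_gen come from the cleaning step above:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

dcn = Sequential([
    Flatten(input_shape=(224, 224, 3)),  # unroll pixels for the dense layers
    Dense(128, activation="relu"),       # hidden layer 1
    Dense(64, activation="relu"),        # hidden layer 2
    Dense(1, activation="sigmoid"),      # face / no-face probability
])
dcn.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
history = dcn.fit(train_gen, validation_data=val_gen, epochs=30)
```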

Convolutional Neural Network

  • Achieve an F1 score of 100%.
  • Build the convolutional neural network as a stack of sequential layers.
  • Experiment with different node counts and activation functions in each layer (sketched below).
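
A hedged sketch of such a sequential CNN; the filter counts and kernel sizes are assumptions, not the repo's exact architecture:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

cnn = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    MaxPooling2D((2, 2)),                  # downsample feature maps
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),        # face / no-face probability
])
cnn.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
```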

Drop-out Regularization - Addressing overfitting

  • Randomly drop 50% of the nodes feeding the subsequent layer, forcing the network to learn a more robust internal representation.
  • Training accuracy and F1 scores both went down.
  • The result is a more robust neural network (see the sketch below).
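
A sketch of the drop-out variant: a Dropout(0.5) layer randomly zeroes half of the activations feeding the next layer during training. Placement and layer widths are assumptions:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

cnn_dropout = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation="relu"),
    Dropout(0.5),                    # drop 50% of nodes feeding the next layer
    Dense(1, activation="sigmoid"),
])
cnn_dropout.compile(optimizer="sgd", loss="binary_crossentropy",
                    metrics=["accuracy"])
```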

Transfer Learning

With Transfer Learning, replace the last layer of a pretrained neural network so that it classifies and differentiates your target in the images.

  • The Transfer Learning models used here are deep neural networks pretrained on millions of images; their node weights have already captured robust, general-purpose features, so only the final layers need to be retrained for face detection.

Pretrained models tried:

  • MobileNet
  • InceptionNet
  • VGG16

  • For the last layer, use a sigmoid function to determine whether the image shows a face (a sketch with VGG16 follows).
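
A sketch of the setup with VGG16, the strongest of the three here; MobileNet or InceptionNet swap in the same way. The head size is an assumption:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

# Load the pretrained convolutional base and freeze its weights.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = Sequential([
    base,
    Flatten(),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),  # sigmoid head: face / no-face
])
model.compile(optimizer="sgd", loss="binary_crossentropy",
              metrics=["accuracy"])
```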

Evaluating Models

  • Achieve an F1 score of 99.84%.
  • Compare training and validation loss against training and validation accuracy to confirm a good fit (plotting sketch below).
  • Initially fit for 60 epochs, then reduced training to 10 epochs.
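
A sketch of that comparison, plotting the curves from the History object that model.fit returns (metric key names follow TensorFlow 2 conventions):

```python
import matplotlib.pyplot as plt

def plot_history(history):
    # Diverging training/validation curves are the usual sign of overfitting.
    for metric in ("loss", "accuracy"):
        plt.plot(history.history[metric], label=f"train {metric}")
        plt.plot(history.history[f"val_{metric}"], label=f"val {metric}")
    plt.xlabel("epoch")
    plt.legend()
    plt.show()

plot_history(history)  # e.g. the History returned by one of the fits above
```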

Implement

  • Test individual images with the trained model (see the sketch below).
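
A sketch of scoring one photo with the trained model; the file path is hypothetical:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Load and preprocess a single image the same way as the training data.
img = img_to_array(load_img("test_photo.jpg", target_size=(224, 224))) / 255.0
prob = model.predict(np.expand_dims(img, axis=0))[0][0]
print("face" if prob > 0.5 else "no face", f"(p={prob:.3f})")
```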

Summary

The best results were achieved with the VGG16 transfer-learning model.

Next Steps

  • Incorporate video.
  • Facial Verification.
  • Implement on an Arduino project.
