
Facial Keypoints Detection

A computer vision project to build a facial keypoints detection system.

Project Overview

  • A facial keypoints detection system has a variety of applications, including:
    • Facial tracking.
    • Facial pose recognition.
    • Facial filters.
    • Emotion recognition.
    • Medical diagnosis: Identifying dysmorphic facial symptoms.
  • Detecting facial keypoints is a challenging problem given the variations in both facial features and image conditions. Facial features differ in size, position, pose, and expression, while image quality varies with illumination and viewing angle.
  • In this project, a Convolutional Neural Network (CNN) based facial keypoints detection system has been implemented to detect 68 facial keypoints (also called facial landmarks) around important areas of the face (the eyes, corners of the mouth, the nose, etc.) using computer vision techniques and deep learning architectures.
  • The project is broken up into a few main parts in 4 Python notebooks:
    • Notebook 1: Loading and Visualizing the Facial Keypoint Data.
    • Notebook 2: Defining and Training a Convolutional Neural Network (CNN) to Predict Facial Keypoints (an illustrative model sketch follows this list).
    • Notebook 3: Facial Keypoint Detection Using Haar Cascades and a Trained CNN.
    • Notebook 4: Applications - Facial filters, Face Blur.
  • The implemented code is provided as the Python package facial_keypoints_detecter.
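
The CNN of Notebook 2 maps a face crop to 68 (x, y) coordinates. As orientation only, a minimal PyTorch sketch of such a model is shown below; the layer sizes and the 224×224 grayscale input are illustrative assumptions, not the project's exact architecture (which lives in the package).

# Illustrative sketch only: a small CNN that regresses 68 (x, y) keypoints
# from a 224x224 grayscale face crop. Layer sizes are assumptions, not the
# project's exact architecture (see Notebook 2 / facial_keypoints_detecter).
import torch.nn as nn

class KeypointNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 224 -> 220 -> 110
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),  # 110 -> 108 -> 54
            nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2), # 54 -> 52 -> 26
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 26 * 26, 512), nn.ReLU(), nn.Dropout(0.4),
            nn.Linear(512, 68 * 2),        # 68 keypoints, (x, y) each
        )

    def forward(self, x):                  # x: (N, 1, 224, 224)
        return self.regressor(self.features(x)).view(-1, 68, 2)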

Data Description

  • Facial keypoints are the small magenta dots shown on each of the faces in the image above.
  • In each training and test image, there is a single face and 68 keypoints, with coordinates (x, y), for that face.
  • These keypoints mark important areas of the face: the eyes, corners of the mouth, the nose, etc. Here they are, numbered, and you can see that specific ranges of points match different portions of the face.
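
As a quick illustration (not the project's code), assuming the keypoints of one face are available as a (68, 2) NumPy array of (x, y) values, they can be drawn over the image with Matplotlib:

# Illustrative sketch: overlay 68 keypoints on a face image.
# `image` (H x W x 3 array) and `keypoints` ((68, 2) array of x, y) are assumed inputs.
import matplotlib.pyplot as plt

def show_keypoints(image, keypoints):
    plt.imshow(image)
    plt.scatter(keypoints[:, 0], keypoints[:, 1], s=20, marker='.', c='m')  # magenta dots
    plt.axis('off')
    plt.show()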

Training and Testing Data

Original + Augmented data

Note: Datasets are explored in Notebook 1.
Note: This set of image data has been extracted from the YouTube Faces Dataset, which consists of videos of people collected from YouTube. These videos have been fed through some processing steps and turned into sets of image frames, each containing one face and its associated keypoints.

# Create a directory for the dataset
mkdir data

# Download the train/test archive
wget -P data/ https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5aea1b91_train-test-data/train-test-data.zip

# Extract it into data/ (skipping files that already exist)
unzip -n data/train-test-data.zip -d data
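
After extraction, the annotations can be read roughly as sketched below; the CSV file name and column layout are assumptions about the archive's contents, and the actual exploration is done in Notebook 1.

# Sketch of reading the keypoint annotations after extraction. The CSV name and
# column layout below are assumptions about the archive; Notebook 1 does the
# actual exploration.
import pandas as pd

key_pts_frame = pd.read_csv('data/training_frames_keypoints.csv')
image_name = key_pts_frame.iloc[0, 0]                                        # image file name
key_pts = key_pts_frame.iloc[0, 1:].values.astype('float').reshape(-1, 2)    # (68, 2) x, y pairs
print(image_name, key_pts.shape)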

Results

  • A custom-made Python package facial_keypoints_detecter, which contains a classifier, plotting & feature-extraction functionality, and the datasets for the project.
  • The trained model has been applied to 2 example applications:
    1. Facial filters
    2. Face blur

Facial filters
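
A rough sketch of how a filter overlay can be placed using the predicted keypoints is given below; it assumes the standard 68-landmark indexing and a transparent PNG loaded with an alpha channel, and is not the project's exact code.

# Illustrative sketch (not the project's exact code): paste a transparent
# "sunglasses" PNG over the eye region using the predicted keypoints.
# Assumes the standard 68-landmark indexing (17-26 = eyebrows, 27-35 = nose).
import cv2

def apply_sunglasses(image, keypoints, sunglasses):
    # keypoints: (68, 2) array; sunglasses: RGBA image read with cv2.IMREAD_UNCHANGED
    x, y = keypoints[17].astype(int)                   # outer edge of the left eyebrow (image coords)
    w = int(abs(keypoints[26, 0] - keypoints[17, 0]))  # eyebrow-to-eyebrow width
    h = int(abs(keypoints[27, 1] - keypoints[34, 1]))  # roughly nose-bridge-to-nose-tip height
    resized = cv2.resize(sunglasses, (w, h), interpolation=cv2.INTER_CUBIC)
    opaque = resized[:, :, 3] > 0                      # non-transparent pixels of the sticker
    roi = image[y:y + h, x:x + w]
    roi[opaque] = resized[:, :, :3][opaque]            # paste opaque pixels only
    return image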

Face blur
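
A minimal sketch of the face-blur idea, written here with plain OpenCV rather than the package API; the Haar-cascade file path is an assumption.

# Illustrative sketch: blur every face found by an OpenCV Haar cascade.
# The cascade file path is an assumption; OpenCV ships the XML cascade files.
import cv2

def blur_faces(image, cascade_path='haarcascade_frontalface_default.xml'):
    face_cascade = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)  # heavy blur on the face region
    return image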

Feature visualization
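
Learned convolutional kernels of a trained model can be inspected roughly as sketched below; `net` is assumed to be a trained PyTorch model whose first parameter is the weight tensor of its first convolutional layer.

# Illustrative sketch: visualize kernels of the first convolutional layer of a
# trained model. `net` is assumed to be a PyTorch model whose first parameter
# is the weight tensor of its first Conv2d layer.
import matplotlib.pyplot as plt

def show_first_layer_kernels(net, n=8):
    weights = next(net.parameters()).data.cpu().numpy()   # (out_channels, in_channels, k, k)
    fig, axes = plt.subplots(1, n, figsize=(2 * n, 2))
    for i, ax in enumerate(axes):
        ax.imshow(weights[i, 0], cmap='gray')              # i-th kernel, first input channel
        ax.axis('off')
    plt.show()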


Python package facial_keypoints_detecter

  • This project utilizes a custom-made package facial_keypoints_detecter which contains a classifier, plotting & feature extraction functionalities, and datasets for the project.
  • Libraries used: Python 3, PyTorch, torchvision, OpenCV-Python, Matplotlib, pandas, numpy.
  • This library contains a CNN model, pre-processing tools, plotting tools, and dataset-loading tools for this project.

Dependencies

Python 3, PyTorch, torchvision, OpenCV-Python, Matplotlib, pandas, numpy.

Installation

# Install package from PyPI >>
pip install facial_keypoints_detecter
# or
# Install package from GitHub >>
pip install git+https://github.com/ShashankKumbhare/facial-keypoints-detecter.git#egg=facial-keypoints-detecter

Package usage has been demonstrated in Notebook 1, Notebook 2, Notebook 3, Notebook 4.
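
The package API itself is demonstrated in the notebooks. For a rough idea of the underlying flow in Notebook 3 (written here with plain OpenCV/PyTorch rather than the package API, and with an assumed 224×224 input size and cascade path), the detection step boils down to:

# Illustrative end-to-end sketch (plain OpenCV + PyTorch, not the package API):
# detect a face with a Haar cascade, pre-process the crop, and predict 68 keypoints.
# The 224x224 input size and the cascade path are assumptions.
import cv2
import torch

def detect_keypoints(image_bgr, net, cascade_path='haarcascade_frontalface_default.xml'):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cv2.CascadeClassifier(cascade_path).detectMultiScale(gray, 1.2, 5)
    results = []
    for (x, y, w, h) in faces:
        roi = cv2.resize(gray[y:y + h, x:x + w], (224, 224))                      # crop and resize the face
        tensor = torch.from_numpy(roi / 255.0).float().unsqueeze(0).unsqueeze(0)  # (1, 1, 224, 224)
        with torch.no_grad():
            keypoints = net(tensor).view(68, 2).numpy()                           # keypoints in crop coordinates
        results.append(keypoints)
    return results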