Lip-reading is the interpretation of a speaker's lip movements during speech. Using visual information obtained by localizing facial landmarks, the words being spoken by a user can be decoded. Studies across several years report improved software-based speech intelligibility when audio is combined with visual information from facial movements, enabling robust audio-visual speech recognition. However, most earlier lip-reading software is not freely available for users to adopt, integrate into their own work, or extend into better systems. This project therefore aims to build open-source software that detects lip movements and decodes the words being spoken by the speaker. The task is accomplished by collecting visual information on lip movements and processing the captured data through a machine learning pipeline that ultimately trains a deep learning model. This pre-processing of the video data analyzes the information efficiently and effectively, producing reliable results. The software has a wide range of applications, for example predicting what a speech-impaired person wants to say, monitoring conversations, and automating the detection of offensive words or phrases in speech.
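The localization step described above can be sketched as follows. This is a minimal, self-contained illustration assuming dlib's standard 68-point facial landmark convention, in which indices 48-67 cover the mouth region; in the full pipeline the landmarks would come from running dlib's face detector and shape predictor on each video frame, and the returned box would be used to crop the lip region before feeding it to the model. The `mouth_bounding_box` helper and the synthetic landmark data are illustrative, not part of the project code.

```python
# Lip-localization sketch, assuming dlib's 68-point facial landmark
# convention where indices 48-67 are the mouth landmarks. In the real
# pipeline these points come from dlib's shape_predictor per frame.

MOUTH_START, MOUTH_END = 48, 68  # dlib 68-point model: mouth landmarks

def mouth_bounding_box(landmarks, margin=5):
    """Return a (left, top, right, bottom) crop box around the mouth.

    landmarks: list of 68 (x, y) tuples for one detected face.
    margin: padding in pixels so the crop is not too tight.
    """
    mouth = landmarks[MOUTH_START:MOUTH_END]
    xs = [p[0] for p in mouth]
    ys = [p[1] for p in mouth]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

# Tiny synthetic example (hypothetical data): 68 fake landmarks whose
# mouth points span x = 100..118 and y = 150..155.
fake_landmarks = [(0, 0)] * 48 + [(100 + (i % 10) * 2, 150 + (i // 10) * 5)
                                  for i in range(20)]
print(mouth_bounding_box(fake_landmarks))  # prints (95, 145, 123, 160)
```

The cropped lip frames produced this way would then be stacked per video clip and used as training input for the deep learning model.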
Language: Python
Libraries: DLIB, Keras
Dataset: https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrw1.html
For more information about the project and dataset, you can contact me at: usamatariq135@gmail.com