Using transfer learning for hand-drawn doodle predictions live in the browser
This was just a fun experiment to make predictions on hand-drawn sketches or doodles using transfer learning. For this, I used the 'DoodleNet' Convolutional Neural Network (CNN) model, which was created by 'Yining Shi' and is hosted by the ml5 library. Using p5 and ml5, I was able to write a small amount of high-level code to load the pretrained model and make predictions with it.
- JS, HTML, CSS
- TensorFlow.js
- Packages: ml5.js, p5.js
This pretrained model was trained on 345 categories of doodles with 50K images per category, which gives a total of 17,250,000 images, so collecting, preprocessing, and training on that data myself would have taken a lot of time and resources. The data was collected from Google's "Quick, Draw!" game, which at this point contains over 50M hand-drawn images. To explore the data and categories, please visit "Google's Quick, Draw!, The Data".
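The dataset size above follows from simple arithmetic (assuming exactly 345 categories at 50,000 images each):

```javascript
// 345 doodle categories, 50,000 images per category
const categories = 345;
const imagesPerCategory = 50000;
const totalImages = categories * imagesPerCategory;
console.log(totalImages); // 17250000
```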
Some examples from the Quick, Draw! dataset:
The model was loaded and queried using the ml5.js library, which provides a high-level coding interface so that users can easily deploy and use the models hosted by ml5. Built mostly with the p5.js 1.4.1 library.
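To illustrate how little code this takes, here is a minimal p5.js sketch for loading DoodleNet and classifying the canvas; it is a hedged sketch, not the exact code of this project, and it assumes ml5.js and p5.js are already included via `<script>` tags in the page (it will only run in a browser):

```javascript
// Minimal sketch: draw on a canvas, press the space bar to classify.
// Assumes ml5.js and p5.js are loaded globally in the page.
let classifier;
let canvas;

function preload() {
  // Load the pretrained DoodleNet model hosted by ml5
  classifier = ml5.imageClassifier('DoodleNet');
}

function setup() {
  canvas = createCanvas(280, 280);
  background(255);
}

function draw() {
  // Draw black strokes while the mouse is pressed
  if (mouseIsPressed) {
    stroke(0);
    strokeWeight(8);
    line(pmouseX, pmouseY, mouseX, mouseY);
  }
}

function keyPressed() {
  // Press space to classify the current drawing
  if (key === ' ') {
    classifier.classify(canvas, gotResults);
  }
}

function gotResults(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // results is an array of {label, confidence}, most confident first
  console.log(results[0].label, results[0].confidence);
}
```

The callback receives the predictions already sorted by confidence, which is why reading `results[0]` is enough to display the top guess.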
The model predicted all 10 test images correctly with high probability scores. These images were collected from different sources.