YesNoDetect

Not Maintained

YesNoDetect is a shiny application for the prediction of handwritten ✅ or ❌ boxes.

It is written using the golem framework for production-grade and robust shiny apps. The shiny app is embedded inside the structure of an R package, which allows not only a concise and well-structured workflow but also the integration of documentation (roxygen2) and tests (testthat). The prediction model is a convolutional neural network (CNN) implemented via the Keras API of TensorFlow.

My main objective in writing this code was to dive into the following packages/frameworks:

  • golem: framework for building production-grade shiny applications
  • shiny modules: modularizing shiny apps
  • tensorflow/keras: modeling deep neural networks

The following packages were also new to me:

  • fullPage: fullPage.js framework for shiny
  • waiter: waiting screens for shiny
  • emo: emojis 😜
  • googlesheets4: using Google Sheets (as a database)

Installation

Please note that the repository is no longer maintained, so the code might be deprecated due to newer, non-backward-compatible versions of the dependencies used. - I promise future projects will make use of {renv} 💡

You can install the current version of YesNoDetect with the following line of code:

# install.packages("devtools")
devtools::install_github("EmanuelSommer/YesNoDetect")

You can run the app locally by calling the following:

YesNoDetect::run_app()

Note: For local usage, the functions in fct_db_handling.R must of course be adapted to use your own permissions file for googlesheets, which also has to be set up first.
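A minimal sketch of what such an adaptation could look like, assuming a service-account key file (the path and sheet ID below are placeholders, not the ones used by the app):

```r
library(googlesheets4)

# Authenticate with your own service-account key (placeholder path).
gs4_auth(path = "path/to/your-service-account-key.json")

# Placeholder ID of your own Google Sheet acting as the database.
sheet_id <- "your-google-sheet-id"

# Read the stored boxes; new drawings would be appended as rows,
# e.g. via sheet_append(sheet_id, new_box_row).
boxes <- read_sheet(sheet_id)
```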

Structure of the app

  • Database: Explore and amend the database which is the foundation of the modeling process.
  • Model: The current model architecture, stats and evaluation metrics.
  • Predict: Challenge the model with your handwritten boxes!

Database

To model the hand-drawn boxes I obviously needed training data, and this section was set up to collect it. As the database I chose a googlesheets file, implemented functions to read and write it, and supplied the needed permissions. The boxes are saved as 28x28 black-and-white pictures (as flattened 28x28 matrices).
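The flattening described above can be sketched in a few lines of base R (the stroke below is hypothetical; the app stores one such flattened vector per row of the sheet):

```r
# A blank 28x28 black-and-white canvas (0 = white, 1 = black).
box <- matrix(0L, nrow = 28, ncol = 28)
box[10:18, 14] <- 1L            # a hypothetical vertical stroke

# Flatten to a length-784 vector, as stored in the database ...
flat <- as.vector(box)

# ... and restore the original 28x28 picture from it.
restored <- matrix(flat, nrow = 28, ncol = 28)
stopifnot(length(flat) == 784, identical(restored, box))
```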

First slide

The first of the two slides of this section gives a short overview of the database:

  • absolute number of boxes (yes, no, total)
  • plot displaying the proportion of yes vs no boxes
  • plots of the average yes and no box

Database overview section

Second slide

The second of the two slides of this section allows the user to amend the database:

  • draw the box
  • label it
  • save it

Amend database section

Model

In this section one can find the most important information about the currently used model. After reading several papers to get a good grasp of some state-of-the-art CNN architectures for image recognition, I implemented 3 CNNs in the Rmarkdown document in the modeling folder. These were trained and tested on independent test sets, and the best model was then chosen and saved for use in the app. For a closer look at the modeling process, just read through the Rmarkdown document. It also contains code to visually inspect wrong predictions and more.
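The actual architectures live in the modeling Rmarkdown; as an illustration only, a minimal CNN for 28x28 single-channel inputs in the R keras API might look like this (layer sizes here are assumptions, not the app's chosen model):

```r
library(keras)

# Illustrative CNN for 28x28 black-and-white boxes, two classes (yes/no).
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 16, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(28, 28, 1)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 32, activation = "relu") %>%
  layer_dense(units = 2, activation = "softmax")  # P(yes), P(no)

model %>% compile(
  optimizer = "adam",
  loss = "categorical_crossentropy",
  metrics = "accuracy"
)
```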

The first slide of this section contains a visualization of the architecture of the current model.

Model architecture

The second slide of this section shows essential core and evaluation stats of the current model.

Model stats

Predict!

In this last section one can challenge the current model to predict one's handwritten ✅ or ❌ boxes and receives a probability for each class.
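A sketch of how such a per-class probability could be obtained from a stored drawing, assuming `model` is the saved CNN and `flat_box` a flattened length-784 drawing vector (both names are hypothetical):

```r
library(keras)

# Reshape the flattened drawing into the 4D tensor the CNN expects:
# (batch, height, width, channels).
x <- array_reshape(flat_box, c(1, 28, 28, 1))

# One probability per class, e.g. columns for yes and no.
probs <- model %>% predict(x)
```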

Predict!

Known issues

Drawing not very intuitive

The drawing is not done by clicking, holding and releasing the mouse but by clicking, hovering and clicking again, which is not the intuitive way. As the drawing was implemented via an interactive plot using the hover and click parameters, I could not find a way to switch to the intuitive way of drawing, at least not within a reasonable search period. Moreover, I did not find any other solution (without interactive plots) for hand-drawing in shiny apps that efficiently saves the drawing numerically. I guess some JavaScript via the shinyjs package could do the trick.
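The click-and-hover mechanism described above can be sketched with shiny's plot interaction arguments (the IDs and the minimal app below are illustrative, not the app's actual code):

```r
library(shiny)

# A plot that reports click and hover coordinates -- the mechanism
# the drawing canvas is built on.
ui <- fluidPage(
  plotOutput(
    "canvas",
    click = "canvas_click",                        # start/stop a stroke
    hover = hoverOpts("canvas_hover", delay = 50)  # trace the stroke
  )
)

server <- function(input, output, session) {
  points <- reactiveVal(data.frame(x = numeric(), y = numeric()))

  # Each hover event appends the cursor position to the stroke.
  observeEvent(input$canvas_hover, {
    p <- input$canvas_hover
    points(rbind(points(), data.frame(x = p$x, y = p$y)))
  })

  output$canvas <- renderPlot({
    plot(points(), xlim = c(0, 1), ylim = c(0, 1), pch = 16)
  })
}

# shinyApp(ui, server)
```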

Not touch friendly

The app is not touch friendly. This applies not only to the drawing of the boxes: the whole fullPage UI does not seem to work well on mobile at all. To deal with this, one could switch to shinyMobile, but I will try that out another time.

If you have any suggestions for solving the known issues, or you find new ones, I would really like to hear from you!


👋 Have fun! 👋
