Do Yoga Poses Better!
Visit: https://react-yoga.netlify.app
or
https://www.reactyoga.ml
The goal of this project is to classify the yoga poses performed by the user using machine learning, so that the user can check whether they are performing each pose correctly.
- Select one of the available poses (asanas) and get a score for performing that pose correctly
- Beat your best score
- Become proficient in performing asanas
- Toggle between Light and Dark modes
This web app is developed using React and TensorFlowJS.
- React - To build the user interface
- TensorFlowJS - To classify the yoga poses using PoseNet and custom Neural Network model
- React-Router - The standard routing library for React
- React-Webcam - Webcam component for React
- React-CSV - To generate the CSV file of the dataset
- Papa Parse - To parse the CSV files in browser
The live version of this web app is deployed at: https://react-yoga.netlify.app
To set up locally, follow these simple steps:
- Clone the repo
  ```sh
  git clone https://github.com/shakib1729/react-yoga.git
  ```
- Install NPM packages
  ```sh
  npm install
  ```
- Run the project
  ```sh
  npm start
  ```
- PoseNet model of TensorFlowJS was used to collect the dataset.
- The PoseNet model takes an image of a body as input and outputs the (x, y) coordinates of 17 keypoints, along with their names (nose, leftEye, rightEye, etc.) and confidence scores. Each image therefore yields 17 * 2 = 34 values, which are the features of a single data point.
- In this way, around 300 data points for each pose were collected using the live webcam feed.
- Two CSV files (X and Y) containing the data points along with their labels were generated using react-csv.
- These CSV files are the dataset for the neural network.
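The keypoint-to-feature conversion described above can be sketched in plain JavaScript. The keypoint shape mirrors PoseNet's output; the dummy data is purely illustrative (real keypoints come from PoseNet's `estimateSinglePose`):

```javascript
// Flatten one PoseNet result into the 34-value feature row described
// above: 17 keypoints x 2 coordinates = 34 features.
function toFeatureRow(keypoints) {
  if (keypoints.length !== 17) throw new Error('expected 17 keypoints');
  return keypoints.flatMap((kp) => [kp.position.x, kp.position.y]);
}

// One CSV line for the dataset is then just the features joined by commas
// (the actual project writes the files with react-csv).
function toCsvLine(keypoints) {
  return toFeatureRow(keypoints).join(',');
}

// Illustrative dummy keypoints standing in for a real PoseNet result:
const dummy = Array.from({ length: 17 }, (_, i) => ({
  part: 'keypoint' + i, // placeholder part names
  score: 0.9,
  position: { x: i, y: i + 0.5 },
}));

console.log(toFeatureRow(dummy).length); // 34
```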
- A neural network was created using TensorFlowJS to perform the classification. It had two dense layers: the first with 10 units and 'relu' activation, and the second (the output layer) with 3 units and 'softmax' activation.
- The dataset files 'X' and 'Y' created in the previous step were parsed using Papa Parse, shuffled, and split into 85% training data and 15% testing data.
- The following parameters were set:
  - Learning rate: 0.01
  - Number of epochs: 40
  - Optimizer: Adam
  - Loss: Categorical cross-entropy
  - Metrics: Accuracy
- The model was trained in the browser itself, and an accuracy of 99.12% was achieved.
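The shuffle-and-split step above can be sketched in plain JavaScript, here as a Fisher-Yates shuffle followed by an 85/15 cut (a sketch only; the actual project first parses the X and Y CSV files with Papa Parse):

```javascript
// Shuffle data points, then split into 85% training / 15% testing.
function shuffleSplit(rows, trainFraction = 0.85) {
  const shuffled = rows.slice();
  // Fisher-Yates shuffle
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const cut = Math.floor(shuffled.length * trainFraction);
  return { train: shuffled.slice(0, cut), test: shuffled.slice(cut) };
}

// ~300 data points per pose were collected, so with 300 rows:
const rows = Array.from({ length: 300 }, (_, i) => ({ id: i }));
const { train, test } = shuffleSplit(rows);
console.log(train.length, test.length); // 255 45
```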
- The neural network model created in the last step was used to make predictions on live webcam feed.
- Input from the webcam was fed to the PoseNet model, which generated a data point of 34 features.
- This data point was used as input to the neural network model, and the prediction corresponding to that input image was obtained.
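The last step, turning the network's softmax output into a pose name, can be sketched as follows. The label names here are placeholders, not the actual asana names used in the app:

```javascript
// The output layer has 3 softmax units, one per pose.
const LABELS = ['pose-0', 'pose-1', 'pose-2']; // placeholder names

// Pick the label with the highest softmax probability (argmax).
function predictedLabel(probs) {
  let best = 0;
  for (let i = 1; i < probs.length; i++) {
    if (probs[i] > probs[best]) best = i;
  }
  return { label: LABELS[best], confidence: probs[best] };
}

console.log(predictedLabel([0.05, 0.9, 0.05])); // { label: 'pose-1', confidence: 0.9 }
```

The confidence value can double as the per-frame score shown to the user while they hold the pose.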
- The layout of the web app was created using CSS Grid
- The user interface was created in React
- Routing was added to navigate to the About page
- An option to toggle between light and dark mode was added
- Stored theme preference and best scores in localStorage
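The best-score persistence can be sketched like this; `storage` stands in for `window.localStorage` so the snippet also runs outside a browser, and the key names are illustrative, not the app's actual keys:

```javascript
// Keep the highest score per pose; localStorage stores strings only.
function updateBestScore(storage, pose, score) {
  const key = 'best-score-' + pose;
  const prev = Number(storage.getItem(key)) || 0;
  const best = Math.max(prev, score);
  storage.setItem(key, String(best));
  return best;
}

// Minimal in-memory stand-in for window.localStorage:
const mem = new Map();
const storage = {
  getItem: (k) => (mem.has(k) ? mem.get(k) : null),
  setItem: (k, v) => mem.set(k, String(v)),
};

console.log(updateBestScore(storage, 'tree', 42)); // 42
console.log(updateBestScore(storage, 'tree', 30)); // still 42
```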
- The images of poses are taken from YogaPedia
- The following YouTube channels helped a lot in making this project possible: