- About the project
- Project structure
- Functionalities that are implemented
- Usage
- What's next
- Supporting developers
This repository holds a library of the most common Machine Learning algorithms, implemented manually without the help of sklearn or other Machine Learning libraries.
The only libraries used are:
- built-in libraries that belong to Python
- numpy
- scipy - only used for the dendrogram function in scipy.cluster.hierarchy
- plotly
- tqdm - for showing the progress of training
- autograd - for derivation of functions
- jupyterlab (as development environment and to call all other scripts)
Those can be installed via:
pip install numpy scipy plotly tqdm autograd jupyterlab
The project is structured as follows:
- assets/ - containing all graphics that are shown here
- library/ - containing the respective scripts for all subtasks:
  - classic_learning/ - contains all classic Machine Learning algorithms
  - deep_learning/ - contains Deep Learning using Neural Networks
  - reinforcement_learning/ - contains Reinforcement Learning algorithms
  - utils/ - contains different kinds of helper functions and classes
- Test.ipynb - a Jupyter Notebook containing a test implementation of all provided functionality
classic_learning
- Clustering (see: Clustering by sklearn)
- Dimension Reduction Algorithms (LDA, PCA and ICA) (see: Dimension Reduction by sklearn)
- Gaussian Mixture Models with Expectation Maximization Algorithm (see: GMM with EM by sklearn)
- Gaussian Processes Regression (see: GP by sklearn)
- Linear Regression (with single- and multi-dimensional data support) (see: Linear Regression by sklearn)
deep_learning
- Deep Learning using Neural Networks [containing Convolution-, Pooling-, Dense-, Flatten-, Dropout- and ReLU-Layer] (see: Deep Learning by Tensorflow)
reinforcement_learning
- see in general: Reinforcement Learning Tutorial by PythonProgramming
- Action Value Iteration for Reinforcement Learning (see: Action Value Iteration by Denny Britz & Co)
- Q-Learning for Reinforcement Learning (see: Q-Learning by Denny Britz & Co)
- Genetic Algorithm for DataSet manipulation (see: Genetic algorithm by sklearn)
- Hidden Markov Models (see: HMM by sklearn)
utils
- Data Preprocessing for DataSet manipulation (containing a MinMaxScaler and a train_test_split()-function) (see: Preprocessing by sklearn)
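To illustrate the behaviour of the two preprocessing utilities named above, here is a minimal sketch in plain numpy. This is a hypothetical re-implementation for illustration, not the library's actual code; the class and function names merely mirror the ones mentioned in the list.

```python
import numpy as np

class MinMaxScaler:
    """Scales each feature column to the [0, 1] range.
    Illustrative sketch only, not the library's implementation."""

    def fit(self, X):
        self.min_ = X.min(axis=0)
        self.range_ = X.max(axis=0) - self.min_
        return self

    def transform(self, X):
        return (X - self.min_) / self.range_

def train_test_split(X, y, test_size=0.25, seed=None):
    """Randomly splits samples into a train and a test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_size)
    test, train = idx[:n_test], idx[n_test:]
    return X[train], X[test], y[train], y[test]

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
y = np.array([0, 0, 1, 1])
X_scaled = MinMaxScaler().fit(X).transform(X)   # each column now spans [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, test_size=0.25, seed=0)
```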
In general, all classes and functions can be used exactly like those implemented in sklearn, with a train(), a predict() and - where possible - a score() function.
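As a sketch of that train()/predict()/score() convention, here is a toy least-squares regressor. The class is a hypothetical example written for this README, not one of the library's own classes; only the method names follow the convention described above.

```python
import numpy as np

class ToyLinearRegression:
    """Illustrates the train()/predict()/score() convention.
    A hypothetical example, not the library's own class."""

    def train(self, X, y):
        # closed-form least squares with a bias column
        Xb = np.c_[np.ones(len(X)), X]
        self.w = np.linalg.lstsq(Xb, y, rcond=None)[0]
        return self

    def predict(self, X):
        return np.c_[np.ones(len(X)), X] @ self.w

    def score(self, X, y):
        # R^2 score, as sklearn regressors report it
        res = np.sum((y - self.predict(X)) ** 2)
        tot = np.sum((y - y.mean()) ** 2)
        return 1.0 - res / tot

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])          # exactly y = 2x + 1
model = ToyLinearRegression().train(X, y)
```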
Algorithms that work exactly as described above, or as in their respective sklearn documentation:
- Linear Regression -> Regressor
- Clustering -> Classifier
- Dimension Reduction - the train(), predict() and score() functions are called fit(), fit_transform() and transform() respectively
- Gaussian Mixture Models with Expectation Maximization Algorithm -> Classifier
- Gaussian Processes -> Regressor
- Deep Learning using Neural Networks - has its own score() function, which is the loss() function used during training. Usage works the same way as the Tensorflow implementation
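For the Dimension Reduction classes, the fit()/fit_transform()/transform() naming follows sklearn's transformer API. A minimal PCA sketch shows what each of the three methods does; this is an illustration written for this README, not the library's actual PCA class.

```python
import numpy as np

class ToyPCA:
    """Sketch of the sklearn-style transformer API (fit / fit_transform /
    transform) described above -- not the library's actual PCA."""

    def __init__(self, n_components):
        self.n_components = n_components

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        # principal axes = top right-singular vectors of the centred data
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_components]
        return self

    def transform(self, X):
        # project (centred) data onto the learned axes
        return (X - self.mean_) @ self.components_.T

    def fit_transform(self, X):
        return self.fit(X).transform(X)

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
Z = ToyPCA(n_components=1).fit_transform(X)   # shape (4, 1)
```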
Algorithms that work differently:
- Reinforcement Learning --> since there is no prediction in the RL workflow, no predict() function is implemented. Furthermore, there is no train() function (yet), since the user has to decide whether to use Q-Learning or Action-Value-Iteration.
- Hidden Markov Models --> since they need a sequence for training, as well as initial states and observations, the class is used slightly differently from the sklearn-typical workflow. You have to provide a sequence to all of the implemented algorithms; further instructions can be found at the top of the class description in the hmm.py script.
- Genetic Algorithm for DataSet manipulation --> since there is no prediction in the GA workflow, no predict() function is implemented. The train() function returns the best 'sub-dataset' found (either in the original dataset or mutated from it).
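To illustrate one of the two options the Reinforcement Learning module leaves to the user, here is a minimal tabular Q-Learning loop on a toy chain environment. The environment, hyperparameters and variable names are invented for this sketch; it is not the library's implementation.

```python
import numpy as np

# Toy 5-state chain: action 0 = left, action 1 = right; reward 1 only
# when the right end (state 4) is reached. Illustrative sketch only.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward, nxt == n_states - 1     # done at the right end

for _ in range(300):                            # episodes
    state = 0
    for _ in range(100):                        # step cap per episode
        # epsilon-greedy selection; break ties between equal Q-values randomly
        if rng.random() < epsilon or Q[state, 0] == Q[state, 1]:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        nxt, reward, done = step(state, action)
        # Q-Learning update rule
        Q[state, action] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state, action])
        state = nxt
        if done:
            break

policy = np.argmax(Q, axis=1)                   # greedy policy after training
```

After training, the greedy policy moves right from every non-terminal state, since all reward lies at the right end of the chain.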
- working on the Deep Learning module
- go through all the algorithms and add further variety