
Well met! 👋

Welcome to my GitHub! I'm currently a consultant at Netlight within Data Science & Analytics. I have a master's degree in Financial Mathematics and a bachelor's degree in Applied Mathematics and Industrial Engineering and Management, both from KTH.

Although my degrees are in mathematics, my biggest interest is Data Analytics and Data Science, and Machine Learning in particular. This is reflected in the repos you can find under my handle. Continue reading for a selection of the projects I'm most proud of!

Languages and Tools:

Python SQL R Matlab PowerBI Qlik Excel VBA Visual Studio Code SAS Data Grip Google Sheets



📈 Reinforcement Learning for Market Making 📉

Readme Card

In collaboration with Skandinaviska Enskilda Banken (SEB), a classmate and I wrote our thesis on Reinforcement Learning for Market Making. Market making is the process of quoting buy and sell prices in a financial asset in order to provide liquidity and earn a profit on the spread. Setting these prices "correctly" is essential to drive volume, minimize risk and earn a profit – something that historically has been done using analytical methods. However, deriving optimal market making strategies analytically is only possible under limiting and naïve assumptions about how markets work. There is thus an argument for using reinforcement learning, which does not depend on any of these assumptions, to find better strategies.

Using two ways of modelling the market, we were able to find market making strategies with reinforcement learning. In the first model, for which analytical strategies can be derived, tabular Q-learning found strategies that matched the analytically optimal ones in performance. In the second, significantly more sophisticated model, we compared tabular Q-learning with Double Deep Q-Networks (DDQN) and found that the latter was more suitable for this problem.
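As a rough sketch (not the thesis code), the tabular Q-learning update at the heart of the first approach looks like this; the state and action names below are invented for the example:

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    # Standard tabular update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Toy market-making flavour: inventory states and spread-quoting actions
# (hypothetical names, not the thesis state space)
Q = defaultdict(float)
actions = ["widen_spread", "narrow_spread"]
q_update(Q, "flat_inventory", "narrow_spread", 1.0, "long_inventory", actions)
print(Q[("flat_inventory", "narrow_spread")])  # 0.1 – the estimate moves toward the reward
```

DDQN replaces the table with a neural network and uses a separate target network when computing `best_next`, which is what made it scale to the more sophisticated model.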

For more about our results, have a look at our thesis.

Below follows an illustration of a limit order book (LOB), a central concept of market making.

An illustration of a limit order book
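For readers unfamiliar with the concept, a minimal sketch of a LOB (my own simplification, not the order book model from the thesis) maps each price level to the volume resting there:

```python
class LimitOrderBook:
    """Minimal LOB sketch: each side maps price level -> total resting volume."""

    def __init__(self):
        self.bids = {}  # buy orders
        self.asks = {}  # sell orders

    def add_order(self, side, price, volume):
        book = self.bids if side == "bid" else self.asks
        book[price] = book.get(price, 0) + volume

    def best_bid(self):
        return max(self.bids) if self.bids else None

    def best_ask(self):
        return min(self.asks) if self.asks else None

    def spread(self):
        # A market maker quoting at the best levels earns at most this per round trip
        return self.best_ask() - self.best_bid()

lob = LimitOrderBook()
lob.add_order("bid", 99.0, 10)
lob.add_order("ask", 101.0, 5)
print(lob.spread())  # 2.0
```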


✍️Handwritten Mathematical Expressions to LaTeX-code✍️

Readme Card

As part of the final project in a Deep Learning course at KTH, three classmates and I got the idea of building a model that translates handwritten mathematical expressions directly to LaTeX code. This could save us a lot of time, since manually entering equations into LaTeX is a very tedious task.

Looking into previous research, we found that an Encoder-Decoder model consisting of a convolutional neural network (CNN) and a long short-term memory (LSTM) network would be most promising for our task. We thus constructed an Encoder consisting of a CNN with batch normalization and max-pooling, and a Decoder consisting of an LSTM with a soft attention mechanism. For better performance, beam search was used during prediction.
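Beam search itself is model-agnostic: at each step it keeps only the few highest-scoring partial sequences instead of greedily committing to one token. A sketch of the decoding loop, with a toy next-token distribution standing in for the LSTM (all token names are invented for the example):

```python
import math

def beam_search(step_fn, start, beam_width=3, max_len=10, end="<eos>"):
    # step_fn(seq) -> list of (token, probability) continuations;
    # keep the beam_width partial sequences with highest log-probability.
    beams = [([start], 0.0)]
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, p in step_fn(seq):
                candidates.append((seq + [tok], score + math.log(p)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_width]:
            (finished if seq[-1] == end else beams).append((seq, score))
        if not beams:
            break
    finished.extend(beams)
    return max(finished, key=lambda c: c[1])[0]

# Toy "language model": next-token distribution given only the last token.
# Greedy decoding would stop at "y <eos>" (0.4); beam search finds "x ^ 2" (0.42).
probs = {
    "<s>": [("x", 0.6), ("y", 0.4)],
    "x": [("^", 0.7), ("<eos>", 0.3)],
    "y": [("<eos>", 1.0)],
    "^": [("2", 1.0)],
    "2": [("<eos>", 1.0)],
}
best = beam_search(lambda seq: probs[seq[-1]], "<s>")
print(best)  # ['<s>', 'x', '^', '2', '<eos>']
```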

While the results weren't very promising for longer expressions, the model performed well on some expressions I wrote myself. Here are some examples!

Results A

Results B


📦Blackbox Feasibility Prediction📦

Readme Card

Working together with a fintech firm and three classmates, we looked into the possibility of using Machine Learning to speed up the firm's operations. The team we worked with had one main task: solving a constrained non-linear optimization problem using an evolutionary optimization algorithm called CMA-ES. There was, however, one problem: deciding on the feasibility of the solutions the algorithm suggested was computationally heavy. Our task was thus to see if Machine Learning could be used to filter out infeasible solutions. We took an explorative approach, testing a wide range of Machine Learning algorithms, supervised as well as unsupervised. Unfortunately, no method yielded useful results. We think this is mainly because the evolutionary algorithm advances towards the optimum in iterative steps through a large feature space (~4000 dimensions), which forces the classifiers to extrapolate beyond the regions they were trained on.

Below follows a gif of how the CMA-ES moves during its first 100 iterations, projected down to 3 dimensions using PCA.

PCAgif
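The projection behind such a visualization is straightforward: center the iterates and project them onto the top principal components. A sketch using NumPy's SVD on random stand-in data (not the firm's actual iterates):

```python
import numpy as np

def pca_project(X, k=3):
    # Center the data, then project onto the top-k right singular vectors
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 200))  # stand-in for 100 high-dimensional CMA-ES iterates
Z = pca_project(X, k=3)
print(Z.shape)  # (100, 3) – ready for a 3D scatter plot
```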


Wait, there's more!

While the projects above are my favourites, I still have more to show. Take a look at the following list if you want to learn more (some are unfortunately in Swedish):


KodAgge's GitHub stats

Pinned repositories:

  1. **Advent-of-Code-2022** – Solutions to Advent of Code 2022 with some code golf (Python)
  2. **Javigsv/LDA_AdML** – LDA project (Large VI) for DD2434 Machine Learning Advanced Course (Python)
  3. **StatisticalMachineLearning** – A collection of projects in statistical machine learning (Jupyter Notebook)
  4. **Wumpus** – An implementation of the classic "Hunt the Wumpus" game in pygame (Python)
  5. **WaterRocketSimulation** – Simulation of a water rocket's flight path and the maximal height reached (MATLAB)
  6. **RNNs-in-matlab** – A simple recurrent neural network (RNN) implemented from scratch in MATLAB (MATLAB)