numpytorch

Simple neural network implementation in numpy with a PyTorch-like API

Originally made for an assignment for the course "Machine Intelligence" at PES University. Although the commits are recent, most of the code was written during the course (Oct/Nov 2020) and moved from a different repo.

What can you do with it?

This is not meant for serious workloads, but it works well as a learning tool. For example, here are some things you could learn from the code:

  • Modular implementation of neural networks - each layer is a module with its own trainable parameters. Refer to nn.py (a minimal sketch of this pattern follows this list).
  • Usage of Einstein summation operations in numpy (and in general). Here's a nice reference for Einstein summation. A small einsum example also follows this list.
  • Type annotations in Python - the codebase is almost completely type-annotated. This makes the code a little easier to maintain and significantly improves the editing experience for users of the library. Although mypy does report a few errors, most of the type annotations are correct (PRs to fix this are welcome).
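
To make the "each layer is a module" idea concrete, here is a minimal sketch of the pattern in plain numpy. The names (`Layer`, `Linear`, `forward`, `backward`) are illustrative assumptions and not necessarily the exact interface used in nn.py:

```python
import numpy as np

class Layer:
    """Hypothetical base class: a layer owns its parameters and
    implements both a forward pass and a hand-coded backward pass."""

    def forward(self, x: np.ndarray) -> np.ndarray:
        raise NotImplementedError

    def backward(self, grad_out: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class Linear(Layer):
    def __init__(self, in_features: int, out_features: int) -> None:
        # Small random weights and a zero bias (illustrative initialization).
        self.w = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, x: np.ndarray) -> np.ndarray:
        self.x = x                      # cache the input for the backward pass
        return x @ self.w + self.b

    def backward(self, grad_out: np.ndarray) -> np.ndarray:
        # Hand-derived gradients w.r.t. the parameters and the input.
        self.grad_w = self.x.T @ grad_out
        self.grad_b = grad_out.sum(axis=0)
        return grad_out @ self.w.T      # gradient w.r.t. the layer's input
```

A container can then stack such layers, calling `forward` in order and `backward` in reverse, which is essentially what a PyTorch-style `Sequential` does.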
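
For the Einstein-summation point, here is a small standalone example (not taken from the repo) showing how `np.einsum` expresses a batched linear transform and how it matches the equivalent broadcast matmul:

```python
import numpy as np

x = np.random.randn(32, 8, 16)   # (batch, tokens, in_features)
w = np.random.randn(16, 4)       # (in_features, out_features)

# "bti,io->bto": multiply along the shared index i and sum it out.
y_einsum = np.einsum("bti,io->bto", x, w)
y_matmul = x @ w                 # same result via broadcasting matmul

assert np.allclose(y_einsum, y_matmul)
print(y_einsum.shape)            # (32, 8, 4)
```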

Some possible future plans

I don't plan to develop this further, but if you want to learn, you can try implementing the following (in your own fork, or send a PR!):

  • More activation functions. numpytorch/activations.py has a limited set of activation functions; there are many more you could add (see the sketch after this list for the function-plus-derivative pattern).
  • More loss functions. numpytorch/losses.py has only one loss function (binary cross-entropy).
  • More optimisers. numpytorch/optim.py has only one optimiser (Stochastic Gradient Descent, SGD) with support for L2 regularization and momentum. The Adam optimiser would be a nice addition; a minimal sketch follows this list.
  • Automatic differentiation. Currently, backward passes (derivatives) have to be hand-coded into all the activation functions, layers, etc. Integrating some kind of automatic differentiation library (like autograd or autodidact) would make this a lot less painful to customize. You could also try writing your own automatic differentiation library, which would be a fun project! (ref) A toy sketch of the core idea follows this list.
  • Other fancy layers like convolution, recurrent cells, etc.
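
Because there is no automatic differentiation yet, every new activation needs its derivative hand-coded alongside it. A hypothetical leaky ReLU (not necessarily matching how activations.py structures things) would look like:

```python
import numpy as np

def leaky_relu(x: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    return np.where(x > 0, x, alpha * x)

def leaky_relu_grad(x: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    # Derivative of leaky ReLU w.r.t. its pre-activation input.
    return np.where(x > 0, 1.0, alpha)
```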
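
A minimal Adam update in numpy might look like the sketch below. The way parameters and gradients are passed in is an assumption for illustration; numpytorch's existing SGD optimiser may expose them differently:

```python
import numpy as np

class Adam:
    """Hypothetical Adam optimiser over a flat list of numpy parameter arrays."""

    def __init__(self, lr: float = 1e-3, beta1: float = 0.9,
                 beta2: float = 0.999, eps: float = 1e-8) -> None:
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m: dict = {}   # first-moment estimates, keyed by parameter index
        self.v: dict = {}   # second-moment estimates
        self.t = 0          # step counter, used for bias correction

    def step(self, params: list, grads: list) -> None:
        self.t += 1
        for i, (p, g) in enumerate(zip(params, grads)):
            m = self.beta1 * self.m.get(i, np.zeros_like(p)) + (1 - self.beta1) * g
            v = self.beta2 * self.v.get(i, np.zeros_like(p)) + (1 - self.beta2) * g * g
            self.m[i], self.v[i] = m, v
            # Bias-corrected moment estimates.
            m_hat = m / (1 - self.beta1 ** self.t)
            v_hat = v / (1 - self.beta2 ** self.t)
            p -= self.lr * m_hat / (np.sqrt(v_hat) + self.eps)   # in-place update
```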
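
For automatic differentiation, the core idea of reverse mode fits in a short toy example: each operation records a closure that propagates gradients to its inputs, and backward() replays those closures in reverse topological order. This scalar sketch (in the spirit of autodidact/micrograd) is not from the repo:

```python
class Value:
    """Toy scalar reverse-mode autodiff node."""

    def __init__(self, data: float, parents: tuple = ()) -> None:
        self.data = data
        self.grad = 0.0
        self.parents = parents
        self._backward = lambda: None   # set by the op that created this node

    def __add__(self, other: "Value") -> "Value":
        out = Value(self.data + other.data, (self, other))
        def _backward() -> None:
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other: "Value") -> "Value":
        out = Value(self.data * other.data, (self, other))
        def _backward() -> None:
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self) -> None:
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v: "Value") -> None:
            if v not in seen:
                seen.add(v)
                for p in v.parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# d(x*y + x)/dx = y + 1 = 4, d(x*y + x)/dy = x = 2
x, y = Value(2.0), Value(3.0)
z = x * y + x
z.backward()
print(x.grad, y.grad)   # 4.0 2.0
```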

Acknowledgements

Thanks to team members Aayush and Bhargav for their help.

License

numpytorch is MIT Licensed
