vanilla, simple, node-oriented, compositional, optimized, frameworkn't{torchn't, TFn't, candlen't}

Banyc/neural_network


Neural Network

MNIST

  • steps:
    1. Download the MNIST dataset to:
      • TRAIN_IMAGE: local/mnist/train-images.idx3-ubyte
      • TRAIN_LABEL: local/mnist/train-labels.idx1-ubyte
      • TEST_IMAGE: local/mnist/t10k-images.idx3-ubyte
      • TEST_LABEL: local/mnist/t10k-labels.idx1-ubyte
    2. Run:
      cargo test --release -- --include-ignored --nocapture mnist::train
    3. Inspect the trained parameters at: local/mnist/params.ron
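The four MNIST files above are stored in the IDX format: a big-endian header (magic number whose low byte gives the dimension count, followed by one u32 per dimension), then the raw pixel or label bytes. A minimal sketch of parsing that header — the function names are illustrative, not this repository's API:

```rust
// Parse the IDX header used by the MNIST files (e.g. magic 0x00000803
// for image files with 3 dimensions, 0x00000801 for label files).

fn read_u32_be(bytes: &[u8], at: usize) -> u32 {
    u32::from_be_bytes(bytes[at..at + 4].try_into().unwrap())
}

/// Returns (dimension sizes, byte offset where the data section starts).
fn parse_idx_header(bytes: &[u8]) -> (Vec<u32>, usize) {
    let magic = read_u32_be(bytes, 0);
    let ndims = (magic & 0xFF) as usize; // low byte encodes the dimension count
    let dims = (0..ndims).map(|i| read_u32_be(bytes, 4 + 4 * i)).collect();
    (dims, 4 + 4 * ndims)
}

fn main() {
    // Synthetic header of a train-images file: 60000 images of 28x28 pixels.
    let mut header = vec![0x00, 0x00, 0x08, 0x03];
    for dim in [60000u32, 28, 28] {
        header.extend_from_slice(&dim.to_be_bytes());
    }
    let (dims, data_offset) = parse_idx_header(&header);
    assert_eq!(dims, vec![60000, 28, 28]);
    assert_eq!(data_offset, 16);
    println!("dims = {:?}, data starts at byte {}", dims, data_offset);
}
```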

Backpropagation

  • distribution of the addends of $\frac{\partial G}{\partial f_1}$:
    • the nodes below form a part of the computation graph
    • $h_i : \mathbb{R}^{m_i} \to \mathbb{R}$
    • $f_j : \mathbb{R}^{n_j} \to \mathbb{R}$
    • the $h_i$ are the successors of $f_j$
    • $G$ is the outermost function, represented by the root node of the computation graph
    • $w$ denotes the tunable parameters of $f_1$
    • by the chain rule, $\frac{\partial G}{\partial f_1} = \sum_i \frac{\partial G}{\partial h_i} \frac{\partial h_i}{\partial f_1}$; each term of this sum is an addend
    • steps:
      1. nodes $h_1, h_2$ each calculate their addend $\frac{\partial G}{\partial h_i} \frac{\partial h_i}{\partial f_j}$
      2. nodes $h_1, h_2$ distribute the addends to $f_1, f_2$
      3. node $f_1$ sums the received addends to obtain $\frac{\partial G}{\partial f_1}$
      4. node $f_1$ calculates $\frac{\partial G}{\partial w} = \frac{\partial G}{\partial f_1} \frac{\partial f_1}{\partial w}$
      5. node $f_1$ updates $w$ using $\frac{\partial G}{\partial w}$
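The five steps above can be walked through numerically on a toy graph. This is my own example, not code from this repository: take $f_1(x) = w x$ with two successors $h_1(u) = u^2$ and $h_2(u) = 3u$, and $G = h_1(f_1) + h_2(f_1)$, so $\frac{\partial G}{\partial h_1} = \frac{\partial G}{\partial h_2} = 1$.

```rust
// Forward pass and the addend-distribution steps for
// f1(x) = w * x,  h1(u) = u^2,  h2(u) = 3u,  G = h1(f1) + h2(f1).

fn main() {
    let x = 3.0_f64;
    let mut w = 2.0_f64;
    let f1 = w * x; // forward pass: f1 = 6.0

    // Step 1: each successor h_i computes its addend dG/dh_i * dh_i/df1.
    let addend_from_h1 = 1.0 * (2.0 * f1); // h1'(u) = 2u -> 12.0
    let addend_from_h2 = 1.0 * 3.0;        // h2'(u) = 3  -> 3.0

    // Steps 2-3: f1 receives the addends and sums them into dG/df1.
    let dg_df1 = addend_from_h1 + addend_from_h2; // 15.0

    // Step 4: dG/dw = dG/df1 * df1/dw, where df1/dw = x.
    let dg_dw = dg_df1 * x; // 45.0

    // Step 5: gradient-descent update of w (the learning rate is illustrative).
    let lr = 0.01;
    w -= lr * dg_dw; // 2.0 - 0.45 = 1.55

    assert_eq!(dg_df1, 15.0);
    assert_eq!(dg_dw, 45.0);
    assert!((w - 1.55).abs() < 1e-12);
    println!("dG/df1 = {dg_df1}, dG/dw = {dg_dw}, updated w = {w}");
}
```

Note that $f_1$ never needs to know what $h_1$ and $h_2$ compute; it only sums the addends they send, which is what makes the node-oriented decomposition compose.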
