herring-onboarding

Ramp-up exercises for the Deep Learning training space. Each exercise covers a different technology we use.

Index

  1. Familiarize yourself with CMake
  2. MPI Hello World
  3. Using CPU and GPU memory
  4. MPI All Reduce
  5. NCCL All Reduce
  6. Concurrent Hello World in C++
  7. SIMD
  8. Train Multi-layer Perceptron using PyTorch
  9. Distributed training of neural networks using PyTorch collectives
  10. Custom DistributedDataParallel class in PyTorch
  11. Implement your own collective
  12. Slurm CUDA Streams
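The collective-communication exercises (4, 5, and 11) all revolve around the same core idea: a ring all-reduce, where every rank ends up with the element-wise sum of all ranks' buffers. As a hedged, language-agnostic sketch (not the repo's actual solution), here is a single-process Python simulation in which each "rank" is just a list; a real implementation would exchange these chunks between processes via MPI_Allreduce or ncclAllReduce:

```python
def ring_allreduce(buffers):
    """Sum-reduce equal-length buffers across simulated ranks.

    Every rank's buffer is split into n chunks (n = number of ranks).
    Phase 1 (reduce-scatter): over n-1 steps, each rank passes one chunk
    to its right neighbor, which accumulates it. Afterwards rank i holds
    the fully reduced chunk (i+1) % n.
    Phase 2 (all-gather): over n-1 more steps, the reduced chunks circulate
    around the ring until every rank has every chunk.
    Assumes each buffer's length is divisible by n.
    """
    n = len(buffers)
    size = len(buffers[0]) // n
    chunks = [[buf[c * size:(c + 1) * size] for c in range(n)] for buf in buffers]

    # reduce-scatter: rank i sends chunk (i - step) % n to rank (i + 1) % n
    for step in range(n - 1):
        sent = [chunks[i][(i - step) % n] for i in range(n)]  # snapshot: sends are simultaneous
        for i in range(n):
            src = (i - 1) % n
            c = (src - step) % n
            chunks[i][c] = [a + b for a, b in zip(chunks[i][c], sent[src])]

    # all-gather: circulate the fully reduced chunks around the ring
    for step in range(n - 1):
        sent = [chunks[i][(i + 1 - step) % n] for i in range(n)]
        for i in range(n):
            src = (i - 1) % n
            c = (src + 1 - step) % n
            chunks[i][c] = sent[src]

    # flatten each rank's chunks back into one buffer
    return [[x for chunk in ch for x in chunk] for ch in chunks]


if __name__ == "__main__":
    # two ranks, two elements each: every rank should end with [1+3, 2+4]
    print(ring_allreduce([[1, 2], [3, 4]]))  # [[4, 6], [4, 6]]
```

Each of the 2(n-1) steps moves only 1/n of the data per rank, which is why the ring algorithm's bandwidth cost stays nearly constant as the number of ranks grows.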
