An autograd engine and a neural network library that handle N-dimensional arrays.
Project description
Autograd Engine & Neural Network Library
The repository includes an autograd engine and a neural network library that handle N-dimensional arrays.
Autograd is a tool for computing derivatives. It tracks operations on values that have gradients enabled and builds a dynamic computational graph, that is, a directed acyclic graph. Input values are the leaves of the graph, and output values are its roots. Gradients are computed by traversing the graph from root to leaves, applying the chain rule to multiply the local gradients at each step.
Andrej Karpathy's micrograd served as inspiration for this project, but this autograd engine accepts N-dimensional arrays, whereas micrograd accepts only scalar values.
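The core idea can be sketched in a few lines of NumPy. The Tensor class below is purely illustrative and is not ngrad's actual API: each operation records its parent nodes, and backward() walks the resulting graph from the root back to the leaves, applying the chain rule at every step.

import numpy as np

class Tensor:
    # Minimal reverse-mode autodiff node over a NumPy array (illustrative only).
    def __init__(self, data, _parents=()):
        self.data = np.asarray(data, dtype=float)
        self.grad = np.zeros_like(self.data)
        self._backward = lambda: None      # how to push this node's gradient to its parents
        self._parents = set(_parents)

    def __add__(self, other):
        out = Tensor(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad          # d(a+b)/da = 1
            other.grad += out.grad         # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Tensor(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule from root to leaves.
        topo, visited = [], set()
        def build(node):
            if node not in visited:
                visited.add(node)
                for parent in node._parents:
                    build(parent)
                topo.append(node)
        build(self)
        self.grad = np.ones_like(self.data)      # seed: d(out)/d(out) = 1
        for node in reversed(topo):
            node._backward()

# Usage: gradients of z = x * y + y with respect to x and y.
x = Tensor([1.0, 2.0])
y = Tensor([3.0, 4.0])
z = x * y + y
z.backward()
print(x.grad)   # [3. 4.]  (= y)
print(y.grad)   # [2. 3.]  (= x + 1)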
Blog
Building Autograd Engine & Neural Network Library: An Interactive Guide
The article provides a comprehensive guide to building an autograd engine and a neural network library that handle N-dimensional arrays. It assumes a basic understanding of Python programming, high school calculus, and neural networks, but offers various teaching aids for beginners, including line-by-line code explanations, output visualizations, and an interactive area for exploring derivatives. The guide covers the foundational concepts of neural networks, starting with derivatives and progressing to backpropagation. It explains how to perform backpropagation both manually and programmatically, including implementation techniques. The article also demonstrates how to build an autograd class from scratch and apply it to training a neural network on a dataset. It concludes by guiding readers through the development of a simple neural network library on top of the autograd class.
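As a taste of the manual backpropagation the guide walks through, here is a small self-contained example (not taken from the article) that differentiates tanh(w*x + b) by hand with the chain rule and checks the result against a numerical derivative.

import math

def f(x, w, b):
    return math.tanh(w * x + b)

x, w, b = 0.5, -3.0, 2.0

# Forward pass, keeping the intermediate value.
s = w * x + b          # pre-activation
y = math.tanh(s)       # output

# Manual backward pass via the chain rule: dy/ds = 1 - tanh(s)^2.
dy_ds = 1.0 - y ** 2
dy_dw = dy_ds * x      # ds/dw = x
dy_dx = dy_ds * w      # ds/dx = w
dy_db = dy_ds * 1.0    # ds/db = 1

# Numerical check with a central difference.
h = 1e-6
num_dw = (f(x, w + h, b) - f(x, w - h, b)) / (2 * h)
print(dy_dw, num_dw)   # the two values should agree closely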
Installation
pip install ngrad
Development
The only library required is NumPy, which is used to handle N-dimensional arrays.
pip install -r requirements.txt
Test
To verify the correctness of the engine.py code and ensure that it produces the same results for N-dimensional arrays, scalar values, and the PyTorch API, execute the following commands (make sure that PyTorch is installed):
cd test/
pip install -r requirements.txt
Run the test:
python test_engine.py
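For reference, a comparison of this kind typically looks like the sketch below, which checks a hand-derived NumPy gradient against torch.autograd. It is illustrative only and does not use ngrad's actual test code or API.

import numpy as np
import torch

x_np = np.array([1.0, 2.0, 3.0])
w_np = np.array([0.5, -1.0, 2.0])

# Hand-derived gradients of loss = sum((x * w) ** 2):
# dloss/dx = 2 * x * w**2, dloss/dw = 2 * w * x**2.
grad_x_manual = 2 * x_np * w_np ** 2
grad_w_manual = 2 * w_np * x_np ** 2

# Same computation through PyTorch's autograd.
x_t = torch.tensor(x_np, requires_grad=True)
w_t = torch.tensor(w_np, requires_grad=True)
loss = ((x_t * w_t) ** 2).sum()
loss.backward()

assert np.allclose(grad_x_manual, x_t.grad.numpy())
assert np.allclose(grad_w_manual, w_t.grad.numpy())
print("gradients match")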
Train
This notebook contains the code required to train the neural network, including the code for engine.py and library.py. In it, we first set the input values, construct an MLP with a predetermined architecture, and run forward propagation to compute the output. We then generate a small dataset and train the MLP to minimize the loss and improve its predictions. Finally, we print the list of predicted outputs; a rough sketch of this flow follows below.
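For orientation, the training flow described above (forward pass, loss, backward pass, parameter update) looks roughly like the following sketch. It uses PyTorch for brevity rather than the notebook's ngrad code, and the architecture and dataset are assumptions loosely modeled on the micrograd-style demo that inspired the project.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny dataset: four 3-dimensional inputs with +/-1 targets (illustrative).
xs = torch.tensor([[2.0, 3.0, -1.0],
                   [3.0, -1.0, 0.5],
                   [0.5, 1.0, 1.0],
                   [1.0, 1.0, -1.0]])
ys = torch.tensor([[1.0], [-1.0], [-1.0], [1.0]])

# MLP with a fixed architecture (3 -> 4 -> 4 -> 1).
model = nn.Sequential(nn.Linear(3, 4), nn.Tanh(),
                      nn.Linear(4, 4), nn.Tanh(),
                      nn.Linear(4, 1))

optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

for step in range(200):
    preds = model(xs)              # forward propagation
    loss = loss_fn(preds, ys)      # how far predictions are from targets
    optimizer.zero_grad()
    loss.backward()                # backpropagation through the graph
    optimizer.step()               # gradient descent update

print(model(xs).detach().squeeze().tolist())  # predictions after training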
Notebook
This notebook contains a comprehensive collection of examples and code for building, testing, and training the autograd engine and neural network library. Use it to experiment with and explore their functionality.
License
MIT
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
File details
Details for the file ngrad-1.0.2.tar.gz
File metadata
- Download URL: ngrad-1.0.2.tar.gz
- Upload date:
- Size: 6.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | 45cce98d5476872842fde8e7346b1379b66f43f4d4dceb9b2f78ba49539f5d73
MD5 | ce642af8758a909e41c563df26a7dd40
BLAKE2b-256 | f3846198952c819147b01594083d866a2449df20efc58f50c32085bb4914e692