micrograd2023 was developed based on Andrej Karpathy’s micrograd, with added documentation using nbdev for teaching purposes.

Project description

micrograd2023

Literate Programming

flowchart LR
  A(Andrej's micrograd) --> C((Combination))
  B(Jeremy's nbdev) --> C
  C -->|Literate Programming| D(micrograd2023)

Disclaimers

micrograd2023, an automatic differentiation software, was developed based on Andrej Karpathy’s micrograd.

Andrej is the man who needs no introduction in the field of Deep Learning and Computer Vision. He released a series of lectures called Neural Networks: Zero to Hero, which I found extremely educational and practical. I am reviewing the lectures and creating notes for myself and for teaching purposes.

micrograd2023 was written using nbdev, which was developed by Jeremy Howard, the man who needs no introduction in the field of Deep Learning. Jeremy also created the fastai Deep Learning library and courses, which are extremely influential. I highly recommend fastai if you are interested in starting your journey into ML and DL.

nbdev is a powerful tool that can be used to efficiently develop, build, test, document, and distribute software packages all in one place, the Jupyter Notebook (I used Jupyter Notebooks in VS Code). In this tutorial, you will learn how to use nbdev to develop the micrograd2023 software.
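
For a flavor of that workflow, the sketch below shows how notebook cells become library code: the #| default_exp directive names the target module and #| export marks the cells to be exported (both are standard nbdev directives). The class body here is a simplified illustration, not the actual micrograd2023 implementation.

#| default_exp engine
# the first cell names the export target, e.g. micrograd2023/engine.py

#| export
class Value:
    "A simplified, illustrative stand-in for the real Value class."
    def __init__(self, data):
        self.data = data   # the scalar value
        self.grad = 0.0    # its gradient, accumulated during backpropagation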

Demonstrations

  • A detailed demonstration of micrograd2023 for training and integrating an MLP can be found in this MLP DEMO.

  • A demonstration of micrograd2023 for Physics: auto-differentiation of a popular cosine function can be found in this Physics Cosine DEMO.

    • Comparing the micrograd2023 results with the analytical solutions, PyTorch’s autograd, and JAX’s autograd.
    • Additionally, second-order derivatives are calculated using JAX’s autograd.
    • It is also possible to use JAX’s autograd to calculate higher-order derivatives.
  • A demonstration of micrograd2023 for Physics: auto-differentiation of a popular exponential decay function can be found in this Physics Exp. DEMO.

  • A demonstration of micrograd2023 for Physics: auto-differentiation of a damping function can be found in this Physics Damp DEMO.

  • A demonstration of micrograd2023 for MRI: auto-differentiation of a T2* decay model of data acquired from a multi-echo UTE sequence. The auto-differentiated derivatives are then used to calculate the Fisher Information Matrix (FIM), which in turn allows calculation of the Cramér-Rao Lower Bound (CRLB) of an unbiased estimator of T2*. Details can be seen in the MRI T2* Decay DEMO (a minimal sketch of such a decay model appears after this list).

  • A demonstration of micrograd2023 for MRI: auto-differentiation of a T1 recovery model of data acquired from a myocardial MOLLI T1 mapping sequence. The auto-differentiated derivatives are then used to calculate the Fisher Information Matrix (FIM), which in turn allows calculation of the Cramér-Rao Lower Bound (CRLB) of an unbiased estimator of T1. Details can be seen in the MRI T1 Recovery DEMO.
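
As a flavor of the physics/MRI demos, here is a minimal sketch of auto-differentiating a mono-exponential decay model S(t) = S0 * exp(-t / T2*) with micrograd2023 and checking the gradients against the analytical derivatives. It assumes only the exp() method and the standard arithmetic operators described in the Features section; the numbers are made up for illustration.

import math
from micrograd2023.engine import Value

# mono-exponential decay model S(t) = S0 * exp(-t / T2star)
S0 = Value(100.0)      # hypothetical signal amplitude
T2star = Value(20.0)   # hypothetical T2* in ms
t = Value(5.0)         # echo time in ms

S = S0 * (-t / T2star).exp()
S.backward()

# analytical sensitivities for comparison
dS_dS0 = math.exp(-t.data / T2star.data)
dS_dT2 = S0.data * t.data / T2star.data**2 * math.exp(-t.data / T2star.data)

print(S0.grad, dS_dS0)        # dS/dS0
print(T2star.grad, dS_dT2)    # dS/dT2*

# These sensitivities, evaluated at each echo time, are the ingredients of the
# Fisher Information Matrix used in the CRLB demos:
# FIM[i][j] ~ sum over echoes of (dS/dtheta_i * dS/dtheta_j) / sigma**2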

Features

Compared to Andrej’s micrograd, micrograd2023 has many extensions such as:

  • Adding more extensive unit and integration tests.

  • Adding more methods to the Value object, such as tanh(), exp(), and log(). In principle, any method/function with a known derivative, or one that can be broken into primitive operations, can be added to the Value object. Examples are sin(), sigmoid(), cos(), etc., which I left as exercises 😄 (a sketch of how such a method could be added appears after this list).

  • Refactoring Andrej’s demo code to make it easier to demonstrate many fundamental concepts and/or best engineering practices when training neural networks. The concepts/best practices are listed below. Some concepts were demonstrated, while the rest are left as exercises 😄.

    • Always implement the simplest and most intuitive solution as a baseline to compare against whatever fancier implementation we want to achieve

    • Data preparation - train, validation, and test sets are disjoint

    • Over-fitting

    • Gradient Descent vs. Stochastic Gradient Descent (SGD)

    • Develop and experiment with different optimizers, i.e., SGD, SGD with momentum, RMSProp, Adam, etc.

    • SGD with momentum

    • Non-Optimal learning rate

    • How to find the optimal learning rate

    • Learning rate decay and learning rate schedule

    • Role of nonlinearity

    • Linearly separable and non-separable data

    • Out-of-distribution shift

    • Under-fitting

    • The importance and trade-off between width and depth of the MLP

    • Over-fitting a single batch

    • Hyperparameter tuning and optimizing

    • Weights initialization

    • Inspect and visualize statistics of weights, gradients, gradient to data ratios, and update to data ratios

    • Forward and backward dynamics of shallow and deep, linear and non-linear Neural Networks

    • etc.
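
As an illustration of the exercise mentioned above, here is a sketch of how a sin() method could be added to Value, following the same pattern Andrej uses for tanh() and exp(). The constructor signature and the internal _backward attribute are assumptions based on Karpathy’s micrograd; check micrograd2023.engine for the actual internals.

import math
from micrograd2023.engine import Value

def value_sin(self):
    # assumes Value(data, _children, _op) and an out._backward hook,
    # as in Karpathy's micrograd
    out = Value(math.sin(self.data), (self,), 'sin')

    def _backward():
        # d/dx sin(x) = cos(x); accumulate through the chain rule
        self.grad += math.cos(self.data) * out.grad
    out._backward = _backward
    return out

Value.sin = value_sin  # monkey-patch purely for illustration

x = Value(0.7)
y = x.sin()
y.backward()
print(x.grad, math.cos(0.7))  # the two numbers should match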

If you study lectures by Andrej and Jeremy, you will probably notice that they are both great educators who utilize both top-down and bottom-up approaches in their teaching, but Andrej predominantly uses the bottom-up approach while Jeremy predominantly uses the top-down one. I am personally fascinated by both educators, have found value in both of their approaches, and hope you will too!

Related Projects

Below are a few of my projects related to optimization and Deep Learning:

  • Diploma Research on Crystal Structure using Gradient-based Optimization SLIDES

  • Deep Convolutional Neural Network (DCNN) for MRI image segmentation with uncertainty quantification and a controllable trade-off between False Positives and False Negatives. Journal Paper PDF and Conference Talk SLIDES

  • Deep Learning-based Denoising for quantitative MRI. Conference Talk SLIDES

  • Besides technical projects, I had an opportunity to contribute to and engage in the whole process of 510(k) FDA clinical validation of Deep Learning-based MRI Reconstruction, resulting in the world’s first fully integrated Deep Learning-based Reconstruction Technology to receive Food and Drug Administration (FDA) 510(k) clearance for use in clinical environments. Product Page, Whitepaper HTMLs, Whitepaper PDF, and Whitepaper PDF2

How to install

The micrograd2023 package was uploaded to PyPI and can be easily installed using the command below.

pip install micrograd2023

Developer install

If you want to develop micrograd2023 yourself, please use an editable installation.

git clone https://github.com/hdocmsu/micrograd2023.git

pip install -e "micrograd2023[dev]"

You also need to use an editable installation of nbdev, fastcore, and execnb.

Happy Coding!!!

How to use

Here are examples of using micrograd2023.

# import necessary objects and functions
from micrograd2023.engine import Value
from micrograd2023.nn import Neuron, Layer, MLP
from micrograd2023.utils import draw_dot
import random
# inputs xs, weights ws, and bias b
w1 = Value(1.1)
x1 = Value(0.5)
w2 = Value(0.12)
x2 = Value(1.7)
b = Value(0.34)

# pre-activation
s = w1*x1 + x2*w2 + b

# activation
y = s.tanh()

# automatic differentiation
y.backward()

# show the computation graph of the perceptron
draw_dot(y)

# added random seed for reproducibility
random.seed(1234)
n = Neuron(3)
x = [Value(0.15), Value(-0.21), Value(-0.91)]
y = n(x)
y.backward()
draw_dot(y)

You can use micrograd2023 to train an MLP and learn fundamental concepts such as overfitting, optimal learning rate, etc. A minimal training-loop sketch is shown below, and the “Good training” and “Overfitting” figures illustrate two possible outcomes.
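
This sketch trains a small MLP on a tiny made-up dataset with plain gradient descent. It assumes the MLP(nin, nouts) constructor and the parameters() method follow Karpathy’s micrograd nn API; the data, architecture, and learning rate are illustrative only.

from micrograd2023.nn import MLP

# tiny made-up dataset: 4 samples, 3 features, +/-1 targets
xs = [[2.0, 3.0, -1.0], [3.0, -1.0, 0.5], [0.5, 1.0, 1.0], [1.0, 1.0, -1.0]]
ys = [1.0, -1.0, -1.0, 1.0]

model = MLP(3, [4, 4, 1])   # 3 inputs, two hidden layers of 4, 1 output
lr = 0.05                   # a fixed (possibly non-optimal) learning rate

for step in range(50):
    # forward pass and mean-squared-error style loss
    preds = [model(x) for x in xs]
    loss = sum((p - y) ** 2 for p, y in zip(preds, ys))

    # zero gradients, then backpropagate
    for p in model.parameters():
        p.grad = 0.0
    loss.backward()

    # plain (full-batch) gradient-descent update
    for p in model.parameters():
        p.data -= lr * p.grad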

Good training

Overfitting

Testing

To perform unit testing, use the terminal to navigate to the directory that contains the tests folder, then simply type python -m pytest in the terminal. Note that PyTorch is needed for the tests to run, since derivatives calculated using micrograd2023 are compared against those calculated using PyTorch as references. A sketch of the kind of comparison the tests make is shown after the command.

python -m pytest
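
For reference, the tests follow roughly this pattern (a simplified sketch; the actual test names and tolerances in the tests folder may differ): compute a gradient with micrograd2023 and the same gradient with PyTorch, then assert that they agree.

import torch
from micrograd2023.engine import Value

def test_tanh_backward():
    # gradient from micrograd2023
    x = Value(0.7)
    y = x.tanh()
    y.backward()

    # reference gradient from PyTorch's autograd
    xt = torch.tensor(0.7, requires_grad=True)
    yt = torch.tanh(xt)
    yt.backward()

    assert abs(x.grad - xt.grad.item()) < 1e-6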

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

micrograd2023-0.2.3.tar.gz (16.9 kB, Source)

Built Distribution

micrograd2023-0.2.3-py3-none-any.whl (10.6 kB, Python 3)

File details

Details for the file micrograd2023-0.2.3.tar.gz.

File metadata

  • Download URL: micrograd2023-0.2.3.tar.gz
  • Upload date:
  • Size: 16.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.9

File hashes

Hashes for micrograd2023-0.2.3.tar.gz

  • SHA256: 40d7a05f563cc874ba2673854c2f725211ead6a3f28a453e83c3c52436532abf
  • MD5: 0bb1e2dc7114d0a3333400199b5690a5
  • BLAKE2b-256: 5d1a1b61a17a4bbaf1ca25b226d77cde9c7ee415f76f467baa929d3a4f88cf38


File details

Details for the file micrograd2023-0.2.3-py3-none-any.whl.

File metadata

File hashes

Hashes for micrograd2023-0.2.3-py3-none-any.whl

  • SHA256: 8f19fe8499dde9aee09a767dd61d744f54753e9d639f7ba2b590304bc868869f
  • MD5: 18985c3714bb8ec66b9d63494fc10b59
  • BLAKE2b-256: cc4864e59e9109e99a083ad7f24b69bd0964a4827b8599717a00d89c684b1038

