SkinnyGrad
autodiff engine inspired by tinygrad
SkinnyGrad is a tensor autodifferentiation library that I wrote as a side project for fun and learning. By default, a computational graph is built and evaluated lazily with NumPy; GPU acceleration is also available through the CuPy backend extension. At ~1300 lines, skinnygrad is written with simplicity and extensibility in mind, yet it covers a good subset of torch.Tensor's features. Kudos to tinygrad, which inspired the RISC-like design of mapping all high-level ops to 19 low-level ops that the runtime engine optimizes and executes.
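To make the lazy-evaluation idea concrete, here is a minimal conceptual sketch of how such a graph can be built and only computed on demand. This is generic Python illustrating the concept, not skinnygrad's actual internals (LazyNode and its op names are made up for this sketch):

# Conceptual sketch of lazy graph building -- NOT skinnygrad's real internals.
# Each operator records a node; nothing is computed until realize() walks the graph.
import numpy as np

class LazyNode:
    def __init__(self, op, parents=(), value=None):
        self.op, self.parents, self.value = op, parents, value

    def __add__(self, other):
        return LazyNode("add", (self, other))

    def __matmul__(self, other):
        return LazyNode("matmul", (self, other))

    def realize(self):
        if self.value is None:  # evaluate parents first, then this node
            args = [p.realize() for p in self.parents]
            self.value = {"add": np.add, "matmul": np.matmul}[self.op](*args)
        return self.value

a = LazyNode("const", value=np.array([[1, 2, 3]]))
x = LazyNode("const", value=np.array([[4], [5], [6]]))
y = a @ x + LazyNode("const", value=np.array(10))  # builds the graph, no math yet
print(y.realize())  # [[42]] -- evaluation happens only here

Deferring evaluation this way is what lets the engine see the whole graph before running it, so it can optimize the low-level ops as a group rather than executing them eagerly one by one.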
Try it out!
pip install skinnygrad
from skinnygrad import tensors
a = tensors.Tensor(((1, 2, 3),))
b = tensors.Tensor(10)
x = tensors.Tensor(((4,), (5,), (6,)))
y = a @ x + b
print(y)
# <skinnygrad.tensors.Tensor(
# <skinnygrad.llops.Symbol(UNREALIZED <Op(ADD)>, shape=(1, 1))>,
# self.requires_grad=False,
# self.gradient=None,
# )>
print(y.realize())
# [[42]]
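The repr above also shows the requires_grad and gradient fields used for autodiff. A backward pass presumably looks something like the torch-style sketch below; note that the requires_grad keyword and backward() method are assumptions inferred from those field names, not confirmed API:

# Hedged sketch of a backward pass -- the requires_grad kwarg and the
# backward() method are assumed from the field names in the repr above,
# not verified against skinnygrad's actual API.
from skinnygrad import tensors

a = tensors.Tensor(((1.0, 2.0, 3.0),), requires_grad=True)
x = tensors.Tensor(((4.0,), (5.0,), (6.0,)))
y = a @ x
y.backward()        # populate a.gradient by reverse-mode autodiff
print(a.gradient)   # expected: the transpose of x, i.e. [[4, 5, 6]]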
LeNet-5 as a convergence test
As an end-to-end test for the engine, I replicated the LeNet-5 paper and trained it for 5 epochs on MNIST, reproducing the paper's ~98% eval accuracy. With a batch size of 64, each training epoch (60k images) takes a few minutes using the CuPy GPU acceleration backend on an NVIDIA A100. The code can be found in the examples folder.
BONUS: LeNet-5 forward pass built up by the skinnygrad engine
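For reference, the layer-by-layer shapes of that forward pass are easy to trace by hand. The sketch below is plain Python bookkeeping assuming the original paper's hyper-parameters (32x32 inputs, i.e. 28x28 MNIST images padded by 2); the replication in the examples folder may differ in detail:

# Shape bookkeeping for the classic LeNet-5 stack (LeCun et al., 1998),
# assuming 28x28 MNIST inputs padded to 32x32 as in the paper.
def conv(hw, k):   # output side length of a valid k x k convolution
    return hw - k + 1

hw = 32                 # input: (batch, 1, 32, 32)
hw = conv(hw, 5)        # C1: 6 feature maps   -> (batch, 6, 28, 28)
hw //= 2                # S2: 2x2 subsampling  -> (batch, 6, 14, 14)
hw = conv(hw, 5)        # C3: 16 feature maps  -> (batch, 16, 10, 10)
hw //= 2                # S4: 2x2 subsampling  -> (batch, 16, 5, 5)
flat = 16 * hw * hw     # flatten              -> (batch, 400)
print(flat)             # 400, followed by dense layers 120 -> 84 -> 10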