# froog: fast real-time optimization of gradients

*a beautifully compact tensor library*

homepage | documentation | pip

froog is an easy-to-read tensor library (25k+ pip installs!) with OpenCL support for GPU acceleration. Inspired by PyTorch, tinygrad, and micrograd.
## Installation

```shell
pip install froog
```

More information on downloading froog in the installation docs.
## Features
- Custom Tensors
- Backpropagation
- Automatic Differentiation (autograd)
- Forward and backward passes
- ML Operations
- 2D Convolutions (im2col)
- Numerical gradient checking
- Acceleration methods (Adam)
- Avg & Max pooling
- EfficientNet inference
- GPU Support
- and a bunch more
## Sneak Peek

Here's how you set up a simple multilayer perceptron for classification on MNIST. Looks pretty similar to PyTorch, right?

```python
from froog.tensor import Tensor
from froog.nn import Linear
import froog.optim as optim

class mnistMLP:
  def __init__(self):
    self.l1 = Tensor(Linear(784, 128))  # layer 1
    self.l2 = Tensor(Linear(128, 10))   # layer 2

  def forward(self, x):
    # forward pass through both layers and softmax for output probabilities
    return x.dot(self.l1).relu().dot(self.l2).logsoftmax()

model = mnistMLP()  # create model
optim = optim.SGD([model.l1, model.l2], lr=0.001)  # stochastic gradient descent optimizer
```
## Overview

The most fundamental concept in froog, and in all machine learning frameworks, is the Tensor. A tensor is simply a multi-dimensional array (you can loosely think of it as a matrix of matrices).

You can create a Tensor in froog with:

```python
import numpy as np
from froog.tensor import Tensor

my_tensor = Tensor(np.array([1, 2, 3]))
```

Notice how we had to import NumPy. If you want to create a Tensor manually, make sure that it is a NumPy array!
## Tensors

Tensors are the fundamental datatype in froog, and one of the two main classes.

- `def __init__(self, data)`: Tensor takes in one param, which is the data. Since froog has a NumPy backend, the input data into tensors has to be a NumPy array.
- Tensor has a `self.data` state that it holds. This contains the data inside of the tensor.
- In addition, it has `self.grad`. This holds the gradients of the tensor.
- Lastly, it has `self._ctx`. These are the internal variables used for autograd graph construction. This is where the backward gradient computations are saved.
### Properties

- `shape(self)`: returns the tensor shape

### Methods

- `def zeros(*shape)`: returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
- `def ones(*shape)`: returns a tensor full of ones with any shape that you pass in. Defaults to np.float32
- `def randn(*shape)`: returns a randomly initialized Tensor of *shape
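As a rough sketch of what these factories produce, here are the plain-NumPy equivalents of the arrays that froog wraps (the froog calls themselves are shown in the comments):

```python
import numpy as np

# NumPy equivalents of the Tensor factory methods described above;
# froog wraps arrays like these in a Tensor
zeros = np.zeros((2, 3), dtype=np.float32)        # Tensor.zeros(2, 3)
ones = np.ones((2, 3), dtype=np.float32)          # Tensor.ones(2, 3)
randn = np.random.randn(2, 3).astype(np.float32)  # Tensor.randn(2, 3)

print(zeros.shape)  # (2, 3)
```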
## Gradient calculations

froog computes gradients automatically through a process called automatic differentiation. Each Tensor has a variable `_ctx`, which stores the chain of operations that produced it. For the current operation, say a dot product, froog goes to the dot product definition in `froog/ops.py`, which contains a backward pass specifically for dot products. All ops, from add to 2x2 max pooling, have this backward pass implemented.
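The mechanism can be sketched in plain Python/NumPy. This is a simplified stand-in for froog's actual `_ctx` machinery, not its real API; the `Context` and `Value` names here are illustrative only:

```python
import numpy as np

class Context:
    """Stands in for froog's _ctx: records the op and the tensors it saw."""
    def __init__(self, op, *tensors):
        self.op = op
        self.saved = tensors

class Value:
    def __init__(self, data, ctx=None):
        self.data = np.asarray(data, dtype=np.float32)
        self.grad = None
        self._ctx = ctx  # None for leaf tensors, a Context for op outputs

    def dot(self, other):
        # forward pass records a Context so backward() knows how it was made
        return Value(self.data @ other.data, ctx=Context("dot", self, other))

    def backward(self):
        # seed the output gradient with ones, then apply the op's backward rule
        self.grad = np.ones_like(self.data)
        if self._ctx and self._ctx.op == "dot":
            a, b = self._ctx.saved
            # backward pass for dot: dL/da = grad @ b.T, dL/db = a.T @ grad
            a.grad = self.grad @ b.data.T
            b.grad = a.data.T @ self.grad

x = Value([[1., 2.], [3., 4.]])
w = Value([[5., 6.], [7., 8.]])
out = x.dot(w)
out.backward()
print(x.grad)  # gradient of sum(x @ w) with respect to x
```

A real autograd walks a whole graph of such contexts in reverse topological order; froog's per-op backward definitions in `froog/ops.py` play the role of the `"dot"` branch above.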
## Functions

The other base class in froog is Function. It keeps track of input tensors and the tensors that need to be saved for the backward pass.

- `def __init__(self, *tensors)`: takes in an argument of tensors, which are then saved
- `def save_for_backward(self, *x)`: saves the Tensors that are necessary for computing gradients in the backward pass
- `def apply(self, arg, *x)`: takes care of the forward pass, applying the operation to the inputs
## Register

- `def register(name, fxn)`: adds a method to Tensor. This is what lets you chain any operations, e.g. `x.dot(w).relu()`, where `w` is a tensor
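The pattern behind `register` can be sketched like this (an illustrative miniature, not froog's actual implementation; the toy `Tensor`, `relu`, and `dot` here are stand-ins):

```python
import numpy as np

class Tensor:
    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float32)

def register(name, fxn):
    # attach fxn as a method on Tensor so calls can be chained fluently
    setattr(Tensor, name, fxn)

def relu(self):
    return Tensor(np.maximum(self.data, 0))

def dot(self, w):
    return Tensor(self.data @ w.data)

register("relu", relu)
register("dot", dot)

x = Tensor([[-1., 2.]])
w = Tensor([[1., 0.], [0., 1.]])
out = x.dot(w).relu()  # chaining works because both names were registered
print(out.data)  # [[0. 2.]]
```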
## Creating a model

Okay cool, so now you know that froog's main datatype is a Tensor and that it uses NumPy in the background. How do you actually build a model?

Here's an example of how to create an MNIST multi-layer perceptron (MLP). We wanted to make it as simple as possible, so it resembles very basic Python concepts like classes. There are really only two methods you need to define:

- `__init__`, which defines the layers of the model (here we use `Linear`)
- `forward`, which defines how the input should flow through your model. We use a simple dot product with a `Linear` layer and a `ReLU` activation.

To create an instance of the mnistMLP model, do the same as you would in Python: `model = mnistMLP()`.
We support a few different optimizers, which include:
- Stochastic Gradient Descent (SGD)
- Adaptive Moment Estimation (Adam)
- Root Mean Square Propagation (RMSProp)
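As a sketch of what a single SGD step does under the hood (plain NumPy; froog's `optim.SGD` applies the same `param -= lr * grad` update to each tensor it is given):

```python
import numpy as np

def sgd_step(params, grads, lr=0.001):
    # vanilla stochastic gradient descent: param <- param - lr * grad, in place
    for p, g in zip(params, grads):
        p -= lr * g

w = np.array([1.0, 2.0])
g = np.array([10.0, -10.0])
sgd_step([w], [g], lr=0.1)
print(w)  # [0. 3.]
```

Adam and RMSProp follow the same shape but additionally keep running statistics of the gradients to adapt the step size per parameter.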
```python
from froog.tensor import Tensor
import froog.optim as optim
from froog.nn import Linear

class mnistMLP:
  def __init__(self):
    self.l1 = Tensor(Linear(784, 128))
    self.l2 = Tensor(Linear(128, 10))

  def forward(self, x):
    return x.dot(self.l1).relu().dot(self.l2).logsoftmax()

model = mnistMLP()
optim = optim.SGD([model.l1, model.l2], lr=0.001)
```
You can also create a convolutional neural net:

```python
class SimpleConvNet:
  def __init__(self):
    conv_size = 5
    channels = 17
    self.c1 = Tensor(Linear(channels, 1, conv_size, conv_size))  # (num_filters, color_channels, kernel_h, kernel_w)
    self.l1 = Tensor(Linear((28 - conv_size + 1) ** 2 * channels, 128))  # (28-conv+1)*(28-conv+1) since kernel isn't padded
    self.l2 = Tensor(Linear(128, 10))  # MNIST output is 10 classes

  def forward(self, x):
    x.data = x.data.reshape((-1, 1, 28, 28))  # batch of 28x28 single-channel images
    x = x.conv2d(self.c1).relu()  # pass through conv first
    x = x.reshape(shape=(x.shape[0], -1))
    return x.dot(self.l1).relu().dot(self.l2).logsoftmax()
```
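A quick note on the `Linear` sizes above: an unpadded, stride-1 5x5 kernel over a 28x28 image yields a 24x24 output per filter, so with 17 filters the flattened feature vector fed into the first fully-connected layer has 24 * 24 * 17 entries:

```python
# output spatial size for an unpadded, stride-1 convolution: in - kernel + 1
conv_size, channels, img = 5, 17, 28
out_hw = img - conv_size + 1       # 24
flat = out_hw * out_hw * channels  # feature count fed into the first Linear
print(out_hw, flat)  # 24 9792
```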
So there are two quick examples to get you up and running. You might have noticed some operations like reshape and wondered what else you can do with froog. We have many more operations that you can apply on tensors:

`.add()`, `.sub()`, `.mul()`, `.sum()`, `.pow()`, `.dot()`, `.relu()`, `.sigmoid()`, `.reshape()`, `.pad2d()`, `.logsoftmax()`, `.conv2d()`, `.im2col2dconv()`, `.max_pool2d()`, `.avg_pool2d()`
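As an example of one of these ops, logsoftmax (the final layer in both models above) can be sketched in NumPy, using the standard max-subtraction trick for numerical stability:

```python
import numpy as np

def logsoftmax(x):
    # subtract the row max first so np.exp can't overflow on large logits
    shifted = x - x.max(axis=1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

logits = np.array([[1.0, 2.0, 3.0]])
probs = np.exp(logsoftmax(logits))  # exponentiating recovers softmax
print(probs.sum())  # rows of softmax sum to 1
```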
## GPU Support

Have a GPU and need a speedup? You're in luck, because we have GPU support via OpenCL for the operations defined in ops_gpu.py.

Here's how you can send data to the GPU during a forward pass and bring it back to the CPU:

```python
# ...
GPU = os.getenv("GPU", None) is not None
if GPU:
  out = model.forward(Tensor(img).to_gpu()).cpu()
```
## EfficientNet in froog!

We have a really cool finished implementation of EfficientNet built entirely in froog!

To run EfficientNet inference:

```shell
VIZ=1 python3 models/efficientnet.py <https://put_your_image_url_here>
```

I would recommend checking out the code; it's highly documented and pretty cool.
## Contributing

Pull requests will be merged if they:

- increase simplicity
- increase functionality
- increase efficiency

More info on contributing. Make sure to run `python -m pytest` before creating a PR.
## Project details
### File details

Details for the file froog-0.4.2.tar.gz.

#### File metadata

- Download URL: froog-0.4.2.tar.gz
- Upload date:
- Size: 27.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.3

#### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 86ee51498cd910a04382292dcce9355ded5d5268fc5befd28295b09cc5689696 |
| MD5 | 620bd46a7b1951e4b46d688baf2817b7 |
| BLAKE2b-256 | a21074bf68e49dd13e94ca74db2462640c51fada1603cdc342d2b07b9f0c0dc1 |
### File details

Details for the file froog-0.4.2-py3-none-any.whl.

#### File metadata

- Download URL: froog-0.4.2-py3-none-any.whl
- Upload date:
- Size: 22.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.3

#### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 2895198d6be4f68b9dc47e4a8849ab2019703377ff0d145174e583130a5500c8 |
| MD5 | 13d8d0186c09ee98b6fe48dac29be60e |
| BLAKE2b-256 | 2c7768decdbd8a743a4f62d56d1526612a33b2ac4abed23142ff2877502b8461 |