
tensor library with OpenCL and Metal support

Project description


froog: a GPU-accelerated tensor library
homepage | documentation | pip

froog is an easy-to-read tensor library (27k pip installs!) with support for GPU acceleration via OpenCL and Apple Metal. Inspired by tinygrad and micrograd.

Installation

pip install froog


Quick Example

Here's how to set up a simple multilayer perceptron for classification on MNIST. It looks pretty similar to PyTorch, right?

from froog.tensor import Tensor
from froog.nn import Linear
import froog.optim as optim

class mnistMLP:
  def __init__(self):
    self.l1 = Tensor(Linear(784, 128)) # layer 1
    self.l2 = Tensor(Linear(128, 10))  # layer 2

  def forward(self, x):
    # forward pass through both layers and softmax for output probabilities
    return x.dot(self.l1).relu().dot(self.l2).logsoftmax() 

model = mnistMLP() # create model
optimizer = optim.SGD([model.l1, model.l2], lr=0.001) # stochastic gradient descent optimizer
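
Training follows the same pattern. Below is a minimal sketch of one training step; the fake batch, the negative one-hot NLL construction, and the assumption that the SGD optimizer exposes a step() method (as in tinygrad-style libraries) are illustrative, not part of froog's documented surface:

import numpy as np
from froog.tensor import Tensor

# fake batch for illustration: 32 flattened 28x28 images and random labels
x = Tensor(np.random.randn(32, 784).astype(np.float32))
labels = np.random.randint(0, 10, size=32)

# negative one-hot targets, so mean(out * y) is a (scaled) negative log-likelihood
y = np.zeros((32, 10), dtype=np.float32)
y[np.arange(32), labels] = -1.0
y = Tensor(y)

out = model.forward(x)    # log-probabilities from the MLP above
loss = out.mul(y).mean()  # NLL loss
loss.backward()           # backpropagation (see AUTOGRAD below)
optimizer.step()          # assumed update method; updates l1 and l2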

GPU Support

Device management is handled transparently: froog automatically selects the best available backend from [METAL, OPENCL, CPU]. To use the GPU:

from froog.tensor import Tensor
from froog import get_device
# Check whether a GPU backend is active
device = get_device()
has_gpu = device is not None and device.name != "CPU"
# Create a tensor
x = Tensor([1, 2, 3])
# Push to GPU if available
if has_gpu: x = x.to_gpu()
# Operations run on GPU automatically
y = x + x
z = y * y
# Bring back to CPU when needed
result = z.to_cpu()
print(result.data)

You can also check what devices are available:

from froog import get_available_devices
available_devices = get_available_devices()
print(f"Available devices: {available_devices}")

Or set a specific device:

from froog import set_device
set_device("METAL")  # or "OPENCL"

EfficientNet in froog!


We have an implementation of EfficientNet v2 built entirely in froog using the official PyTorch weights! Running inference on an image of a pug...

python3 models/efficientnet.py <https://optional_image_url>

***********output*************
inference 4.34 s

imagenet class: 254
prediction    : pug, pug-dog
probability   : 0.9402361
******************************

I recommend checking out the code; it's well documented and pretty cool.

API

MATH

  • .add(y) - Addition with y
  • .sub(y) - Subtraction with y
  • .mul(y) - Multiplication with y
  • .div(y) - Division by y
  • .pow(y) - Power function (raise to power y)
  • .sum() - Sum all elements
  • .mean() - Mean of all elements
  • .sqrt() - Square root
  • .dot(y) - Matrix multiplication with y
  • .matmul(y) - Alias for dot
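
A quick sketch of a few of these in action (froog tensors wrap NumPy arrays, so construction from np.array is assumed here):

from froog.tensor import Tensor
import numpy as np

a = Tensor(np.array([[1., 2.], [3., 4.]], dtype=np.float32))
b = Tensor(np.array([[5., 6.], [7., 8.]], dtype=np.float32))

print(a.add(b).data)        # elementwise sum
print(a.dot(b).data)        # 2x2 matrix product
print(a.mul(a).sum().data)  # sum of squares: 1 + 4 + 9 + 16 = 30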

MACHINE LEARNING

  • .relu() - Rectified Linear Unit activation
  • .sigmoid() - Sigmoid activation
  • .dropout(p=0.5, training=True) - Dropout regularization
  • .logsoftmax() - Log softmax function
  • .swish() - Swish activation function (x * sigmoid(x))
  • .conv2d(w, stride=1, groups=1) - 2D convolution
  • .im2col2dconv(w) - Image to column for convolution
  • .max_pool2d(kernel_size=(2,2)) - 2D max pooling
  • .avg_pool2d(kernel_size=(2,2)) - 2D average pooling
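
And a few of these chained together — a sketch assuming NCHW layout (batch, channels, height, width) and no implicit padding:

import numpy as np
from froog.tensor import Tensor

x = Tensor(np.random.randn(1, 3, 32, 32).astype(np.float32))  # one 3-channel 32x32 image
w = Tensor(np.random.randn(8, 3, 3, 3).astype(np.float32))    # 8 filters of size 3x3
out = x.conv2d(w).relu().max_pool2d(kernel_size=(2, 2))
print(out.shape)  # expect (1, 8, 15, 15): 30x30 conv output, pooled 2x2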

TENSOR

  • Tensor.zeros(*shape) - Create tensor of zeros
  • Tensor.ones(*shape) - Create tensor of ones
  • Tensor.randn(*shape) - Create tensor with random normal values
  • Tensor.eye(dim) - Create identity matrix
  • Tensor.arange(start, stop=None, step=1) - Create tensor with evenly spaced values

TENSOR PROPERTIES

  • .shape - The shape of the tensor as a tuple
  • .size - Total number of elements in the tensor
  • .ndim - Number of dimensions (rank) of the tensor
  • .transpose - Transpose of the tensor
  • .dtype - Data type of the tensor
  • .is_gpu - Whether tensor is on GPU
  • .grad - Gradient of tensor with respect to some scalar value
  • .data - Underlying NumPy array (or GPU buffer)
  • .to_float() - Converts tensor to float32 data type
  • .to_int() - Converts tensor to int32 data type
  • .to_bool() - Converts tensor to boolean data type
  • .reshape(*shape) - Change tensor shape
  • .view(*shape) - Alternative to reshape
  • .pad2d(padding=None) - Pad 2D tensors
  • .flatten() - Returns a flattened 1D copy of the tensor
  • .unsqueeze(dim) - Add dimension of size 1 at specified position
  • .squeeze(dim=None) - Remove dimensions of size 1
  • .detach() - Returns a tensor detached from computation graph
  • .assign(x) - Assign values from tensor x to this tensor
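
Putting a few of these together (a sketch):

from froog.tensor import Tensor

t = Tensor.randn(2, 3, 4)
print(t.shape, t.ndim, t.size)   # (2, 3, 4) 3 24
flat = t.reshape(6, 4).flatten() # collapse to 1D, 24 elements
batched = t.unsqueeze(0)         # add a leading batch dim -> (1, 2, 3, 4)
print(batched.squeeze(0).shape)  # squeeze it back out -> (2, 3, 4)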

GPU

  • .to_cpu() - Moves tensor to CPU
  • .to_gpu() - Moves tensor to GPU
  • .gpu_() - In-place GPU conversion (modifies tensor)

AUTOGRAD

  • .backward(allow_fill=True) - Performs backpropagation
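
For example, gradients of a scalar reduction — a minimal sketch; since sum() produces a scalar, backward() can fill the output gradient itself:

import numpy as np
from froog.tensor import Tensor

x = Tensor(np.array([[1., 2., 3.]], dtype=np.float32))
y = x.mul(x).sum()  # y = sum(x^2), a scalar
y.backward()        # backpropagate through mul and sum
print(x.grad)       # gradient dy/dx = 2*x, i.e. [2., 4., 6.]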



Download files

Download the file for your platform.

Source Distribution

froog-0.5.2.tar.gz (18.8 kB)

Uploaded Source

Built Distribution


froog-0.5.2-py3-none-any.whl (17.9 kB)

Uploaded Python 3

File details

Details for the file froog-0.5.2.tar.gz.

File metadata

  • Download URL: froog-0.5.2.tar.gz
  • Upload date:
  • Size: 18.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.3

File hashes

Hashes for froog-0.5.2.tar.gz
  • SHA256: 58d10755bc087cf57f14082e5fff2bb4c7e7e9076d4397ff5dbbbde5aba1dfe2
  • MD5: 9664a3c3062cea5226d3049a229bc991
  • BLAKE2b-256: 99fc4c8b7a2f903411aecbbf00e0722e86e4adafdf4f8024a537dbd2b77fa348


File details

Details for the file froog-0.5.2-py3-none-any.whl.

File metadata

  • Download URL: froog-0.5.2-py3-none-any.whl
  • Upload date:
  • Size: 17.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.3

File hashes

Hashes for froog-0.5.2-py3-none-any.whl
  • SHA256: 96aca82cfae9010082e2fb1eb6cd47625f911e8aac5f7c776b340a6a52b76176
  • MD5: bba2514e82726b1c3139a8b82a8bae4c
  • BLAKE2b-256: bdc5c8c63c5b34e5273a615e22e5433254050eb975c747a9ae856602433acb81

