Noni — a tiny tensor library with autograd, for humans.


Noni (WIP)

A minimal tensor library with autograd, flexible enough to build good-enough deep learning models.

Familiar API

```python
from noni import Tensor

a = Tensor([[1., 2.], [3., 4.]], requires_grad=True)
b = Tensor([[0.5, -1.], [2., 0.]], requires_grad=True)

# Each op records its backward function
c = a * b        # op="*",  backward: dc/da = b, dc/db = a
d = c.sum()      # op="sum", backward: ones

d.backward()     # topological sort → apply each _backward in reverse

print(a.grad)    # dL/da = b.data = [[0.5, -1.], [2., 0.]]
print(b.grad)    # dL/db = a.data = [[1., 2.], [3., 4.]]
```
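The comments above are essentially the whole algorithm: each op stores a `_backward` closure, and `backward()` topologically sorts the graph and runs the closures in reverse. Here is a self-contained sketch of that mechanism, scalar-valued for brevity; the names (`Scalar`, `_parents`) are illustrative, not Noni's actual internals.

```python
class Scalar:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None   # set by the op that creates this node

    def __mul__(self, other):
        out = Scalar(self.data * other.data, (self, other))
        def _backward():
            # d(out)/d(self) = other, d(out)/d(other) = self
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def __add__(self, other):
        out = Scalar(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topological sort, then apply each node's _backward in reverse.
        order, seen = [], set()
        def visit(node):
            if node not in seen:
                seen.add(node)
                for p in node._parents:
                    visit(p)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            node._backward()

a = Scalar(3.0)
b = Scalar(2.0)
loss = a * b + a       # dL/da = b + 1 = 3, dL/db = a = 3
loss.backward()
print(a.grad, b.grad)  # 3.0 3.0
```

Accumulating with `+=` (rather than assigning) is what makes gradients correct when a node feeds into several ops, as `a` does here.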

Common Modules for everything

```python
from noni import Tensor
from noni.nn import Linear, LayerNorm, MultiHeadAttention, CrossEntropyLoss

# A simple 2-layer MLP
W1 = Linear(784, 256)
W2 = Linear(256, 10)

x = Tensor(some_batch)
h = W1(x).relu()
logits = W2(h)

loss = CrossEntropyLoss()(logits, targets)
loss.backward()   # gradients land in W1.weight.grad, W2.weight.grad, etc.
```
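To make the shapes concrete, here is what the same forward pass computes in plain NumPy. The weight layout (`y = x @ W.T + b`) and the Kaiming-style scaling are assumptions for illustration, not necessarily Noni's exact conventions.

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((32, 784))          # a batch of 32 flattened inputs

# Linear(784, 256): weight (256, 784), bias (256,), Kaiming-style scale
W1 = rng.standard_normal((256, 784)) * np.sqrt(2.0 / 784)
b1 = np.zeros(256)
# Linear(256, 10)
W2 = rng.standard_normal((10, 256)) * np.sqrt(2.0 / 256)
b2 = np.zeros(10)

h = np.maximum(batch @ W1.T + b1, 0.0)          # ReLU
logits = h @ W2.T + b2
print(logits.shape)  # (32, 10)
```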

Build your own

Noni has three built-in backends:

| Backend | Device tag | Notes |
| --- | --- | --- |
| NumPy | `cpu` | Always available, pure Python/NumPy |
| OpenCL | `opencl` | Cross-platform GPU (NVIDIA, AMD, Intel) |
| MLX | `mlx` | Apple Silicon GPU via Metal; recommended for M-series Macs |

Move tensors and modules to any backend with .to():

```python
from noni import Tensor
from noni.nn import Linear

# Apple Silicon: runs matmul through Metal Performance Shaders
lin = Linear(512, 256)
lin.to("mlx")
x = Tensor(data, device="mlx")
y = lin(x)
```

Work is also underway on native CUDA support, as well as Vulkan compute and Triton. You can always implement and register your own backend if you prefer:

```python
from noni.backends import Backend, register_backend


class MyDevice(Backend):
    ...


register_backend("mygpu", MyDevice())
```
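For intuition, this kind of registry can be sketched as a plain dict keyed by device tag. Everything below (`get_backend`, the `matmul` method, the `PurePython` class) is hypothetical and only illustrates the pattern, not Noni's real `Backend` interface.

```python
class Backend:
    def matmul(self, a, b):
        raise NotImplementedError

_BACKENDS = {}

def register_backend(tag, backend):
    # Later tensors created with device=tag dispatch their ops here.
    _BACKENDS[tag] = backend

def get_backend(tag):
    try:
        return _BACKENDS[tag]
    except KeyError:
        raise ValueError(f"unknown device tag: {tag!r}") from None

class PurePython(Backend):
    def matmul(self, a, b):
        # Naive O(n^3) matmul over nested lists.
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                for row in a]

register_backend("cpu", PurePython())
print(get_backend("cpu").matmul([[1, 2]], [[3], [4]]))  # [[11]]
```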

| Module | Description |
| --- | --- |
| `Linear` | Fully connected layer with weight + bias parameters, initialized using Kaiming initialization |
| `Embedding` | Lookup table for token embeddings with a scatter-add backward pass |
| `LayerNorm` | Normalizes across the last N dimensions with learned affine parameters |
| `Dropout` | Inverted dropout applied during training for regularization |
| `MultiHeadAttention` | Multi-head self-attention with an optional causal mask for autoregressive models |
| `FeedForward` | Position-wise feedforward network using GELU activation |
| `TransformerBlock` | Pre-norm residual block combining MultiHeadAttention and FeedForward |
| `CrossEntropyLoss` | Numerically stable implementation using log-softmax + negative log likelihood |
| Optimizers | SGD, Adam, AdamW, and a CosineAnnealingLR scheduler |
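A note on the `CrossEntropyLoss` entry: "numerically stable" means computing log-softmax with a max-shift so large logits never overflow `exp`, then taking the negative log likelihood of the target class. A standalone sketch of that computation (illustrative, not Noni's code):

```python
import math

def log_softmax(logits):
    m = max(logits)   # shift by the max so every exponent is <= 0
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_sum for x in logits]

def cross_entropy(logits, target):
    # Negative log likelihood of the target class.
    return -log_softmax(logits)[target]

loss = cross_entropy([2.0, 1.0, 0.1], target=0)
print(loss)  # ≈ 0.417
```

Without the max-shift, `math.exp(1000.0)` would overflow; with it, `log_softmax([1000.0, 0.0])` stays finite.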

Building wheels

```shell
python -m build
twine upload dist/*
```
