
Energizer

A lightweight PyTorch-like deep learning library for Apple's Neural Engine.


Energizer provides a familiar, PyTorch-style API for building and training neural networks with first-class support for Apple Silicon via the MLX backend. It falls back to NumPy on CPU, making it suitable for prototyping on any platform.


Features

  • Autograd — Automatic differentiation through a Function graph, with .backward() on any Tensor.
  • Dual backend — CPU via NumPy, GPU via Apple MLX. Switch with .to("gpu").
  • PyTorch-like API — Module, Parameter, Sequential, Optimizer — familiar patterns, zero friction.
  • Full layer library — Linear, Conv1d/2d, Transformer, Embedding, Normalization, Pooling, and more.
  • Model serialization — model.save() / Model.load() out of the box.
  • Lightweight — Pure Python, minimal dependencies (numpy, mlx).

Installation

pip install energizer

For GPU acceleration on Apple Silicon:

pip install "energizer[gpu]"

For development:

pip install "energizer[dev]"

Requirements: Python 3.10, 3.11, or 3.12 (3.12 is the maximum supported version, required for coremltools compatibility)


Quickstart

import energizer

# Build a model
model = energizer.Sequential(
    energizer.Linear(784, 256),
    energizer.ReLU(),
    energizer.Dropout(p=0.3),
    energizer.Linear(256, 10),
)

# Move to Apple Neural Engine
model.to("gpu")

# Forward pass
x = energizer.Tensor.randn(32, 784, device="gpu")
output = model(x)

# Loss + backward
loss_fn = energizer.CrossEntropyLoss()
target  = energizer.Tensor.zeros((32,), device="gpu")
loss    = loss_fn(output, target)
loss.backward()

# Optimizer step
optimizer = energizer.Adam(model.parameters(), lr=1e-3)
optimizer.step()
optimizer.zero_grad()

API Reference

Tensor

The core data structure. Wraps NumPy arrays on CPU and MLX arrays on GPU.

t = energizer.Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)

# Creation helpers
energizer.Tensor.randn(3, 4)
energizer.Tensor.zeros((3, 4))
energizer.Tensor.ones((3, 4))

# Device transfer
t.to("gpu")   # → Apple Neural Engine (MLX)
t.to("cpu")   # → NumPy

# Supported operators
t + t  |  t - t  |  t * t  |  t / t
t @ t  |  t ** 2 |  -t
t.sum()  |  t.mean()  |  t.T
t.reshape((4, 2))  |  t.view((4, 2))
t.transpose(0, 1)

# Autograd
loss = (model(x) - target).mean()
loss.backward()
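For intuition, here is a minimal, self-contained sketch of reverse-mode autodiff through a function graph (illustrative only, not Energizer's actual implementation):

```python
class Node:
    """Minimal tensor-like node that records how it was produced."""
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value        # forward value
        self.parents = parents    # nodes this one was computed from
        self.grad_fns = grad_fns  # local gradient w.r.t. each parent
        self.grad = 0.0

    def __mul__(self, other):
        return Node(self.value * other.value,
                    parents=(self, other),
                    grad_fns=(lambda g: g * other.value,   # d(xy)/dx = y
                              lambda g: g * self.value))   # d(xy)/dy = x

    def __add__(self, other):
        return Node(self.value + other.value,
                    parents=(self, other),
                    grad_fns=(lambda g: g, lambda g: g))

    def backward(self, grad=1.0):
        """Accumulate gradients by walking the graph in reverse."""
        self.grad += grad
        for parent, fn in zip(self.parents, self.grad_fns):
            parent.backward(fn(grad))

x = Node(3.0)
y = Node(4.0)
z = x * y + x          # z = x*y + x, so dz/dx = y + 1, dz/dy = x
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

Calling `.backward()` on a Tensor triggers the same kind of reverse traversal over the recorded `Function` graph.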

Module

Base class for all layers. Subclass it to define custom layers.

class MyLayer(energizer.Module):
    def __init__(self):
        super().__init__()
        self.w = energizer.Parameter(energizer.Tensor.randn(4, 4))

    def forward(self, x):
        return x @ self.w

model.parameters()        # list of trainable Parameters
model.to("gpu")           # move all parameters to device
model.train() / .eval()   # toggle training mode (affects Dropout, BatchNorm)
model.save("model.npz")   # serialize to disk
model.load("model.npz")   # restore from disk
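The reason `parameters()` finds `self.w` automatically is attribute interception. A rough sketch of the PyTorch-style mechanism (assumed, not Energizer's exact code):

```python
import numpy as np

class Parameter:
    def __init__(self, data):
        self.data = data
        self.grad = None

class Module:
    """Auto-register Parameters and sub-Modules on attribute assignment."""
    def __init__(self):
        object.__setattr__(self, "_params", {})
        object.__setattr__(self, "_modules", {})

    def __setattr__(self, name, value):
        if isinstance(value, Parameter):
            self._params[name] = value
        elif isinstance(value, Module):
            self._modules[name] = value
        object.__setattr__(self, name, value)

    def parameters(self):
        out = list(self._params.values())
        for m in self._modules.values():   # recurse into children
            out.extend(m.parameters())
        return out

class MyLayer(Module):
    def __init__(self):
        super().__init__()
        self.w = Parameter(np.random.randn(4, 4))

layer = MyLayer()
print(len(layer.parameters()))  # 1
```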

Layers

Linear

energizer.Linear(in_features=128, out_features=64, bias=True)

Convolutional

energizer.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0)
energizer.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0)
energizer.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0)
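The spatial output size of these layers follows the standard convolution formula, floor((n + 2·padding − kernel_size) / stride) + 1 (the common convention; assumed to match Energizer's behavior):

```python
def conv_out_size(n, kernel_size, stride=1, padding=0):
    """Spatial output size: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * padding - kernel_size) // stride + 1

# A 28x28 input through kernel_size=3, stride=1, padding=1 keeps its size:
print(conv_out_size(28, 3, stride=1, padding=1))  # 28
# stride=2 with no padding roughly halves the resolution:
print(conv_out_size(28, 3, stride=2, padding=0))  # 13
```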

Activation Functions

energizer.ReLU()
energizer.LeakyReLU(negative_slope=0.01)
energizer.Sigmoid()
energizer.GELU()

Normalization

energizer.BatchNorm1d(num_features)
energizer.BatchNorm2d(num_features)
energizer.LayerNorm(normalized_shape)
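The key difference between the two families is the axis being normalized. A NumPy illustration (the standard definitions, independent of Energizer's internals):

```python
import numpy as np

x = np.random.randn(32, 64)   # (batch, features)

# BatchNorm1d: normalize each feature across the batch axis
bn = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + 1e-5)

# LayerNorm: normalize each sample across the feature axis
ln = (x - x.mean(axis=1, keepdims=True)) / np.sqrt(x.var(axis=1, keepdims=True) + 1e-5)

print(np.abs(bn.mean(axis=0)).max())  # ~0: every feature is zero-mean
print(np.abs(ln.mean(axis=1)).max())  # ~0: every sample is zero-mean
```

(The real layers also carry learnable scale/shift parameters and, for BatchNorm, running statistics used in eval mode.)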

Pooling

energizer.MaxPool2d(kernel_size, stride=None, padding=0)
energizer.AvgPool2d(kernel_size, stride=None, padding=0)
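Max pooling slides a window over the input and keeps the largest value in each window. A naive reference version over a single (H, W) plane (for illustration only):

```python
import numpy as np

def max_pool2d(x, kernel_size, stride=None):
    """Naive max pooling over a 2D array; stride defaults to kernel_size."""
    stride = stride or kernel_size
    h = (x.shape[0] - kernel_size) // stride + 1
    w = (x.shape[1] - kernel_size) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = x[i * stride:i * stride + kernel_size,
                          j * stride:j * stride + kernel_size].max()
    return out

x = np.arange(16.0).reshape(4, 4)
print(max_pool2d(x, 2))  # [[ 5.  7.]
                         #  [13. 15.]]
```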

Regularization

energizer.Dropout(p=0.5)
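During training, dropout zeroes each activation with probability p and rescales the survivors by 1/(1 − p) so the expected activation is unchanged; in eval mode it is the identity. The standard "inverted dropout" scheme, sketched in NumPy:

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: drop with prob p, rescale survivors by 1/(1-p)."""
    if not training or p == 0.0:
        return x                        # identity at eval time
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p     # keep with probability 1 - p
    return x * mask / (1.0 - p)

x = np.ones(1000)
y = dropout(x, p=0.3)
print(y.mean())  # close to 1.0: the expectation is preserved
```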

Shape Manipulation

energizer.Flatten(start_dim=1, end_dim=-1)
energizer.Reshape(shape)
energizer.Trim(start, end)

Containers

energizer.Sequential(*layers)          # forward through layers in order
energizer.ModuleList([layer1, layer2]) # list of modules, no auto-forward

Residual Blocks

energizer.ResidualBlock(channels)
energizer.BottleneckBlock(in_channels, out_channels)
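Both blocks implement the residual idea y = x + F(x): the block learns a correction to its input rather than a full mapping, which keeps gradients flowing through the identity path. Schematically (with a stand-in for the conv/norm/activation stack):

```python
import numpy as np

def residual_block(x, f):
    """Add the input back onto the transformed output: y = x + F(x)."""
    return x + f(x)

x = np.ones((2, 8))
y = residual_block(x, lambda t: 0.1 * t)  # stand-in for the inner layers
print(y[0, 0])  # 1.1
```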

Transformer

energizer.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1)
energizer.TransformerEncoder(encoder_layer, num_layers)

Embedding

energizer.Embedding(num_embeddings, embedding_dim)
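An embedding layer is a learnable lookup table: integer indices select rows from a (num_embeddings, embedding_dim) weight matrix. In NumPy terms:

```python
import numpy as np

num_embeddings, embedding_dim = 10, 4
table = np.random.randn(num_embeddings, embedding_dim)  # the learned weights

ids = np.array([1, 3, 3, 7])  # token indices
vectors = table[ids]          # embedding lookup is just row indexing
print(vectors.shape)          # (4, 4)
```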

AutoEncoder

energizer.AutoEncoder(device="cpu")   # pre-configured convolutional autoencoder

Loss Functions

energizer.MSELoss(reduction="mean")
energizer.CrossEntropyLoss(reduction="mean")
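Cross-entropy over logits is softmax followed by mean negative log-likelihood of the target class. A numerically stable reference version (the standard formulation, not Energizer's source):

```python
import numpy as np

def cross_entropy(logits, targets):
    """Mean negative log-likelihood over the batch (softmax + NLL, fused)."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

logits = np.array([[2.0, 0.5, 0.1],
                   [0.1, 3.0, 0.2]])
targets = np.array([0, 1])    # class indices, as in the Quickstart
print(cross_entropy(logits, targets))
```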

Optimizers

SGD

energizer.SGD(
    model.parameters(),
    lr=0.01,
    momentum=0.9,
    weight_decay=1e-4,
    nesterov=False,
)
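One SGD-with-momentum step, written out in the common PyTorch-style formulation (assumed to match; weight decay is folded into the gradient as an L2 penalty):

```python
import numpy as np

def sgd_step(param, grad, velocity, lr=0.01, momentum=0.9, weight_decay=0.0):
    """v <- momentum * v + (grad + wd * param); param <- param - lr * v."""
    grad = grad + weight_decay * param
    velocity = momentum * velocity + grad
    param = param - lr * velocity
    return param, velocity

p, v = np.array([1.0]), np.zeros(1)
p, v = sgd_step(p, np.array([0.5]), v)
print(p)  # [0.995]
```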

Adam

energizer.Adam(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0,
    amsgrad=False,
)
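Adam maintains bias-corrected running estimates of the gradient's first and second moments and scales the step by their ratio. The standard update rule, as a sketch:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
    """One Adam update with bias-corrected moment estimates (t starts at 1)."""
    b1, b2 = betas
    m = b1 * m + (1 - b1) * grad          # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2     # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

p, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
p, m, v = adam_step(p, np.array([0.5]), m, v, t=1)
print(p)  # the first step moves by ~lr, regardless of gradient scale
```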

Functional API

from energizer import functionnal as F

F.max(tensor, floor=0.0)                     # element-wise max with a floor
F.as_strided(tensor, shape, strides)         # strided view of a tensor
F.trace(tensor)                              # trace of a 2D matrix
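For intuition, the first two have direct NumPy analogues (semantics inferred from the descriptions above; these are analogues, not Energizer's code):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.arange(6.0)

# F.max(tensor, floor=0.0) clamps from below, like np.maximum:
print(np.maximum(x - 3, 0.0))  # [0. 0. 0. 0. 1. 2.]

# F.as_strided builds overlapping views without copying,
# e.g. four sliding windows of length 3 over x:
windows = as_strided(x, shape=(4, 3), strides=(x.strides[0], x.strides[0]))
print(windows[0], windows[3])  # [0. 1. 2.] [3. 4. 5.]
```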

Training Loop Example

import energizer

model     = energizer.Sequential(energizer.Linear(4, 8), energizer.ReLU(), energizer.Linear(8, 1))
optimizer = energizer.Adam(model.parameters(), lr=1e-3)
loss_fn   = energizer.MSELoss()

model.train()
for epoch in range(100):
    optimizer.zero_grad()

    x      = energizer.Tensor.randn(16, 4)
    target = energizer.Tensor.zeros((16, 1))

    output = model(x)
    loss   = loss_fn(output, target)
    loss.backward()
    optimizer.step()

    if epoch % 10 == 0:
        print(f"Epoch {epoch:3d} | Loss: {loss.item():.4f}")

Roadmap

Layers

  • Softmax activation
  • Huber Loss

Infrastructure

  • GPU autograd pass (MLX-native backward)
  • Mixed precision training
  • DataLoader / Dataset abstractions

Contributing

Pull requests are welcome. Please make sure your code is formatted with Black before submitting — the CI will enforce it:

black energizer/ tests/ src/

License

MIT — see LICENSE.


Author

Florian GRIMA · florian.grima@epitech.eu
GitHub · PyPI · Issues
