Energizer
A lightweight PyTorch-like deep learning library for Apple's Neural Engine.
Energizer provides a familiar, PyTorch-style API for building and training neural networks with first-class support for Apple Silicon via the MLX backend. It falls back to NumPy on CPU, making it suitable for prototyping on any platform.
Features
- Autograd — automatic differentiation through a Function graph, with .backward() on any Tensor.
- Dual backend — CPU via NumPy, GPU via Apple MLX. Switch with .to("gpu").
- PyTorch-like API — Module, Parameter, Sequential, Optimizer: familiar patterns, zero friction.
- Full layer library — Linear, Conv1d/2d, Transformer, Embedding, Normalization, Pooling, and more.
- Model serialization — model.save() / Model.load() out of the box.
- Lightweight — pure Python, minimal dependencies (numpy, mlx).
Installation
pip install energizer
For GPU acceleration on Apple Silicon:
pip install "energizer[gpu]"
For development:
pip install "energizer[dev]"
Requirements: Python 3.10, 3.11, or 3.12 (3.12 is the upper bound, imposed by coremltools support)
Quickstart
import energizer
# Build a model
model = energizer.Sequential(
    energizer.Linear(784, 256),
    energizer.ReLU(),
    energizer.Dropout(p=0.3),
    energizer.Linear(256, 10),
)
# Move to Apple Neural Engine
model.to("gpu")
# Forward pass
x = energizer.Tensor.randn(32, 784, device="gpu")
output = model(x)
# Loss + backward
loss_fn = energizer.CrossEntropyLoss()
target = energizer.Tensor.zeros((32,), device="gpu")
loss = loss_fn(output, target)
loss.backward()
# Optimizer step
optimizer = energizer.Adam(model.parameters(), lr=1e-3)
optimizer.step()
optimizer.zero_grad()
API Reference
Tensor
The core data structure. Wraps NumPy arrays on CPU and MLX arrays on GPU.
t = energizer.Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
# Creation helpers
energizer.Tensor.randn(3, 4)
energizer.Tensor.zeros((3, 4))
energizer.Tensor.ones((3, 4))
# Device transfer
t.to("gpu") # → Apple Neural Engine (MLX)
t.to("cpu") # → NumPy
# Supported operators
t + t | t - t | t * t | t / t
t @ t | t ** 2 | -t
t.sum() | t.mean() | t.T
t.reshape((4, 2)) | t.view((4, 2))
t.transpose(0, 1)
# Autograd
loss = (model(x) - target).mean()
loss.backward()
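A minimal end-to-end sketch of the autograd flow. Note that reading a .grad attribute after .backward() follows the PyTorch convention and is an assumption here, not something the reference above documents.

import energizer

# Scalar loss built from a differentiable expression
w = energizer.Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
loss = (w * w).sum()     # elementwise square, then reduce to a scalar
loss.backward()

print(w.grad)            # expected 2 * w, i.e. [[2., 4.], [6., 8.]], under the assumed .grad convention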
Module
Base class for all layers. Subclass it to define custom layers.
class MyLayer(energizer.Module):
    def __init__(self):
        super().__init__()
        self.w = energizer.Parameter(energizer.Tensor.randn(4, 4))

    def forward(self, x):
        return x @ self.w
model.parameters() # list of trainable Parameters
model.to("gpu") # move all parameters to device
model.train() / .eval() # toggle training mode (affects Dropout, BatchNorm)
model.save("model.npz") # serialize to disk
model.load("model.npz") # restore from disk
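A hedged save/load round-trip sketch. The feature list writes Model.load() while the snippet above uses an instance method; this sketch assumes the instance form, restoring weights into a model built with a matching architecture.

import energizer

model = energizer.Sequential(energizer.Linear(4, 8), energizer.ReLU(), energizer.Linear(8, 1))
model.save("model.npz")      # serialize parameters to disk

# Rebuild the same architecture, then restore the saved weights into it
restored = energizer.Sequential(energizer.Linear(4, 8), energizer.ReLU(), energizer.Linear(8, 1))
restored.load("model.npz")   # assumed instance-method form
restored.eval()              # disable training-mode behavior (e.g. Dropout)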
Layers
Linear
energizer.Linear(in_features=128, out_features=64, bias=True)
Convolutional
energizer.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0)
energizer.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0)
energizer.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0)
Activation Functions
energizer.ReLU()
energizer.LeakyReLU(negative_slope=0.01)
energizer.Sigmoid()
energizer.GELU()
Normalization
energizer.BatchNorm1d(num_features)
energizer.BatchNorm2d(num_features)
energizer.LayerNorm(normalized_shape)
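A small sketch of how train()/eval() interacts with normalization layers. That eval mode switches BatchNorm to running statistics follows the PyTorch convention and is an assumption here.

import energizer

net = energizer.Sequential(
    energizer.Linear(16, 32),
    energizer.BatchNorm1d(32),   # normalizes over the batch dimension
    energizer.ReLU(),
)

x = energizer.Tensor.randn(8, 16)
net.train()
y_train = net(x)   # training mode: batch statistics
net.eval()
y_eval = net(x)    # eval mode: running statistics (assumed PyTorch-style behavior)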
Pooling
energizer.MaxPool2d(kernel_size, stride=None, padding=0)
energizer.AvgPool2d(kernel_size, stride=None, padding=0)
Regularization
energizer.Dropout(p=0.5)
Shape Manipulation
energizer.Flatten(start_dim=1, end_dim=-1)
energizer.Reshape(shape)
energizer.Trim(start, end)
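The shape layers compose with the convolution and pooling layers above. A minimal CNN sketch; the shape comments assume PyTorch-style conventions (no implicit padding, and stride=None meaning stride=kernel_size for pooling):

import energizer

# 28x28 single-channel input, e.g. MNIST-sized images
cnn = energizer.Sequential(
    energizer.Conv2d(1, 8, kernel_size=3),   # (N, 1, 28, 28) -> (N, 8, 26, 26)
    energizer.ReLU(),
    energizer.MaxPool2d(2),                  # (N, 8, 26, 26) -> (N, 8, 13, 13)
    energizer.Flatten(),                     # (N, 8 * 13 * 13) = (N, 1352)
    energizer.Linear(8 * 13 * 13, 10),
)

x = energizer.Tensor.randn(4, 1, 28, 28)
logits = cnn(x)   # (4, 10)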
Containers
energizer.Sequential(*layers) # forward through layers in order
energizer.ModuleList([layer1, layer2]) # list of modules, no auto-forward
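Because ModuleList does not define a forward pass, a custom Module iterates it explicitly. This sketch assumes ModuleList is iterable, as in PyTorch; that is not confirmed above.

class Stack(energizer.Module):
    def __init__(self):
        super().__init__()
        # ModuleList registers the sub-modules' parameters but has no auto-forward
        self.blocks = energizer.ModuleList([
            energizer.Linear(8, 8),
            energizer.Linear(8, 8),
        ])

    def forward(self, x):
        for block in self.blocks:   # iterate manually
            x = block(x)
        return x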
Residual Blocks
energizer.ResidualBlock(channels)
energizer.BottleneckBlock(in_channels, out_channels)
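A hedged composition sketch. The internal structure of these blocks, and whether they preserve spatial size, is not documented here, so the shape comments are assumptions.

backbone = energizer.Sequential(
    energizer.Conv2d(3, 16, kernel_size=3, padding=1),  # (N, 3, H, W) -> (N, 16, H, W)
    energizer.ResidualBlock(16),                         # channel count assumed preserved
    energizer.ResidualBlock(16),
    energizer.BottleneckBlock(16, 32),                   # channels 16 -> 32 (spatial size assumed unchanged)
)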
Transformer
energizer.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1)
energizer.TransformerEncoder(encoder_layer, num_layers)
Embedding
energizer.Embedding(num_embeddings, embedding_dim)
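A sketch wiring Embedding into a TransformerEncoder. Whether inputs are batch-first and how integer token ids should be represented are not specified above; both are assumptions here.

vocab_size, d_model = 1000, 64
embed = energizer.Embedding(vocab_size, d_model)
layer = energizer.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=128)
encoder = energizer.TransformerEncoder(layer, num_layers=2)

tokens = energizer.Tensor.zeros((2, 16))   # (batch, seq) of token ids; integer dtype handling is assumed
h = encoder(embed(tokens))                 # (2, 16, 64) under the batch-first assumption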
AutoEncoder
energizer.AutoEncoder(device="cpu") # pre-configured convolutional autoencoder
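A usage sketch under a loud assumption: the preset architecture's expected input shape is not documented, so the single-channel 28x28 input below is a guess.

ae = energizer.AutoEncoder(device="cpu")
x = energizer.Tensor.randn(1, 1, 28, 28)   # input shape is an assumption
recon = ae(x)                              # reconstruction, assumed to match the input shape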
Loss Functions
energizer.MSELoss(reduction="mean")
energizer.CrossEntropyLoss(reduction="mean")
Optimizers
SGD
energizer.SGD(
    model.parameters(),
    lr=0.01,
    momentum=0.9,
    weight_decay=1e-4,
    nesterov=False,
)
Adam
energizer.Adam(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0,
    amsgrad=False,
)
Functional API
from energizer import functionnal as F
F.max(tensor, floor=0.0) # element-wise max with a floor
F.as_strided(tensor, shape, strides) # strided view of a tensor
F.trace(tensor) # trace of a 2D matrix
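A quick sketch of the two simpler helpers, using the signatures listed above (return values are assumed to be Tensors):

import energizer
from energizer import functionnal as F

t = energizer.Tensor([[1.0, -2.0], [3.0, 4.0]])
clipped = F.max(t, floor=0.0)   # element-wise max against 0.0 -> [[1.0, 0.0], [3.0, 4.0]]
tr = F.trace(t)                 # sum of the diagonal: 1.0 + 4.0 = 5.0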
Training Loop Example
import energizer
model = energizer.Sequential(energizer.Linear(4, 8), energizer.ReLU(), energizer.Linear(8, 1))
optimizer = energizer.Adam(model.parameters(), lr=1e-3)
loss_fn = energizer.MSELoss()
model.train()
for epoch in range(100):
    optimizer.zero_grad()
    x = energizer.Tensor.randn(16, 4)
    target = energizer.Tensor.zeros((16, 1))
    output = model(x)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()

    if epoch % 10 == 0:
        print(f"Epoch {epoch:3d} | Loss: {loss.item():.4f}")
Roadmap
Layers
- Softmax activation
- Huber Loss
Infrastructure
- GPU autograd pass (MLX-native backward)
- Mixed precision training
- DataLoader / Dataset abstractions
Contributing
Pull requests are welcome. Please make sure your code is formatted with Black before submitting — the CI will enforce it:
black energizer/ tests/ src/
License
MIT — see LICENSE.
Author
Florian GRIMA — florian.grima@epitech.eu
GitHub · PyPI · Issues
File details
Details for the file energizer-0.1.7.tar.gz.
File metadata
- Download URL: energizer-0.1.7.tar.gz
- Upload date:
- Size: 57.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c06d99a90a11dce69a0f58aaf9e415f2f03c4694e0e844f2e65f3c066acaa7b5 |
| MD5 | cff2ff01fe9d456c1c59ba487ee080be |
| BLAKE2b-256 | 6b654e2f0dfa80168c744daab39fc1dc2131e141ae1a477e25beb7c7fc67e9bc |
Provenance
The following attestation bundles were made for energizer-0.1.7.tar.gz:
Publisher: publish.yml on energizer-ml/energizer

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: energizer-0.1.7.tar.gz
- Subject digest: c06d99a90a11dce69a0f58aaf9e415f2f03c4694e0e844f2e65f3c066acaa7b5
- Sigstore transparency entry: 1199743306
- Sigstore integration time:
- Permalink: energizer-ml/energizer@9af668706e02d216feb48bceec48485e252835ca
- Branch / Tag: refs/tags/v0.1.7
- Owner: https://github.com/energizer-ml
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@9af668706e02d216feb48bceec48485e252835ca
- Trigger Event: release
File details
Details for the file energizer-0.1.7-py3-none-any.whl.
File metadata
- Download URL: energizer-0.1.7-py3-none-any.whl
- Upload date:
- Size: 59.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9dc102e48f5aa0c783e97b1b304a107aa9b45cd362b1e26490f7cb10a67d4793 |
| MD5 | 0f8407b9d07cb58aea39989e15122f7a |
| BLAKE2b-256 | f03c617334812b427036dd02ca19008a78cae604b66e97dda6c076b1fb2b12d9 |
Provenance
The following attestation bundles were made for energizer-0.1.7-py3-none-any.whl:
Publisher: publish.yml on energizer-ml/energizer

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: energizer-0.1.7-py3-none-any.whl
- Subject digest: 9dc102e48f5aa0c783e97b1b304a107aa9b45cd362b1e26490f7cb10a67d4793
- Sigstore transparency entry: 1199743311
- Sigstore integration time:
- Permalink: energizer-ml/energizer@9af668706e02d216feb48bceec48485e252835ca
- Branch / Tag: refs/tags/v0.1.7
- Owner: https://github.com/energizer-ml
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@9af668706e02d216feb48bceec48485e252835ca
- Trigger Event: release