gradzero
A lightweight deep learning framework built from scratch with automatic differentiation.
Features
- Automatic Differentiation: Built-in autograd engine for computing gradients
- Tensor Operations: NumPy-based tensor with gradient tracking
- Neural Network Layers: Linear, ReLU, Sigmoid, and more
- Optimizers: SGD and Adam optimizers with momentum and weight decay support
- Easy to Use: Simple API similar to PyTorch
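To illustrate what an autograd engine does, here is a toy reverse-mode differentiation sketch for scalars. This is an independent example of the general technique, not gradzero's actual implementation; all names here are hypothetical.

```python
# Toy reverse-mode autodiff: each operation records its inputs and the local
# derivative w.r.t. each input; backward() replays the graph in reverse.
class Scalar:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value          # forward value
        self.grad = 0.0             # accumulated d(output)/d(self)
        self._parents = parents     # nodes this one was computed from
        self._grad_fns = grad_fns   # local derivative w.r.t. each parent

    def __mul__(self, other):
        return Scalar(self.value * other.value,
                      parents=(self, other),
                      grad_fns=(lambda g: g * other.value,   # d(xy)/dx = y
                                lambda g: g * self.value))   # d(xy)/dy = x

    def __add__(self, other):
        return Scalar(self.value + other.value,
                      parents=(self, other),
                      grad_fns=(lambda g: g,                 # d(x+y)/dx = 1
                                lambda g: g))                # d(x+y)/dy = 1

    def backward(self):
        # Seed the output gradient, then propagate to parents. (A full engine
        # visits nodes in reverse topological order; this simple stack walk
        # is enough for the small graph below.)
        self.grad = 1.0
        stack = [self]
        while stack:
            node = stack.pop()
            for parent, fn in zip(node._parents, node._grad_fns):
                parent.grad += fn(node.grad)
                stack.append(parent)

x = Scalar(3.0)
y = Scalar(4.0)
z = x * y + x        # z = x*y + x, so dz/dx = y + 1, dz/dy = x
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```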
Installation
```bash
pip install gradzero
```
Quick Start
Creating Tensors with Factory Methods
```python
import gradzero as gz

# Create tensors using class-method factories
zeros = gz.Tensor.zeros((3, 3))            # Tensor filled with zeros
ones = gz.Tensor.ones((2, 4))              # Tensor filled with ones
randn = gz.Tensor.randn((3, 3))            # Random values from a standard normal distribution
rand = gz.Tensor.rand((2, 2))              # Random values from a uniform distribution
from_np = gz.Tensor.from_numpy(my_array)   # From an existing NumPy array

# With gradient tracking
trainable = gz.Tensor.randn((3, 3), requires_grad=True)
```
Building a Neural Network
```python
import gradzero as gz

# Define a model
model = gz.Sequential(
    gz.Linear(784, 128),
    gz.ReLU(),
    gz.Linear(128, 10),
)

# Loss and optimizer
criterion = gz.CrossEntropyLoss()
optimizer = gz.Adam(model.parameters(), lr=0.001)

# Training loop
for epoch in range(100):
    # Forward pass
    output = model(input_tensor)
    loss = criterion(output, target_tensor)

    # Backward pass
    model.zero_grad()
    loss.backward()

    # Update parameters
    optimizer.step()
```
API Reference
Tensor Factory Methods
- `Tensor.zeros(shape, dtype=None, requires_grad=False)` - Create a tensor filled with zeros
- `Tensor.ones(shape, dtype=None, requires_grad=False)` - Create a tensor filled with ones
- `Tensor.randn(shape, dtype=None, requires_grad=False)` - Create a tensor with random values from the standard normal distribution
- `Tensor.rand(shape, dtype=None, requires_grad=False)` - Create a tensor with random values from the uniform distribution [0, 1)
- `Tensor.from_numpy(array, requires_grad=False)` - Create a tensor from a NumPy array
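Since the tensor is NumPy-based (per the Features list), each factory plausibly wraps the corresponding NumPy constructor. The mapping below is an assumption used to show which distribution each method draws from:

```python
import numpy as np

# NumPy analogues of the factory methods (a sketch, not gradzero internals)
zeros_data = np.zeros((3, 3))       # Tensor.zeros: all zeros
ones_data = np.ones((2, 4))         # Tensor.ones: all ones
randn_data = np.random.randn(3, 3)  # Tensor.randn: standard normal N(0, 1)
rand_data = np.random.rand(2, 2)    # Tensor.rand: uniform over [0, 1)
```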
Neural Network Layers
- `Linear(in_features, out_features, bias=True)` - Fully connected layer
- `ReLU()` - ReLU activation
- `Sigmoid()` - Sigmoid activation
- `Sequential(*layers)` - Container for sequential layers
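The computation each of these layers performs can be sketched in plain NumPy (shapes and weight layout here are illustrative assumptions, not gradzero's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 784))      # a batch of 32 flattened inputs

# Linear(784, 128): affine map y = x @ W + b
W = rng.standard_normal((784, 128)) * 0.01
b = np.zeros(128)
y = x @ W + b                           # shape (32, 128)

# ReLU(): elementwise max(0, y)
relu_out = np.maximum(0.0, y)

# Sigmoid(): elementwise 1 / (1 + exp(-y)), squashes into (0, 1)
sigmoid_out = 1.0 / (1.0 + np.exp(-y))
```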
Loss Functions
- `MSELoss()` - Mean squared error
- `CrossEntropyLoss()` - Cross-entropy loss for classification
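The standard formulas behind these two losses, written out in NumPy (gradzero's exact reduction and normalization conventions are assumptions here):

```python
import numpy as np

# MSELoss: mean of squared differences between prediction and target
pred = np.array([[2.0, 1.0], [0.5, 1.5]])
target = np.array([[1.5, 1.0], [1.0, 1.0]])
mse = np.mean((pred - target) ** 2)  # 0.1875

# CrossEntropyLoss: softmax over raw logits, then mean negative log-likelihood
logits = np.array([[2.0, 0.5, 0.1], [0.2, 3.0, 0.3]])
labels = np.array([0, 1])            # correct class index per row
shifted = logits - logits.max(axis=1, keepdims=True)   # for numerical stability
log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
ce = -log_probs[np.arange(len(labels)), labels].mean()
```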
Optimizers
- `SGD(parameters, lr=0.01, momentum=0.0, weight_decay=0.0)` - Stochastic gradient descent
- `Adam(parameters, lr=0.001, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0)` - Adam optimizer
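The update rules behind these optimizers, sketched for a single parameter array. This is the textbook formulation with hyperparameter names mirroring the signatures above, not gradzero's actual code:

```python
import numpy as np

def sgd_step(param, grad, velocity, lr=0.01, momentum=0.0, weight_decay=0.0):
    grad = grad + weight_decay * param        # L2 penalty folded into the gradient
    velocity[:] = momentum * velocity + grad  # momentum accumulator
    param -= lr * velocity
    return param

def adam_step(param, grad, m, v, t, lr=0.001, betas=(0.9, 0.999), eps=1e-8):
    b1, b2 = betas
    m[:] = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v[:] = b2 * v + (1 - b2) * grad ** 2      # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)                 # bias correction for step t >= 1
    v_hat = v / (1 - b2 ** t)
    param -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return param

w = np.ones(3)
g = np.array([0.1, -0.2, 0.3])
w = sgd_step(w, g, velocity=np.zeros(3))      # w -= 0.01 * g
```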
License
MIT License
Download files
Download the file for your platform.
Source Distribution
gradzero-0.1.0.tar.gz (7.7 kB)
Built Distribution
gradzero-0.1.0-py3-none-any.whl (8.5 kB)
File details
Details for the file gradzero-0.1.0.tar.gz.
File metadata
- Download URL: gradzero-0.1.0.tar.gz
- Upload date:
- Size: 7.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.3.0 CPython/3.12.12 Darwin/25.2.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `ff7450cbe243495b24e5500529868b5217ae3a445ec9fa9f9720e430081ad575` |
| MD5 | `75c7fe626cb463d7860aa97304d67713` |
| BLAKE2b-256 | `6102e46f0f932b71a1a69271273435e4d21bb77877ee4f2de9c58339a51286ff` |
File details
Details for the file gradzero-0.1.0-py3-none-any.whl.
File metadata
- Download URL: gradzero-0.1.0-py3-none-any.whl
- Upload date:
- Size: 8.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.3.0 CPython/3.12.12 Darwin/25.2.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `5c72106660b4833eb76f0bb6a13ef353380a9934b17384e5593e8b2fb2b62eb5` |
| MD5 | `26a90b693470b3ac4daaee786bb08f79` |
| BLAKE2b-256 | `fd138fde1298b41557c0fea8d7763487e6204c87030b2056f44c1bc94ff70d37` |