# forgeNN

A from-scratch neural network framework built for educational purposes.
## Table of Contents
- Installation
- Overview
- Performance vs PyTorch
- Quick Start
- Architecture
- Performance
- Complete Example
- TODO List
- Contributing
- Acknowledgments
## Installation

```bash
pip install forgeNN
```
## Overview

forgeNN is a modern neural network framework developed by a solo developer learning about ML. It features NumPy-vectorized operations for fast training.
### Key Features
- Vectorized Operations: NumPy-powered batch processing (100x+ speedup)
- Dynamic Computation Graphs: Automatic differentiation with gradient tracking
- Complete Neural Networks: From simple neurons to complex architectures
- Production Loss Functions: Cross-entropy, MSE with numerical stability
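"Numerical stability" for cross-entropy usually means the log-sum-exp trick: shifting logits by their row maximum before exponentiating so large values cannot overflow. A minimal NumPy sketch of that pattern (an illustration of the technique, not forgeNN's actual implementation):

```python
import numpy as np

def stable_cross_entropy(logits, labels):
    """Mean cross-entropy over a batch, computed via the log-sum-exp trick."""
    # Shift each row by its max so np.exp never sees huge positive values
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Pick out the log-probability of each sample's true class
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[1000.0, 0.0], [0.0, 1000.0]])  # naive exp() would overflow
labels = np.array([0, 1])
print(stable_cross_entropy(logits, labels))  # ~0.0, no overflow warning
```

A naive `-np.log(np.exp(logits) / np.exp(logits).sum())` produces `inf`/`nan` on the same input, which is why frameworks fuse softmax and cross-entropy this way.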
## Performance vs PyTorch

On small models (under 109k parameters), forgeNN trains up to 3.52x faster than PyTorch.
| Metric | PyTorch | forgeNN | Advantage |
|---|---|---|---|
| Training Time (MNIST) | 64.72s | 30.84s | 2.10x faster |
| Test Accuracy | 97.30% | 97.37% | +0.07% better |
| Small Models (<109k params) | Baseline | 3.52x faster | Massive speedup |
📊 See Full Comparison Guide for detailed benchmarks, syntax differences, and when to use each framework.
## Quick Start

### High-Performance Training
```python
import forgeNN
from sklearn.datasets import make_classification

# Generate a toy dataset (n_informative raised so 3 classes satisfy
# sklearn's n_classes * n_clusters_per_class <= 2**n_informative constraint)
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=4)

# Create vectorized model: 20 inputs -> [64, 32] hidden -> 3 outputs
model = forgeNN.VectorizedMLP(20, [64, 32], 3)
optimizer = forgeNN.VectorizedOptimizer(model.parameters(), lr=0.01)

# Fast batch training
for epoch in range(10):
    # Convert to tensors
    x_batch = forgeNN.Tensor(X)

    # Forward pass
    logits = model(x_batch)
    loss = forgeNN.cross_entropy_loss(logits, y)

    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    acc = forgeNN.accuracy(logits, y)
    print(f"Epoch {epoch}: Loss = {loss.data:.4f}, Acc = {acc*100:.1f}%")
```
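The loop above pushes the whole dataset through on every epoch. True mini-batch training shuffles and slices the data each pass; a framework-agnostic NumPy sketch of that batching pattern (the `batch_size` value is illustrative):

```python
import numpy as np

def iterate_minibatches(X, y, batch_size=64, seed=0):
    """Yield shuffled (X_batch, y_batch) pairs covering the dataset once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))  # new shuffle each call
    for start in range(0, len(X), batch_size):
        sel = idx[start:start + batch_size]
        yield X[sel], y[sel]

X = np.arange(20, dtype=float).reshape(10, 2)
y = np.arange(10)
batches = list(iterate_minibatches(X, y, batch_size=4))
print([len(yb) for _, yb in batches])  # batch sizes: [4, 4, 2]
```

Inside the training loop, each `(X_batch, y_batch)` would be wrapped in a `forgeNN.Tensor` and fed forward exactly as in the full-batch example.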
## Architecture

- Main API: `forgeNN.Tensor`, `forgeNN.VectorizedMLP` (production use)
- Legacy API: `forgeNN.legacy.*` (educational purposes)
- Functions: complete activation and loss function library
- Examples: `example.py` - complete MNIST classification demo
## Performance
| Implementation | Speed | MNIST Accuracy |
|---|---|---|
| Vectorized | 38,000+ samples/sec | 93%+ in <2s |
Highlights:
- 100x+ speedup over scalar implementations
- Production-ready performance with educational clarity
- Memory efficient vectorized operations
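The speedup over scalar implementations comes from replacing Python-level loops with single NumPy array operations. A small self-contained illustration of the pattern (absolute timings vary by machine; only the relative gap matters):

```python
import time
import numpy as np

x = np.random.rand(1_000_000)
w = np.random.rand(1_000_000)

# Scalar style: one Python-level multiply-add per element
t0 = time.perf_counter()
total = 0.0
for i in range(len(x)):
    total += x[i] * w[i]
t_scalar = time.perf_counter() - t0

# Vectorized style: a single NumPy dot product over the whole array
t0 = time.perf_counter()
total_vec = x @ w
t_vec = time.perf_counter() - t0

assert np.isclose(total, total_vec)  # same result, very different cost
print(f"scalar: {t_scalar:.3f}s, vectorized: {t_vec:.4f}s")
```

The same principle applies to forward and backward passes: batching samples into matrices turns per-neuron loops into matrix multiplications.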
## Complete Example

See `example.py` for a full MNIST classification demo (93%+ accuracy in under 2 seconds).
## Links
- PyPI Package: https://pypi.org/project/forgeNN/
- Documentation: See guides in this repository
- Issues: GitHub Issues for bug reports and feature requests
## TODO List

Based on a comprehensive comparison with PyTorch and NumPy:
### CRITICAL MISSING FEATURES (High Priority)

- TENSOR SHAPE OPERATIONS:
  - `reshape()`: Change tensor dimensions (`tensor.reshape(2, -1)`) - COMPLETED
  - `transpose()`: Swap dimensions (`tensor.transpose(0, 1)`) - COMPLETED
  - `view()`: Memory-efficient reshape (`tensor.view(-1, 5)`) - COMPLETED
  - `flatten()`: Convert to 1D (`tensor.flatten()`) - COMPLETED
  - `squeeze()`: Remove size-1 dims (`tensor.squeeze()`) - COMPLETED
  - `unsqueeze()`: Add size-1 dims (`tensor.unsqueeze(0)`) - COMPLETED
- MATRIX OPERATIONS:
  - `matmul()` / `@`: Matrix multiplication with broadcasting - COMPLETED
  - `dot()`: Vector dot product
- TENSOR COMBINATION:
  - `cat()`: Join along existing dim (`torch.cat([a, b], dim=0)`)
  - `stack()`: Join along new dim (`torch.stack([a, b])`)
### IMPORTANT FEATURES (Medium Priority)

- ADVANCED ACTIVATIONS:
  - `lrelu()`: AVAILABLE as `forgeNN.functions.activation.LRELU` (needs fixing)
  - `swish()`: AVAILABLE as `forgeNN.functions.activation.SWISH` (needs fixing)
  - `gelu()`: Gaussian Error Linear Unit (missing)
  - `elu()`: Exponential Linear Unit (missing)
- TENSOR UTILITIES:
  - `split()`: Split into chunks
  - `chunk()`: Split into equal pieces
  - `permute()`: Rearrange dimensions
  - `contiguous()`: Make tensor memory-contiguous (`tensor.contiguous()`) - COMPLETED
- INDEXING:
  - Boolean indexing: `tensor[tensor > 0]`
  - Fancy indexing: `tensor[indices]`
  - `gather()`: Select along dimension
### NICE-TO-HAVE (Lower Priority)

- LINEAR ALGEBRA:
  - `norm()`: Vector/matrix norms
  - `det()`: Matrix determinant
  - `inverse()`: Matrix inverse
- CONVENIENCE:
  - `clone()`: Deep copy
  - `detach()`: Remove from computation graph
  - `requires_grad_()`: In-place grad requirement change
- INFRASTRUCTURE:
  - Better error messages for shape mismatches
  - Memory-efficient operations
  - API consistency improvements
  - Comprehensive documentation
### PRIORITY ORDER

1. Shape operations (reshape, transpose, flatten)
2. Matrix multiplication (matmul, @)
3. Tensor combination (cat, stack)
4. More activations (leaky_relu, gelu)
5. Documentation and error handling
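The highest-priority items above all have direct NumPy equivalents, which can serve as a behavioral spec for the forgeNN versions. A quick reference sketch (plain NumPy, not forgeNN API):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)  # reshape: 6 elements -> 2x3
b = a.T                         # transpose: 2x3 -> 3x2
flat = a.flatten()              # flatten: back to 1D, shape (6,)

# cat: join along an existing axis -> shape (4, 3)
cat = np.concatenate([a, a], axis=0)
# stack: join along a NEW leading axis -> shape (2, 2, 3)
stk = np.stack([a, a], axis=0)

print(cat.shape, stk.shape)  # (4, 3) (2, 2, 3)
```

The `cat`/`stack` distinction is the subtle one: `concatenate` requires matching shapes except along the join axis, while `stack` requires fully identical shapes and adds a dimension.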
## Contributing
I am not currently accepting contributions, but I'm always open to suggestions and feedback!
## Acknowledgments
- Inspired by educational automatic differentiation tutorials
- Built for both learning and production use
- Optimized with modern NumPy practices
- Available on PyPI: `pip install forgeNN`