A from-scratch neural network framework for educational purposes
forgeNN
Table of Contents
- Installation
- Overview
- Performance vs PyTorch
- Quick Start
- Architecture
- Performance
- Complete Example
- Roadmap
- Contributing
- Acknowledgments
Installation
```bash
pip install forgeNN
```
Overview
forgeNN is a modern neural network framework developed by a solo developer learning about ML. It features vectorized operations for high-speed training.
Key Features
- Vectorized Operations: NumPy-powered batch processing (100x+ speedup)
- Dynamic Computation Graphs: Automatic differentiation with gradient tracking (see the sketch after this list)
- Complete Neural Networks: From simple neurons to complex architectures
- Production Loss Functions: Cross-entropy, MSE with numerical stability
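The dynamic computation graph noted above works like micrograd-style reverse-mode autodiff: a forward pass records operations, and `backward()` propagates gradients back through them. A minimal sketch using only the calls shown in the Quick Start below (shapes and hyperparameters are illustrative):

```python
import numpy as np
import forgeNN

# Two toy samples with 4 features each and 2 classes.
X = np.array([[0.5, -1.2, 0.3, 0.8],
              [1.1,  0.4, -0.7, 0.2]])
y = np.array([0, 1])

model = forgeNN.VectorizedMLP(4, [8], 2)       # 4 inputs -> hidden 8 -> 2 classes
logits = model(forgeNN.Tensor(X))              # forward pass builds the graph
loss = forgeNN.cross_entropy_loss(logits, y)   # scalar loss node
loss.backward()                                # gradients flow back through the graph
print(loss.data)                               # loss value as a plain number
```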
Performance vs PyTorch
forgeNN is 3.52x faster than PyTorch on small models!
| Metric | PyTorch | forgeNN | Advantage |
|---|---|---|---|
| Training Time (MNIST) | 64.72s | 30.84s | 2.10x faster |
| Test Accuracy | 97.30% | 97.37% | +0.07% better |
| Small Models (<109k params) | Baseline | 3.52x faster | Massive speedup |
📊 See Full Comparison Guide for detailed benchmarks, syntax differences, and when to use each framework.
Quick Start
High-Performance Training
```python
import forgeNN
from sklearn.datasets import make_classification

# Generate dataset (n_informative raised so 3 classes are feasible)
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=4)

# Create vectorized model
model = forgeNN.VectorizedMLP(20, [64, 32], 3)
optimizer = forgeNN.VectorizedOptimizer(model.parameters(), lr=0.01)

# Fast batch training
for epoch in range(10):
    # Convert to tensors
    x_batch = forgeNN.Tensor(X)

    # Forward pass
    logits = model(x_batch)
    loss = forgeNN.cross_entropy_loss(logits, y)

    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    acc = forgeNN.accuracy(logits, y)
    print(f"Epoch {epoch}: Loss = {loss.data:.4f}, Acc = {acc*100:.1f}%")
```
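The loop above trains on the full dataset as a single batch each epoch. For larger datasets you can slice mini-batches yourself with plain NumPy indexing; a minimal sketch reusing the same model, optimizer, and loss calls from above (batch size and shuffling are illustrative choices):

```python
import numpy as np

batch_size = 64
for epoch in range(10):
    perm = np.random.permutation(len(X))            # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        logits = model(forgeNN.Tensor(X[idx]))       # forward on one mini-batch
        loss = forgeNN.cross_entropy_loss(logits, y[idx])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```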
Keras-like Training (compile/fit)
```python
import forgeNN as fnn

model = fnn.Sequential([
    fnn.Input((20,)),   # optional Input layer seeds summary & shapes
    fnn.Dense(64) @ 'relu',
    fnn.Dense(32) @ 'relu',
    fnn.Dense(3) @ 'linear'
])

# Optionally inspect architecture
model.summary()         # or model.summary((20,)) if no Input layer

compiled = fnn.compile(model, optimizer={"lr": 0.01, "momentum": 0.9},
                       loss='cross_entropy', metrics=['accuracy'])
compiled.fit(X, y, epochs=10, batch_size=64)
loss, metrics = compiled.evaluate(X, y)

# Tip: `mse` auto-detects 1D integer class labels for (N, C) logits and one-hot encodes internally.
# model.summary() can be called any time after construction if an Input layer or input_shape is provided.
```
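To illustrate the `mse` tip above, the same compiled workflow can target `mse` while keeping the 1D integer labels; a short sketch reusing the model from this example (the `'mse'` loss string and the hyperparameters are assumptions based on the tip, not verified API details):

```python
compiled_mse = fnn.compile(model, optimizer={"lr": 0.01},
                           loss='mse', metrics=['accuracy'])
compiled_mse.fit(X, y, epochs=5, batch_size=64)   # y stays 1D; one-hot happens internally
```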
Architecture
- Main API: `forgeNN`, `forgeNN.Tensor`, `forgeNN.Sequential`, `forgeNN.Input`, `forgeNN.VectorizedMLP`
- Model Introspection: `model.summary()` (Keras-like) with symbolic shapes + parameter counts
- Examples: check `examples/` for MNIST and more
Performance
| Implementation | Speed | MNIST Accuracy |
|---|---|---|
| Vectorized | 40,000+ samples/sec | 95%+ in <1s |
| Sequential (with compile/fit) | 40,000+ samples/sec | 95%+ in <1.2s |
Highlights:
- 100x+ speedup over scalar implementations
- Production-ready performance with educational clarity
- Memory efficient vectorized operations
- Smarter Losses: `mse` auto one-hot & reshape logic; fused, numerically stable cross-entropy (see the NumPy illustration below)
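For reference, the auto one-hot step for 1D integer labels against (N, C) logits amounts to something like this plain NumPy illustration (not the framework's internal code):

```python
import numpy as np

y = np.array([0, 2, 1])            # integer class labels, shape (N,)
num_classes = 3                     # matches the logits width C
targets = np.eye(num_classes)[y]    # one-hot targets, shape (N, C)
```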
Complete Example
See `examples/` for full-fledged demos.
Links
- PyPI Package: https://pypi.org/project/forgeNN/
- Documentation: See guides in this repository
- Guides: SEQUENTIAL_GUIDE.md, TRAINING_GUIDE.md, COMPARISON_GUIDE.md
- Issues: GitHub Issues for bug reports and feature requests
Roadmap
To be implemented
Contributing
I am not currently accepting contributions, but I'm always open to suggestions and feedback!
Acknowledgments
- Inspired by educational automatic differentiation tutorials (micrograd)
- Built for both learning and production use
- Optimized with modern NumPy practices
- Available on PyPI: `pip install forgeNN`