forgeNN

A from-scratch neural network framework for educational purposes.
Table of Contents
- Installation
- Overview
- Performance vs PyTorch
- Quick Start
- Complete Example
- Roadmap
- Contributing
- Acknowledgments
Installation

```shell
pip install forgeNN
```

Optional extras:

```shell
# ONNX helpers (scaffold)
pip install "forgeNN[onnx]"

# CUDA backend (scaffold; requires a compatible GPU/driver)
pip install "forgeNN[cuda]"
```
Overview
forgeNN is a modern neural network framework with an API built around a straightforward Sequential model, a fast NumPy autograd Tensor, and a Keras-like compile/fit training workflow.
This project is built and maintained by a single student developer. For background and portfolio/CV, see: https://savern.me
Key Features
- Fast NumPy core: Vectorized operations with fused, stable math
- Dynamic Computation Graphs: Automatic differentiation with gradient tracking
- Complete Neural Networks: From simple neurons to complex architectures
- Production Loss Functions: Cross-entropy, MSE with numerical stability
- Scaffolded Integrations: Runtime device API for future CUDA; ONNX export/import stubs
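The dynamic-graph idea behind the autograd Tensor can be illustrated with a tiny scalar example in the spirit of the micrograd tutorials this project credits. This is a conceptual sketch in plain Python, not forgeNN's actual Tensor API:

```python
# Minimal reverse-mode autodiff sketch (micrograd-style); forgeNN's Tensor
# applies the same idea to whole NumPy arrays instead of scalars.
class Value:
    def __init__(self, data, _parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = _parents
        self._backward = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad        # d(a + b)/da = 1
            other.grad += out.grad       # d(a + b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a * b)/da = b
            other.grad += self.data * out.grad   # d(a * b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the recorded graph, then propagate in reverse.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

x = Value(3.0)
y = x * x + x       # dy/dx = 2x + 1 = 7 at x = 3
y.backward()
print(x.grad)       # 7.0
```

Every operation records its inputs and a local gradient rule; calling `backward()` replays them in reverse topological order, which is what "dynamic computation graph" means in practice.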
Performance vs PyTorch
forgeNN is 3.52x faster than PyTorch on small models!
| Metric | PyTorch | forgeNN | Advantage |
|---|---|---|---|
| Training Time (MNIST) | 64.72s | 30.84s | 2.10x faster |
| Test Accuracy | 97.30% | 97.37% | +0.07% better |
| Small Models (<109k params) | Baseline | 3.52x faster | Massive speedup |
📊 Comparison and detailed docs are being refreshed for v2; see examples/ for runnable demos.
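For context on the "<109k params" row, a Dense layer contributes `in_features * out_features` weights plus `out_features` biases, so a model's size is easy to compute by hand. A quick sketch for the Quick Start MLP below (20 → 64 → 32 → 3):

```python
# Parameter count of an MLP: each Dense layer has in*out weights + out biases.
def dense_params(sizes):
    return sum(i * o + o for i, o in zip(sizes, sizes[1:]))

print(dense_params([20, 64, 32, 3]))  # 3523 parameters, well under the 109k cutoff
```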
Quick Start
Keras-like Training (compile/fit)

```python
import numpy as np
import forgeNN as fnn

# Toy data: 200 samples, 20 features, 3 classes
X = np.random.randn(200, 20)
y = np.random.randint(0, 3, size=200)

model = fnn.Sequential([
    fnn.Input((20,)),        # optional Input layer seeds summary & shapes
    fnn.Dense(64) @ 'relu',
    fnn.Dense(32) @ 'relu',
    fnn.Dense(3) @ 'linear'
])

# Optionally inspect the architecture
model.summary()              # or model.summary((20,)) if no Input layer

opt = fnn.Adam(lr=1e-3)      # or other optimizers (adamw, sgd, etc.)
compiled = fnn.compile(model,
                       optimizer=opt,
                       loss='cross_entropy',
                       metrics=['accuracy'])

compiled.fit(X, y, epochs=10, batch_size=64)
loss, metrics = compiled.evaluate(X, y)
```

Tips:
- `mse` auto-detects 1D integer class labels for (N, C) logits and one-hot encodes them internally.
- `model.summary()` can be called any time after construction if an Input layer or `input_shape` is provided.
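The one-hot handling mentioned in the tip above is easy to picture. A plain NumPy illustration (not forgeNN internals) of expanding 1D integer labels to match (N, C) logits:

```python
import numpy as np

# Expand integer class labels (N,) into one-hot rows (N, C),
# the shape an MSE-style loss needs against (N, C) logits.
def one_hot(labels, num_classes):
    out = np.zeros((labels.shape[0], num_classes))
    out[np.arange(labels.shape[0]), labels] = 1.0
    return out

y = np.array([0, 2, 1])
print(one_hot(y, 3))
```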
Complete Example
See examples/ for full-fledged demos.
Links
- PyPI Package: https://pypi.org/project/forgeNN/
- Documentation: v2 guides coming soon; examples in examples/
- Issues: GitHub Issues for bug reports and feature requests
- Portfolio/CV: https://savern.me
Roadmap (post v2.0.0)
- CUDA backend and device runtime
  - Device abstraction for `Tensor` and layers
  - Initial CUDA kernels (Conv, GEMM, elementwise) and CPU/CUDA parity tests
  - Setup and troubleshooting guide
- ONNX: export and import (full coverage for the core API)
  - Export `Sequential` graphs with Conv/Pool/Flatten/Dense/LayerNorm/Dropout/activations
  - Import linear and branched graphs where feasible; shape inference checks
  - Round-trip parity tests and examples
- Model save and load
  - Architecture JSON + weights (NPZ) format
  - `state_dict`/`load_state_dict` compatibility helpers
  - Versioning and minimal migration guidance
- Transformer positional encodings
  - Sinusoidal `PositionalEncoding` and learnable `PositionalEmbedding`
  - Tiny encoder demo with text classification walkthrough
- Performance and stability
  - CPU optimizations for conv/pool paths, memory reuse, and fewer allocations
  - Threading guidance (MKL/OpenBLAS), deterministic runs, and profiling notes
- Documentation
  - Practical guides for `Sequential`, `compile`/`fit`, model I/O, ONNX, and CUDA setup
  - Design overview of the autograd and execution model
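The "architecture JSON + weights (NPZ)" roadmap item can be sketched with standard tools today. Everything here (file names, key names, the `arch` schema) is a hypothetical illustration of the split-format idea, not forgeNN's final design:

```python
import json
import numpy as np

# Hypothetical save format: human-readable architecture in JSON,
# binary weights in a NumPy NPZ archive keyed by layer name.
arch = {"layers": [{"type": "Dense", "units": 64, "activation": "relu"},
                   {"type": "Dense", "units": 3, "activation": "linear"}]}
weights = {"dense0_W": np.random.randn(20, 64), "dense0_b": np.zeros(64)}

with open("model.json", "w") as f:
    json.dump(arch, f)
np.savez("model_weights.npz", **weights)

# Loading reverses the two steps.
with open("model.json") as f:
    arch2 = json.load(f)
loaded = np.load("model_weights.npz")
assert arch2 == arch
assert np.allclose(loaded["dense0_W"], weights["dense0_W"])
```

Keeping the architecture in JSON makes diffs and versioning cheap, while NPZ keeps the weight arrays compact, which is presumably why the roadmap pairs them.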
Contributing
I am not currently accepting contributions, but I'm always open to suggestions and feedback!
Acknowledgments
- Inspired by educational automatic differentiation tutorials (micrograd)
- Built for both learning and production use
- Optimized with modern NumPy practices
- Available on PyPI: `pip install forgeNN`