
A From-Scratch Neural Network Framework for Educational Purposes

Project description

forgeNN

Table of Contents


Installation

pip install forgeNN

Overview

forgeNN is a modern neural network framework written from scratch by a solo developer learning ML. It features vectorized operations for high-speed training.

Key Features

  • Vectorized Operations: NumPy-powered batch processing (100x+ speedup)
  • Dynamic Computation Graphs: Automatic differentiation with gradient tracking
  • Complete Neural Networks: From simple neurons to complex architectures
  • Production Loss Functions: Cross-entropy, MSE with numerical stability
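The acknowledgments credit micrograd-style tutorials as inspiration for the automatic differentiation above. As a concept sketch only (this `Value` class is not forgeNN's `Tensor`), reverse-mode autodiff with gradient tracking boils down to:

```python
# Minimal micrograd-style reverse-mode autodiff sketch.
# NOTE: illustrative only -- this `Value` class is NOT forgeNN's Tensor.
class Value:
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad      # d(a+b)/da = 1
            other.grad += out.grad     # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(3.0)
loss = a * b + a          # loss = 2*3 + 2 = 8
loss.backward()
print(a.grad, b.grad)     # d(ab+a)/da = b+1 = 4.0, d(ab+a)/db = a = 2.0
```

forgeNN applies the same idea to whole NumPy arrays rather than scalars, which is where the vectorized speedup comes from.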

Performance vs PyTorch

forgeNN is 3.52x faster than PyTorch on small models (<109k parameters)!

| Metric | PyTorch | forgeNN | Advantage |
|---|---|---|---|
| Training Time (MNIST) | 64.72s | 30.84s | 2.10x faster |
| Test Accuracy | 97.30% | 97.37% | +0.07% better |
| Small Models (<109k params) | Baseline | 3.52x faster | Massive speedup |

📊 See Full Comparison Guide for detailed benchmarks, syntax differences, and when to use each framework. Note: single-machine indicative results; not statistically rigorous multi-run averages.

Quick Start

High-Performance Training

import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
import forgeNN as fnn

# Generate dataset (reproducible)
X, y = make_classification(
    n_samples=1000,
    n_features=20,
    n_classes=3,
    n_informative=3,
    random_state=24
)

# Feature scaling helps optimization (especially with momentum/Adam)
X = StandardScaler().fit_transform(X).astype(np.float32)

# Tiny, didactic training loop (manual zero_grad/backward/step)
model = fnn.VectorizedMLP(20, [64, 32], 3)
optimizer = fnn.Adam(model.parameters(), lr=0.01)  # or: fnn.VectorizedOptimizer(..., momentum=0.9)

for epoch in range(30):
    logits = model(fnn.Tensor(X))
    loss = fnn.cross_entropy_loss(logits, y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    acc = fnn.accuracy(logits, y)
    print(f"Epoch {epoch}: Loss = {loss.data:.4f}, Acc = {acc*100:.1f}%")
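The `fnn.accuracy` helper used above most likely compares the argmax of each logit row to the integer label; a NumPy equivalent of that standard metric (an assumption, not forgeNN's source):

```python
import numpy as np

def accuracy(logits, y):
    """Fraction of rows whose argmax matches the integer label.
    Sketch of what a helper like fnn.accuracy likely computes (assumption)."""
    preds = np.asarray(logits).argmax(axis=1)
    return float((preds == np.asarray(y)).mean())

logits = np.array([[2.0, 0.1, -1.0],   # predicts class 0
                   [0.0, 3.0,  0.5],   # predicts class 1
                   [0.2, 0.1,  0.3],   # predicts class 2
                   [5.0, 1.0,  0.0]])  # predicts class 0
y = np.array([0, 1, 2, 1])             # last prediction is wrong
print(accuracy(logits, y))             # 0.75
```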

Keras-like Training (compile/fit)

model = fnn.Sequential([
    fnn.Input((20,)),        # optional Input layer seeds summary & shapes
    fnn.Dense(64) @ 'relu',
    fnn.Dense(32) @ 'relu',
    fnn.Dense(3)  @ 'linear'
])

# Optionally inspect architecture
model.summary()              # or model.summary((20,)) if no Input layer
opt = fnn.Adam(lr=1e-3)      # or other optimizers (AdamW, SGD, etc.)
compiled = fnn.compile(model,
                       optimizer=opt,
                       loss='cross_entropy',
                       metrics=['accuracy'])
compiled.fit(X, y, epochs=10, batch_size=64)
loss, metrics = compiled.evaluate(X, y)

# Tip: `mse` auto-detects 1D integer class labels for (N,C) logits and one-hot encodes internally.
# model.summary() can be called any time after construction if an Input layer or input_shape is provided.
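The `mse` auto one-hot behavior described in the tip above can be pictured as follows (a sketch of the idea, not forgeNN's code):

```python
import numpy as np

def one_hot_if_needed(y, num_classes):
    """If y is 1D integer class labels, expand to (N, C) one-hot so it can
    be compared elementwise against (N, C) logits -- the behavior the
    `mse` tip describes (sketch, not forgeNN's implementation)."""
    y = np.asarray(y)
    if y.ndim == 1 and np.issubdtype(y.dtype, np.integer):
        return np.eye(num_classes, dtype=np.float32)[y]
    return y  # already (N, C); use as-is

print(one_hot_if_needed(np.array([0, 2, 1]), 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```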

Architecture

  • Main API: forgeNN, forgeNN.Tensor, forgeNN.Sequential, forgeNN.Input, forgeNN.VectorizedMLP
  • Model Introspection: model.summary() (Keras-like) with symbolic shape + parameter counts
  • Examples: Check examples/ for MNIST and more
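For the 20 → [64, 32] → 3 MLP from the Quick Start, the parameter counts that `model.summary()` reports can be checked by hand: a Dense layer holds in·out weights plus one bias per output unit.

```python
def dense_params(n_in, n_out):
    # weights (n_in * n_out) plus one bias per output unit
    return n_in * n_out + n_out

# The Quick Start architecture: 20 -> 64 -> 32 -> 3
layers = [(20, 64), (64, 32), (32, 3)]
total = sum(dense_params(i, o) for i, o in layers)
print(total)  # 1344 + 2080 + 99 = 3523
```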

Performance

| Implementation | Speed | MNIST Accuracy |
|---|---|---|
| Vectorized | 40,000+ samples/sec | 95%+ in <1s |
| Sequential (with compile/fit) | 40,000+ samples/sec | 95%+ in <1.2s |

Highlights:

  • 100x+ speedup over scalar implementations
  • Production-ready performance with educational clarity
  • Memory efficient vectorized operations
  • Smarter Losses: mse auto one-hot & reshape logic; fused stable cross-entropy
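"Fused stable cross-entropy" usually means computing log-softmax with the log-sum-exp trick so large logits cannot overflow. A NumPy sketch of that standard technique (not forgeNN's exact code):

```python
import numpy as np

def stable_cross_entropy(logits, y):
    """Mean negative log-likelihood with max-subtraction for stability.
    Standard log-sum-exp trick; a sketch, not forgeNN's implementation."""
    logits = np.asarray(logits, dtype=np.float64)
    shifted = logits - logits.max(axis=1, keepdims=True)  # avoids exp overflow
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

# Huge logits would overflow a naive softmax, but not the shifted version.
logits = np.array([[1000.0, 0.0], [0.0, 1000.0]])
y = np.array([0, 1])
print(stable_cross_entropy(logits, y))  # ~0.0 (confident and correct)
```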

Complete Example

See examples/ for full-fledged demos.

Links

  • PyPI Package: https://pypi.org/project/forgeNN/
  • Documentation: See guides in this repository
  • Guides: SEQUENTIAL_GUIDE.md, TRAINING_GUIDE.md, COMPARISON_GUIDE.md
  • Issues: GitHub Issues for bug reports and feature requests

Roadmap

Before 2026 (2025 Remaining Milestones – ordered)

  1. Adam / AdamW 🗹 (Completed in v1.3.0)
  2. Dropout + LayerNorm 🗹 (Completed in v1.3.0)
  3. Model saving & loading (state dict + .npz) ☐
  4. Conv1D → Conv2D (naive) ☐
  5. Tiny Transformer example (encoder-only) ☐
  6. ONNX export (Sequential/Dense/Flatten/activations) then basic import (subset) ☐
  7. Documentation: serialization guide, ONNX guide, Transformer walkthrough ☐
  8. Parameter registry refinement ☐
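Roadmap item 3 (state dict + .npz saving) typically works by flattening named parameters into NumPy arrays and round-tripping them through `np.savez`. A hypothetical sketch of such an API (forgeNN does not ship these helpers yet; all names here are illustrative):

```python
import os
import tempfile
import numpy as np

# Hypothetical serialization helpers for roadmap item 3 -- forgeNN does not
# ship these yet; the function and parameter names are illustrative only.
def save_state(path, state):
    """state: dict mapping parameter names to NumPy arrays."""
    np.savez(path, **state)

def load_state(path):
    with np.load(path) as data:
        return {name: data[name] for name in data.files}

state = {"dense1.weight": np.random.randn(20, 64).astype(np.float32),
         "dense1.bias": np.zeros(64, dtype=np.float32)}
path = os.path.join(tempfile.gettempdir(), "model.npz")
save_state(path, state)
restored = load_state(path)
print(sorted(restored))  # ['dense1.bias', 'dense1.weight']
```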

Q1 2026 (Early 2026 Targets)

  • CUDA / GPU backend prototype (Tensor device abstraction)
  • Formal architecture & design documents (graph execution, autograd internals)
  • Expanded documentation site (narrative design + performance notes)

Items above may be reprioritized based on user feedback; GPU & design docs explicitly deferred to early 2026.

Contributing

I am not currently accepting contributions, but I'm always open to suggestions and feedback!

Acknowledgments

  • Inspired by educational automatic differentiation tutorials (micrograd)
  • Built for both learning and production use
  • Optimized with modern NumPy practices
  • Available on PyPI: pip install forgeNN

Project details


Download files

Download the file for your platform.

Source Distribution

forgenn-1.3.0.tar.gz (53.9 kB)

Uploaded Source

Built Distribution


forgenn-1.3.0-py3-none-any.whl (35.6 kB)

Uploaded Python 3

File details

Details for the file forgenn-1.3.0.tar.gz.

File metadata

  • Download URL: forgenn-1.3.0.tar.gz
  • Upload date:
  • Size: 53.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.4

File hashes

Hashes for forgenn-1.3.0.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | b125f41af91103d03613cf8635c04e6ff5f0636c9ca4ff4df9e57d3fff33875e |
| MD5 | 07ad40be9578c866555e97d91d54e6ba |
| BLAKE2b-256 | 8a0bbe00a6292d3f87e3102384666f131d9221f07c89673c3fcffde1756a0b30 |


File details

Details for the file forgenn-1.3.0-py3-none-any.whl.

File metadata

  • Download URL: forgenn-1.3.0-py3-none-any.whl
  • Upload date:
  • Size: 35.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.4

File hashes

Hashes for forgenn-1.3.0-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0b8207a0e1f061c61955bce5f185d59f6dee3cfaaa137d870721728def6e223d |
| MD5 | 8b42a250309ad949a4369b3e23db1d1f |
| BLAKE2b-256 | 81097853be71cb51f93511c8cbc59e5ef35cb9b757c36f20010af56e856c6af0 |

