
A From-Scratch Neural Network Framework for Educational Purposes


forgeNN



Installation

pip install forgeNN

Overview

forgeNN is a modern neural network framework built from scratch by a solo developer learning ML. It features vectorized NumPy operations for high-speed training.

Key Features

  • Vectorized Operations: NumPy-powered batch processing (100x+ speedup)
  • Dynamic Computation Graphs: Automatic differentiation with gradient tracking
  • Complete Neural Networks: From simple neurons to complex architectures
  • Production Loss Functions: Cross-entropy, MSE with numerical stability
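
The "numerical stability" in cross-entropy usually refers to the log-sum-exp trick. A minimal NumPy sketch (illustrative only, not forgeNN's actual implementation):

```python
import numpy as np

def stable_cross_entropy(logits, labels):
    """Cross-entropy from raw logits using the log-sum-exp trick.

    Subtracting the row-wise max before exponentiating prevents
    overflow for large logits without changing the result.
    """
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[1000.0, 0.0], [0.0, 1000.0]])  # would overflow naively
labels = np.array([0, 1])
print(stable_cross_entropy(logits, labels))  # ~0.0: correct class dominates
```

A naive `np.exp(logits)` on those inputs overflows to `inf`; the shifted version stays finite.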

Performance vs PyTorch

forgeNN is 3.52x faster than PyTorch on small models!

Metric                        PyTorch    forgeNN        Advantage
Training Time (MNIST)         64.72s     30.84s         2.10x faster
Test Accuracy                 97.30%     97.37%         +0.07% better
Small Models (<109k params)   Baseline   3.52x faster   Massive speedup

📊 See Full Comparison Guide for detailed benchmarks, syntax differences, and when to use each framework.

MNIST Benchmark Results

Quick Start

High-Performance Training

import forgeNN
from sklearn.datasets import make_classification

# Generate dataset
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=4)  # default n_informative=2 is too small for 3 classes

# Create vectorized model  
model = forgeNN.VectorizedMLP(20, [64, 32], 3)
optimizer = forgeNN.VectorizedOptimizer(model.parameters(), lr=0.01)

# Fast batch training
for epoch in range(10):
    # Convert to tensors
    x_batch = forgeNN.Tensor(X)
    
    # Forward pass
    logits = model(x_batch)
    loss = forgeNN.cross_entropy_loss(logits, y)
    
    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    
    acc = forgeNN.accuracy(logits, y)
    print(f"Epoch {epoch}: Loss = {loss.data:.4f}, Acc = {acc*100:.1f}%")
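
The loop above feeds the whole dataset each step; real training usually shuffles and slices mini-batches. A framework-agnostic NumPy sketch (`iterate_minibatches` is illustrative, not a forgeNN API):

```python
import numpy as np

def iterate_minibatches(X, y, batch_size, rng):
    """Yield shuffled (X, y) mini-batches; the last batch may be smaller."""
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

rng = np.random.default_rng(0)
X = np.zeros((100, 20))
y = np.zeros(100, dtype=int)
shapes = [xb.shape[0] for xb, yb in iterate_minibatches(X, y, 32, rng)]
print(shapes)  # [32, 32, 32, 4]
```

Each epoch reshuffles, so batches differ between epochs while every sample is still seen exactly once.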

Keras-like Training (compile/fit)

import forgeNN as fnn

model = fnn.Sequential([
   fnn.Dense(64) @ 'relu',
   fnn.Dense(32) @ 'relu',
   fnn.Dense(3)  @ 'linear'
])

# Initialize lazy params if needed
_ = model(fnn.Tensor([[0.0]*20]))

compiled = fnn.compile(model, optimizer={"lr": 0.01, "momentum": 0.9},
                       loss='cross_entropy', metrics=['accuracy'])
compiled.fit(X, y, epochs=10, batch_size=64)  # X, y as in the previous example
loss, metrics = compiled.evaluate(X, y)

Architecture

  • Main API: forgeNN.Tensor, forgeNN.VectorizedMLP (high-performance neural networks)
  • Activation Functions: forgeNN.RELU, forgeNN.SWISH, etc. + string/callable support
  • Examples: example.py - Complete MNIST classification demo
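
The "string/callable support" for activations can be sketched as a small registry that falls back to any user-supplied callable (names and structure here are illustrative, not forgeNN's internals):

```python
import numpy as np

ACTIVATIONS = {
    "relu": lambda x: np.maximum(x, 0.0),
    "swish": lambda x: x / (1.0 + np.exp(-x)),  # x * sigmoid(x)
    "linear": lambda x: x,
}

def resolve_activation(act):
    """Accept either a registered name or a user-supplied callable."""
    if callable(act):
        return act
    try:
        return ACTIVATIONS[act.lower()]
    except KeyError:
        raise ValueError(f"Unknown activation: {act!r}")

x = np.array([-1.0, 0.0, 2.0])
print(resolve_activation("relu")(x))   # [0. 0. 2.]
print(resolve_activation(np.tanh)(x))  # callables pass through unchanged
```

This pattern keeps layer constructors like `Dense(64) @ 'relu'` terse while still allowing custom functions.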

Performance

Implementation   Speed                 MNIST Accuracy
Vectorized       38,000+ samples/sec   93%+ in <2s

Highlights:

  • 100x+ speedup over scalar implementations
  • Production-ready performance with educational clarity
  • Memory efficient vectorized operations
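
The speedup comes from replacing per-element Python loops with batched NumPy calls. This small self-check shows that both forms compute the same dense layer (function names are illustrative):

```python
import numpy as np

def dense_scalar(X, W, b):
    """Per-element Python loops: correct but slow."""
    n, d_in = X.shape
    d_out = W.shape[1]
    out = np.zeros((n, d_out))
    for i in range(n):
        for j in range(d_out):
            out[i, j] = sum(X[i, k] * W[k, j] for k in range(d_in)) + b[j]
    return out

def dense_vectorized(X, W, b):
    """One BLAS-backed matmul for the whole batch."""
    return X @ W + b

rng = np.random.default_rng(0)
X, W, b = rng.normal(size=(8, 5)), rng.normal(size=(5, 3)), rng.normal(size=3)
assert np.allclose(dense_scalar(X, W, b), dense_vectorized(X, W, b))
```

On realistic batch sizes the vectorized form is orders of magnitude faster because the inner loops run in compiled code.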

Complete Example

See example.py for a full MNIST classification demo achieving professional results.

Links

  • PyPI Package: https://pypi.org/project/forgeNN/
  • Documentation: See guides in this repository
  • Guides: SEQUENTIAL_GUIDE.md, TRAINING_GUIDE.md, COMPARISON_GUIDE.md
  • Issues: GitHub Issues for bug reports and feature requests

TODO List

Based on a comprehensive comparison with PyTorch and NumPy:

CRITICAL MISSING FEATURES (High Priority):

  1. TENSOR SHAPE OPERATIONS: - COMPLETED

    • reshape() : Change tensor dimensions (tensor.reshape(2, -1)) - COMPLETED
    • transpose() : Swap dimensions (tensor.transpose(0, 1)) - COMPLETED
    • view() : Memory-efficient reshape (tensor.view(-1, 5)) - COMPLETED
    • flatten() : Convert to 1D (tensor.flatten()) - COMPLETED
    • squeeze() : Remove size-1 dims (tensor.squeeze()) - COMPLETED
    • unsqueeze() : Add size-1 dims (tensor.unsqueeze(0)) - COMPLETED
  2. MATRIX OPERATIONS: - COMPLETED

    • matmul() / @ : Matrix multiplication with broadcasting - COMPLETED
    • dot() : Vector dot product for 1D arrays - COMPLETED
  3. TENSOR COMBINATION:

    • cat() : Join along existing dim (torch.cat([a, b], dim=0))
    • stack() : Join along new dim (torch.stack([a, b]))
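
The planned cat() and stack() correspond directly to NumPy's concatenate and stack, which illustrate the intended semantics:

```python
import numpy as np

a = np.ones((2, 3))
b = np.zeros((2, 3))

# cat: join along an existing axis -> shape grows on that axis
print(np.concatenate([a, b], axis=0).shape)  # (4, 3)

# stack: join along a NEW leading axis -> rank increases by one
print(np.stack([a, b]).shape)                # (2, 2, 3)
```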

IMPORTANT FEATURES (Medium Priority):

  1. ADVANCED ACTIVATIONS:

    • lrelu() : Available as forgeNN.functions.activation.LRELU (previously broken) - FIXED
    • swish() : Available as forgeNN.functions.activation.SWISH (previously broken) - FIXED
    • gelu() : Gaussian Error Linear Unit (missing) - ADDED
    • elu() : Exponential Linear Unit (missing)
  2. TENSOR UTILITIES:

    • split() : Split into chunks
    • chunk() : Split into equal pieces
    • permute() : Rearrange dimensions
    • contiguous() : Make tensor memory-contiguous (tensor.contiguous()) - COMPLETED
  3. INDEXING:

    • Boolean indexing: tensor[tensor > 0]
    • Fancy indexing: tensor[indices]
    • gather() : Select along dimension
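
The indexing items map directly onto NumPy semantics; a gather() along an axis can be expressed with fancy indexing (illustrative sketch):

```python
import numpy as np

t = np.array([[1, -2, 3],
              [-4, 5, -6]])

# Boolean indexing: keep positive entries (result is flattened)
print(t[t > 0])                        # [1 3 5]

# Fancy indexing: select rows by index array
print(t[[1, 0]])                       # rows swapped

# gather along axis 1: pick one column index per row
idx = np.array([2, 0])
print(t[np.arange(t.shape[0]), idx])   # [ 3 -4]
```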

NICE-TO-HAVE (Lower Priority):

  1. LINEAR ALGEBRA:

    • norm() : Vector/matrix norms
    • det() : Matrix determinant
    • inverse() : Matrix inverse
  2. CONVENIENCE:

    • clone() : Deep copy
    • detach() : Remove from computation graph
    • requires_grad_(): In-place grad requirement change
  3. INFRASTRUCTURE:

    • Better error messages for shape mismatches
    • Memory-efficient operations
    • API consistency improvements
    • Comprehensive documentation

PRIORITY ORDER:

  1. Shape operations (reshape, transpose, flatten)
  2. Matrix multiplication (matmul, @)
  3. Tensor combination (cat, stack)
  4. More activations (leaky_relu, gelu)
  5. Documentation and error handling

Contributing

I am not currently accepting contributions, but I'm always open to suggestions and feedback!

Acknowledgments

  • Inspired by educational automatic differentiation tutorials (micrograd)
  • Built for both learning and production use
  • Optimized with modern NumPy practices
  • Available on PyPI: pip install forgeNN
