
A From-Scratch Neural Network Framework for Educational Purposes

Project description

forgeNN



Installation

pip install forgeNN

Overview

forgeNN is a modern neural network framework built by a solo developer learning about ML. It features vectorized operations for high-speed training.

Key Features

  • Vectorized Operations: NumPy-powered batch processing (100x+ speedup)
  • Dynamic Computation Graphs: Automatic differentiation with gradient tracking
  • Complete Neural Networks: From simple neurons to complex architectures
  • Production Loss Functions: Cross-entropy, MSE with numerical stability
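The "Dynamic Computation Graphs" bullet refers to reverse-mode automatic differentiation: every operation records its inputs, and `backward()` walks the recorded graph in reverse applying the chain rule. As an illustration of the idea only (a minimal scalar sketch, not forgeNN's actual Tensor implementation):

```python
class Value:
    """Minimal scalar autodiff node (illustrative sketch, not forgeNN's code)."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)  # parents in the computation graph

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

x = Value(3.0)
y = x * x + x          # y = x^2 + x
y.backward()
print(x.grad)          # dy/dx = 2x + 1 = 7.0
```

The same bookkeeping generalizes to batched NumPy arrays, which is what the vectorized `Tensor` class does.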

Performance vs PyTorch

forgeNN is 3.52x faster than PyTorch on small models!

| Metric | PyTorch | forgeNN | Advantage |
|--------|---------|---------|-----------|
| Training Time (MNIST) | 64.72s | 30.84s | 2.10x faster |
| Test Accuracy | 97.30% | 97.37% | +0.07% better |
| Small Models (<109k params) | Baseline | 3.52x faster | Massive speedup |

📊 See Full Comparison Guide for detailed benchmarks, syntax differences, and when to use each framework.

MNIST Benchmark Results

Quick Start

High-Performance Training

import forgeNN
from sklearn.datasets import make_classification

# Generate dataset (n_informative raised from sklearn's default of 2:
# make_classification requires n_classes * n_clusters_per_class <= 2**n_informative)
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=10)

# Create vectorized model  
model = forgeNN.VectorizedMLP(20, [64, 32], 3)
optimizer = forgeNN.VectorizedOptimizer(model.parameters(), lr=0.01)

# Fast batch training
for epoch in range(10):
    # Convert to tensors
    x_batch = forgeNN.Tensor(X)
    
    # Forward pass
    logits = model(x_batch)
    loss = forgeNN.cross_entropy_loss(logits, y)
    
    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    
    acc = forgeNN.accuracy(logits, y)
    print(f"Epoch {epoch}: Loss = {loss.data:.4f}, Acc = {acc*100:.1f}%")
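The cross-entropy loss used above is advertised as numerically stable; the standard technique behind that claim is the log-sum-exp trick. A generic NumPy sketch of the trick (forgeNN's own implementation may differ in detail):

```python
import numpy as np

def stable_cross_entropy(logits, targets):
    """Cross-entropy via the log-sum-exp trick (generic sketch, not forgeNN's code)."""
    # Subtracting the row-wise max leaves softmax unchanged but keeps
    # exp() from overflowing on large logits.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Mean negative log-probability of each sample's true class.
    return -log_probs[np.arange(len(targets)), targets].mean()

logits = np.array([[1000.0, 0.0], [0.0, 1000.0]])  # naive exp() would overflow
targets = np.array([0, 1])
print(stable_cross_entropy(logits, targets))  # ~0.0: both predictions correct
```

Without the max subtraction, `np.exp(1000)` overflows to `inf` and the loss becomes `nan`.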

Architecture

  • Main API: forgeNN.Tensor, forgeNN.VectorizedMLP (production use)
  • Legacy API: forgeNN.legacy.* (educational purposes)
  • Functions: Complete activation and loss function library
  • Examples: example.py - Complete MNIST classification demo

Performance

| Implementation | Speed | MNIST Accuracy |
|----------------|-------|----------------|
| Vectorized | 38,000+ samples/sec | 93%+ in <2s |

Highlights:

  • 100x+ speedup over scalar implementations
  • Production-ready performance with educational clarity
  • Memory efficient vectorized operations
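The speedup over scalar implementations comes from replacing per-element Python loops with batched NumPy calls that dispatch to compiled BLAS routines. A rough, self-contained comparison (exact timings are machine-dependent):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))   # batch of 200 samples, 50 features
W = rng.standard_normal((50, 32))    # layer weights: 50 -> 32

# Scalar-style: explicit Python loops over samples, units, and features.
t0 = time.perf_counter()
out_loop = [[sum(X[i, k] * W[k, j] for k in range(50)) for j in range(32)]
            for i in range(200)]
loop_time = time.perf_counter() - t0

# Vectorized: one matrix multiply for the whole batch.
t0 = time.perf_counter()
out_vec = X @ W
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.4f}s  vectorized: {vec_time:.6f}s  "
      f"speedup: {loop_time / vec_time:.0f}x")
```

Both paths compute the same values; only the dispatch overhead differs, which is where the 100x+ figure comes from.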

Complete Example

See example.py for a full MNIST classification demo (93%+ accuracy in under 2 seconds).


TODO List

Based on comprehensive comparison with PyTorch and NumPy:

CRITICAL MISSING FEATURES (High Priority):

  1. TENSOR SHAPE OPERATIONS:

    • reshape() : Change tensor dimensions (tensor.reshape(2, -1)) - COMPLETED
    • transpose() : Swap dimensions (tensor.transpose(0, 1)) - COMPLETED
    • view() : Memory-efficient reshape (tensor.view(-1, 5)) - COMPLETED
    • flatten() : Convert to 1D (tensor.flatten()) - COMPLETED
    • squeeze() : Remove size-1 dims (tensor.squeeze()) - COMPLETED
    • unsqueeze() : Add size-1 dims (tensor.unsqueeze(0)) - COMPLETED
  2. MATRIX OPERATIONS:

    • matmul() / @ : Matrix multiplication with broadcasting - COMPLETED
    • dot() : Vector dot product
  3. TENSOR COMBINATION:

    • cat() : Join along existing dim (torch.cat([a, b], dim=0))
    • stack() : Join along new dim (torch.stack([a, b]))
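For reference, the operations above follow NumPy/PyTorch semantics. A quick NumPy demonstration of the expected shapes (NumPy stands in for forgeNN's Tensor here):

```python
import numpy as np

t = np.arange(6)                            # shape (6,)

# Shape operations from the list above, in their NumPy forms:
assert t.reshape(2, -1).shape == (2, 3)     # reshape with an inferred dim
assert t.reshape(2, 3).T.shape == (3, 2)    # transpose / swap dims
assert t.reshape(2, 3).flatten().shape == (6,)
assert t[None, :].shape == (1, 6)           # unsqueeze: add a size-1 dim
assert t[None, :].squeeze().shape == (6,)   # squeeze: drop size-1 dims

# Matrix multiplication with broadcasting over leading batch dims:
a = np.ones((2, 3, 4))
b = np.ones((4, 5))
assert (a @ b).shape == (2, 3, 5)

# Tensor combination: cat joins along an existing dim, stack adds a new one.
x, y = np.zeros((2, 3)), np.ones((2, 3))
assert np.concatenate([x, y], axis=0).shape == (4, 3)   # cat
assert np.stack([x, y]).shape == (2, 2, 3)              # stack
```

Matching these shape rules is what makes the COMPLETED items drop-in familiar for PyTorch users.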

IMPORTANT FEATURES (Medium Priority):

  1. ADVANCED ACTIVATIONS:

    • lrelu() : AVAILABLE as forgeNN.functions.activation.LRELU (needs fixing)
    • swish() : AVAILABLE as forgeNN.functions.activation.SWISH (needs fixing)
    • gelu() : Gaussian Error Linear Unit (missing)
    • elu() : Exponential Linear Unit (missing)
  2. TENSOR UTILITIES:

    • split() : Split into chunks
    • chunk() : Split into equal pieces
    • permute() : Rearrange dimensions
    • contiguous() : Make tensor memory-contiguous (tensor.contiguous()) - COMPLETED
  3. INDEXING:

    • Boolean indexing: tensor[tensor > 0]
    • Fancy indexing: tensor[indices]
    • gather() : Select along dimension
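The textbook definitions of the listed activations, plus NumPy equivalents of gather() and boolean indexing, sketched for reference (these are the standard formulas, not forgeNN's implementations):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # x for positive inputs, a small slope for negative ones
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # smooth exponential saturation for negative inputs
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def swish(x):
    # x * sigmoid(x)
    return x / (1 + np.exp(-x))

def gelu(x):
    # tanh approximation commonly used in practice
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

x = np.array([-2.0, 0.0, 2.0])
print(leaky_relu(x), elu(x), swish(x), gelu(x))

# gather(): select along a dimension; np.take_along_axis is the NumPy analog.
m = np.array([[10, 20], [30, 40]])
idx = np.array([[1], [0]])
assert (np.take_along_axis(m, idx, axis=1) == [[20], [30]]).all()

# Boolean indexing: keep elements matching a mask.
assert (m[m > 15] == [20, 30, 40]).all()
```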

NICE-TO-HAVE (Lower Priority):

  1. LINEAR ALGEBRA:

    • norm() : Vector/matrix norms
    • det() : Matrix determinant
    • inverse() : Matrix inverse
  2. CONVENIENCE:

    • clone() : Deep copy
    • detach() : Remove from computation graph
    • requires_grad_(): In-place grad requirement change
  3. INFRASTRUCTURE:

    • Better error messages for shape mismatches
    • Memory-efficient operations
    • API consistency improvements
    • Comprehensive documentation
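Since forgeNN is NumPy-backed, the linear-algebra items above map directly onto numpy.linalg; a sketch of the target semantics:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])

print(np.linalg.norm(A))    # Frobenius norm: sqrt(4 + 1 + 1 + 9)
print(np.linalg.det(A))     # determinant: 2*3 - 1*1 = 5
# inverse() should satisfy A @ inv(A) == identity:
assert np.allclose(A @ np.linalg.inv(A), np.eye(2))
```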

PRIORITY ORDER:

  1. Shape operations (reshape, transpose, flatten)
  2. Matrix multiplication (matmul, @)
  3. Tensor combination (cat, stack)
  4. More activations (leaky_relu, gelu)
  5. Documentation and error handling

Contributing

I am not currently accepting contributions, but I'm always open to suggestions and feedback!

Acknowledgments

  • Inspired by educational automatic differentiation tutorials
  • Built for both learning and production use
  • Optimized with modern NumPy practices
  • Available on PyPI: pip install forgeNN

Download files


Source Distribution

forgenn-1.0.4a0.tar.gz (42.7 kB)

Uploaded Source

Built Distribution


forgenn-1.0.4a0-py3-none-any.whl (30.6 kB)

Uploaded Python 3

File details

Details for the file forgenn-1.0.4a0.tar.gz.

File metadata

  • Download URL: forgenn-1.0.4a0.tar.gz
  • Upload date:
  • Size: 42.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.4

File hashes

Hashes for forgenn-1.0.4a0.tar.gz:

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | bfda082de305b69fb63764e8ac7b0e3a8defaa60cee2c1f185278e5a2f95f363 |
| MD5 | 3243195c18209b73186fa9939994db0b |
| BLAKE2b-256 | 8eadba82e085dd241a42cd261564646ccb2406c1d403cc6f3988a788055d0e9a |


File details

Details for the file forgenn-1.0.4a0-py3-none-any.whl.

File metadata

  • Download URL: forgenn-1.0.4a0-py3-none-any.whl
  • Upload date:
  • Size: 30.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.4

File hashes

Hashes for forgenn-1.0.4a0-py3-none-any.whl:

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | e603ea5884b0f451a65b63ef065577904967f58f2fbb84efe4117b6a8560e471 |
| MD5 | eb2be94ebac07146ca0621a1a7e799ce |
| BLAKE2b-256 | 4c138a29d439ac9f341665c83afbd0feec414e7cc6ed68b437a3cf270927c139 |

