
A From-Scratch Neural Network Framework for Educational Purposes


forgeNN



Installation

pip install forgeNN

Overview

forgeNN is a modern neural network framework written from scratch by a solo developer learning ML. It uses vectorized NumPy operations for high-speed training.

Key Features

  • Vectorized Operations: NumPy-powered batch processing (100x+ speedup)
  • Dynamic Computation Graphs: Automatic differentiation with gradient tracking
  • Complete Neural Networks: From simple neurons to complex architectures
  • Production Loss Functions: Cross-entropy, MSE with numerical stability
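The numerical-stability point matters in practice: a naive softmax overflows for large logits. A minimal NumPy sketch of the log-sum-exp trick that stable cross-entropy implementations rely on (this illustrates the technique, not forgeNN's exact internals):

```python
import numpy as np

def stable_cross_entropy(logits, targets):
    """Mean cross-entropy from raw logits using the log-sum-exp trick.

    Subtracting the per-row max before exponentiating keeps exp() from
    overflowing without changing the softmax probabilities.
    """
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

logits = np.array([[1000.0, 0.0], [0.0, 1000.0]])  # would overflow naively
targets = np.array([0, 1])
print(stable_cross_entropy(logits, targets))  # ~0.0, no overflow
```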

Performance vs PyTorch

forgeNN is 3.52x faster than PyTorch on small models!

Metric                        PyTorch    forgeNN        Advantage
Training Time (MNIST)         64.72s     30.84s         2.10x faster
Test Accuracy                 97.30%     97.37%         +0.07% better
Small Models (<109k params)   Baseline   3.52x faster   Massive speedup

📊 See Full Comparison Guide for detailed benchmarks, syntax differences, and when to use each framework.

MNIST Benchmark Results

Quick Start

High-Performance Training

import forgeNN
from sklearn.datasets import make_classification

# Generate dataset (n_informative must satisfy n_classes * n_clusters_per_class
# <= 2**n_informative, so the default of 2 fails for 3 classes)
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=5)

# Create vectorized model  
model = forgeNN.VectorizedMLP(20, [64, 32], 3)
optimizer = forgeNN.VectorizedOptimizer(model.parameters(), lr=0.01)

# Convert features to a tensor once; labels stay as a NumPy array
x_batch = forgeNN.Tensor(X)

# Fast full-batch training
for epoch in range(10):
    # Forward pass
    logits = model(x_batch)
    loss = forgeNN.cross_entropy_loss(logits, y)
    
    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    
    acc = forgeNN.accuracy(logits, y)
    print(f"Epoch {epoch}: Loss = {loss.data:.4f}, Acc = {acc*100:.1f}%")
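The quick start feeds the full dataset each epoch. For true mini-batch training, a framework-agnostic shuffling/batching sketch in plain NumPy (`batch_size` is an illustrative choice):

```python
import numpy as np

def iterate_minibatches(X, y, batch_size=32, seed=0):
    """Yield shuffled (X, y) mini-batches; the last batch may be smaller."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

X = np.arange(100, dtype=float).reshape(50, 2)
y = np.arange(50)
n_batches = sum(1 for _ in iterate_minibatches(X, y, batch_size=16))
print(n_batches)  # 4 batches: sizes 16, 16, 16, 2
```

Each epoch of the training loop above would then iterate over these batches instead of the full `X`.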

Architecture

  • Main API: forgeNN.Tensor, forgeNN.VectorizedMLP (high-performance neural networks)
  • Activation Functions: forgeNN.RELU, forgeNN.SWISH, etc. + string/callable support
  • Examples: example.py - Complete MNIST classification demo

Performance

Implementation   Speed                  MNIST Accuracy
Vectorized       38,000+ samples/sec    93%+ in <2s

Highlights:

  • 100x+ speedup over scalar implementations
  • Production-ready performance with educational clarity
  • Memory efficient vectorized operations
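The "100x+ speedup over scalar implementations" comes from replacing per-element Python loops with single NumPy calls. A self-contained illustration of the same linear-layer forward pass computed both ways:

```python
import numpy as np
import time

def forward_scalar(X, W, b):
    """Per-element Python loops: one multiply-add at a time."""
    n, d = X.shape
    d_out = W.shape[1]
    out = [[0.0] * d_out for _ in range(n)]
    for i in range(n):
        for j in range(d_out):
            s = b[j]
            for k in range(d):
                s += X[i][k] * W[k][j]
            out[i][j] = s
    return np.array(out)

def forward_vectorized(X, W, b):
    """One BLAS-backed matmul for the whole batch."""
    return X @ W + b

rng = np.random.default_rng(0)
X, W, b = rng.normal(size=(64, 32)), rng.normal(size=(32, 16)), rng.normal(size=16)

t0 = time.perf_counter(); slow = forward_scalar(X, W, b); t1 = time.perf_counter()
fast = forward_vectorized(X, W, b); t2 = time.perf_counter()
assert np.allclose(slow, fast)  # identical results, very different cost
print(f"scalar {t1 - t0:.4f}s vs vectorized {t2 - t1:.6f}s")
```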

Complete Example

See example.py for a full MNIST classification demo achieving professional results.

TODO List

Based on a comprehensive comparison with PyTorch and NumPy:

CRITICAL MISSING FEATURES (High Priority):

  1. TENSOR SHAPE OPERATIONS:

    • reshape() : Change tensor dimensions (tensor.reshape(2, -1)) - COMPLETED
    • transpose() : Swap dimensions (tensor.transpose(0, 1)) - COMPLETED
    • view() : Memory-efficient reshape (tensor.view(-1, 5)) - COMPLETED
    • flatten() : Convert to 1D (tensor.flatten()) - COMPLETED
    • squeeze() : Remove size-1 dims (tensor.squeeze()) - COMPLETED
    • unsqueeze() : Add size-1 dims (tensor.unsqueeze(0)) - COMPLETED
  2. MATRIX OPERATIONS:

    • matmul() / @ : Matrix multiplication with broadcasting - COMPLETED
    • dot() : Vector dot product
  3. TENSOR COMBINATION:

    • cat() : Join along existing dim (torch.cat([a, b], dim=0))
    • stack() : Join along new dim (torch.stack([a, b]))
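For reference, the semantics these operations mirror are the NumPy ones: `reshape` with `-1` infers a dimension, `squeeze`/`unsqueeze` drop or add size-1 axes, `cat` joins along an existing axis, and `stack` creates a new one:

```python
import numpy as np

t = np.arange(12).reshape(3, 4)
print(t.reshape(2, -1).shape)         # (2, 6): -1 infers the missing dim
print(t.T.shape)                      # transpose -> (4, 3)
print(t[np.newaxis].shape)            # unsqueeze(0) -> (1, 3, 4)
print(t[np.newaxis].squeeze().shape)  # squeeze -> back to (3, 4)

a, b = np.ones((2, 3)), np.zeros((2, 3))
print(np.concatenate([a, b], axis=0).shape)  # cat along dim 0 -> (4, 3)
print(np.stack([a, b]).shape)                # stack: new leading dim -> (2, 2, 3)
```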

IMPORTANT FEATURES (Medium Priority):

  1. ADVANCED ACTIVATIONS:

    • lrelu() : AVAILABLE as forgeNN.functions.activation.LRELU (needs fixing)
    • swish() : AVAILABLE as forgeNN.functions.activation.SWISH (needs fixing)
    • gelu() : Gaussian Error Linear Unit (missing)
    • elu() : Exponential Linear Unit (missing)
  2. TENSOR UTILITIES:

    • split() : Split into chunks
    • chunk() : Split into equal pieces
    • permute() : Rearrange dimensions
    • contiguous() : Make tensor memory-contiguous (tensor.contiguous()) - COMPLETED
  3. INDEXING:

    • Boolean indexing: tensor[tensor > 0]
    • Fancy indexing: tensor[indices]
    • gather() : Select along dimension
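Of the activations listed as missing above, GELU and ELU both have short closed-form NumPy implementations. A sketch using the tanh approximation of GELU (the approximation several frameworks offer, not forgeNN's eventual API):

```python
import numpy as np

def gelu(x):
    """Tanh approximation of the Gaussian Error Linear Unit."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def elu(x, alpha=1.0):
    """Exponential Linear Unit: identity for x > 0, smooth saturation below.

    np.minimum(x, 0) keeps exp() from overflowing on large positive inputs,
    since those values are discarded by np.where anyway.
    """
    return np.where(x > 0, x, alpha * (np.exp(np.minimum(x, 0)) - 1.0))

x = np.array([-2.0, 0.0, 2.0])
print(gelu(x))  # ~[-0.0454, 0.0, 1.9546]
print(elu(x))   # ~[-0.8647, 0.0, 2.0]
```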

NICE-TO-HAVE (Lower Priority):

  1. LINEAR ALGEBRA:

    • norm() : Vector/matrix norms
    • det() : Matrix determinant
    • inverse() : Matrix inverse
  2. CONVENIENCE:

    • clone() : Deep copy
    • detach() : Remove from computation graph
    • requires_grad_(): In-place grad requirement change
  3. INFRASTRUCTURE:

    • Better error messages for shape mismatches
    • Memory-efficient operations
    • API consistency improvements
    • Comprehensive documentation

PRIORITY ORDER:

  1. Shape operations (reshape, transpose, flatten)
  2. Matrix multiplication (matmul, @)
  3. Tensor combination (cat, stack)
  4. More activations (leaky_relu, gelu)
  5. Documentation and error handling

Contributing

I am not currently accepting contributions, but I'm always open to suggestions and feedback!

Acknowledgments

  • Inspired by educational automatic differentiation tutorials
  • Built for both learning and production use
  • Optimized with modern NumPy practices
  • Available on PyPI: pip install forgeNN
