# Fawern-NN: Neural Network Library in Pure Python

A lightweight neural network implementation built from scratch, using only NumPy for the core computations. Fawern-NN provides a simple, intuitive interface for building, training, and evaluating neural networks with minimal dependencies.
## Features

- Pure Python implementation with minimal dependencies (NumPy, Matplotlib, scikit-learn)
- Keras-inspired API for easy model building and training
- Support for various activation functions:
  - Sigmoid
  - Tanh
  - ReLU
  - Leaky ReLU
  - Softmax
  - Linear
- Customizable network architecture with flexible layer definitions
- Support for mini-batch training
- Built-in evaluation metrics and visualization tools
- Extensible design for adding custom activation functions
## Installation

```bash
pip install fawern-nn
```
## Quick Start

### XOR Problem Example

```python
import numpy as np
from fawern_nn.nn import Layers, NInput, NLayer

# XOR problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Create the model
model = Layers()

# Add layers
model.add(NInput(2))
model.add(NLayer(4, activation='tanh'))
model.add(NLayer(4, activation='tanh'))
model.add(NLayer(1, activation='sigmoid'))

# Train the model
model.train_model(X, y, loss_type='categorical', iterations=10000, learning_rate=0.1, batch_size=4)

# Evaluate the model
accuracy, conf_matrix = model.evaluate_trained_model()
print(f"Accuracy: {accuracy}")
print(f"Confusion Matrix:\n{conf_matrix}")

# Visualize training progress
model.show_loss_graph()
```
## Core Components

### Layer Classes

#### Layers

The main model container for building and training neural networks.

```python
model = Layers()
```

Methods:

- `add(layer)`: Add a layer to the model
- `train_model(x, y, loss_type, iterations, learning_rate, batch_size)`: Train the model
  - `x`: Input data (NumPy array)
  - `y`: Target data (NumPy array)
  - `loss_type`: Loss function to use (`'categorical'`, `'mse'`, or `'mae'`)
  - `iterations`: Number of training iterations
  - `learning_rate`: Learning rate for weight updates
  - `batch_size`: Size of mini-batches used during training
- `evaluate_trained_model()`: Evaluate model performance
- `show_loss_graph()`: Visualize training loss over iterations
- `predict_input()`: Get model predictions
#### NInput

The input layer specification.

```python
input_layer = NInput(input_shape)
```

Parameters:

- `input_shape`: Number of input features
#### NLayer

A standard fully connected neural network layer.

```python
layer = NLayer(num_neurons, activation='linear', use_bias=True)
```

Parameters:

- `num_neurons`: Number of neurons in the layer
- `activation`: Activation function (default: `'linear'`)
- `use_bias`: Whether to use a bias term (default: `True`)
- `function_name`: Optional name for a custom activation function
- `function_formula`: Optional formula for a custom activation function

Methods:

- `set_weights(output_shape, new_weights)`: Set the layer weights
- `get_weights()`: Get the layer weights
- `set_activation(activation)`: Set the activation function
- `get_activation()`: Get the activation function
- `forward(input_data)`: Perform forward propagation
#### FlattenLayer

A layer that flattens multi-dimensional input.

```python
flatten = FlattenLayer()
```

Methods:

- `forward(input_data)`: Flatten the input data
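Flattening simply reshapes each sample into a one-dimensional vector while preserving the batch dimension. A minimal plain-NumPy sketch of the idea (illustrative only, not the library's internal code):

```python
import numpy as np

def flatten_forward(input_data):
    """Reshape a batch of multi-dimensional samples into 2-D (batch, features)."""
    batch_size = input_data.shape[0]
    return input_data.reshape(batch_size, -1)

# A batch of four 28x28 "images" becomes four 784-dimensional vectors
images = np.zeros((4, 28, 28))
flat = flatten_forward(images)
print(flat.shape)  # (4, 784)
```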
### Activation Functions

The `ActivationFunctions` class provides the built-in activation functions:

- `sigmoid`: Sigmoid activation (outputs in the range 0 to 1)
- `tanh`: Hyperbolic tangent (outputs in the range -1 to 1)
- `relu`: Rectified Linear Unit (`max(0, x)`)
- `leaky_relu`: Leaky ReLU (small slope for negative inputs)
- `linear`: Linear/identity function
- `softmax`: Softmax function for multi-class classification
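For reference, these names correspond to the following standard definitions (a plain-NumPy sketch, not the library's internal implementation; the leaky-ReLU slope of 0.01 is an assumed conventional default):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):  # alpha: assumed conventional slope
    return np.where(x > 0, x, alpha * x)

def linear(x):
    return x

def softmax(x):
    # Subtract the row-wise max for numerical stability
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)

print(relu(np.array([-2.0, 3.0])))  # [0. 3.]
```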
### Adding Custom Activation Functions

```python
from fawern_nn.nn import ActivationFunctions

# Create an activation functions instance
activations = ActivationFunctions()

# Define a custom function
def custom_activation(x):
    return x**2

# Define its derivative (needed for backpropagation)
def custom_activation_derivative(x):
    return 2 * x

# Register both with the available functions
activations.add_activation_function('custom', custom_activation)
activations.add_activation_function('custom_derivative', custom_activation_derivative)
```
## Examples

### Binary Classification

```python
import numpy as np
from fawern_nn.nn import Layers, NInput, NLayer

# Create a binary classification dataset
X = np.random.randn(100, 2)
y = np.array([(1 if x[0] + x[1] > 0 else 0) for x in X]).reshape(-1, 1)

# Create the model
model = Layers()
model.add(NInput(2))
model.add(NLayer(4, activation='relu'))
model.add(NLayer(1, activation='sigmoid'))

# Train the model
model.train_model(X, y, loss_type='categorical', iterations=1000, learning_rate=0.01)

# Evaluate and visualize
accuracy, conf_matrix = model.evaluate_trained_model()
print(f"Accuracy: {accuracy}")
model.show_loss_graph()
```
### Regression

```python
import numpy as np
from fawern_nn.nn import Layers, NInput, NLayer

# Create a regression dataset: noisy sine wave
X = np.linspace(-5, 5, 100).reshape(-1, 1)
y = np.sin(X) + 0.1 * np.random.randn(100, 1)

# Create the model
model = Layers()
model.add(NInput(1))
model.add(NLayer(10, activation='tanh'))
model.add(NLayer(1, activation='linear'))

# Train the model
model.train_model(X, y, loss_type='mse', iterations=2000, learning_rate=0.005)

# Evaluate
mse = model.evaluate_trained_model()
print(f"Mean Squared Error: {mse}")
model.show_loss_graph()
```
### Multi-Layer Network

```python
import numpy as np
from fawern_nn.nn import Layers, NInput, NLayer

# Create a multi-class dataset (simplified MNIST-like)
X = np.random.randn(500, 28 * 28)                   # 28x28 flattened images
y = np.eye(10)[np.random.randint(0, 10, size=500)]  # One-hot encoded labels

# Create the model
model = Layers()
model.add(NInput(28 * 28))
model.add(NLayer(128, activation='relu'))
model.add(NLayer(64, activation='relu'))
model.add(NLayer(10, activation='softmax'))

# Train the model
model.train_model(X, y, loss_type='categorical', iterations=50, learning_rate=0.001, batch_size=32)

# Evaluate the model
accuracy, conf_matrix = model.evaluate_trained_model()
print(f"Accuracy: {accuracy}")
model.show_loss_graph()
```
## Technical Details

### Backpropagation Implementation

Fawern-NN trains networks with standard backpropagation:

1. Forward pass through all layers
2. Calculate the error at the output layer
3. Propagate the error backward through the network
4. Update the weights based on the calculated gradients
The implementation supports mini-batch training for better performance on larger datasets.
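The steps above can be sketched for a tiny two-layer network in plain NumPy. This is an illustrative sketch of the algorithm, not Fawern-NN's actual code; the architecture, initialization, and learning rate are arbitrary choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Arbitrary small architecture: 2 inputs -> 4 hidden units -> 1 output
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
lr = 0.5
losses = []

for _ in range(5000):
    # 1. Forward pass through all layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))

    # 2. Error at the output layer (MSE gradient times sigmoid derivative)
    d_out = (out - y) * out * (1 - out)

    # 3. Propagate the error backward through the network
    d_h = (d_out @ W2.T) * h * (1 - h)

    # 4. Update weights based on the calculated gradients
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Mini-batch training follows the same loop but updates weights on a random subset of rows per iteration instead of the full dataset.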
### Loss Functions

- `categorical`: For classification problems (uses accuracy and a confusion matrix for evaluation)
- `mse`: Mean Squared Error, for regression problems
- `mae`: Mean Absolute Error, for regression problems
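The two regression losses follow the standard definitions, shown here as a plain-NumPy reference (not the library's code):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: average of squared residuals."""
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute residuals."""
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.0, 2.5, 2.0])
print(mse(y_true, y_pred))  # (0 + 0.25 + 1) / 3 ≈ 0.4167
print(mae(y_true, y_pred))  # (0 + 0.5 + 1) / 3 = 0.5
```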
## Requirements

- Python 3.7+
- NumPy >= 1.19.0
- Matplotlib >= 3.3.0
- scikit-learn >= 0.24.0
## License

MIT License
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## Author

Fawern - GitHub
## File Details

### fawern_nn-0.1.1.tar.gz

File metadata:

- Download URL: fawern_nn-0.1.1.tar.gz
- Upload date:
- Size: 8.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `162106ced3bbd36ee14a864b02b0856bf7ec70414565aeac78f8d5449c3840a7` |
| MD5 | `c0285969a096541625b4334d50117235` |
| BLAKE2b-256 | `ae371776870423b6f0dd85afce1aba869936bc2d0c08494b02b8b7ec3c7f0663` |
### fawern_nn-0.1.1-py3-none-any.whl

File metadata:

- Download URL: fawern_nn-0.1.1-py3-none-any.whl
- Upload date:
- Size: 8.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `181d4b26c57fcfd374f189f946935524728d08bc9ac5a693a4d7d564b0148e98` |
| MD5 | `11e9acb6256e89900b6c6aa9be0bf36a` |
| BLAKE2b-256 | `6ed130bb4f35df0713b54b143170ae674249f688e3fafe5bb7472f6d99cc37e8` |