
Project description

Custom Neural Network Library

This repository is a mini-library that implements a simple version of a feedforward neural network (FNN) and a convolutional neural network (CNN) from scratch using Python and PyTorch. PyTorch is used solely for mathematical and element-wise operations on tensors (without autograd) and for speeding up computations on the GPU; simply put, it serves as a replacement for NumPy with GPU acceleration. The library provides basic functionality for building, training, and evaluating custom neural network models for both regression and classification tasks.
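
To make the idea concrete, here is a rough sketch (illustration only, not the library's internal code) of the kind of plain tensor math this approach builds on: a fully connected layer with ReLU, with the forward and backward passes written by hand and optionally run on the GPU.

import torch

# Sketch of the approach (not the library's internals): a fully connected
# layer with ReLU, forward and backward written as plain tensor math.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.rand(4, 2, device=device)        # batch of 4 samples, 2 features
w = torch.rand(2, 3, device=device) * 0.1  # weights
b = torch.zeros(3, device=device)          # biases

z = x @ w + b                              # linear forward pass
a = torch.clamp(z, min=0.0)                # ReLU, element-wise

grad_a = torch.ones_like(a)                # placeholder upstream gradient
grad_z = grad_a * (z > 0).float()          # ReLU derivative, by hand
grad_w = x.t() @ grad_z                    # weight gradient, no autograd
grad_b = grad_z.sum(dim=0)                 # bias gradient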

Features

  • Feedforward Neural Network (FNN): A fully connected neural network suitable for regression and classification tasks.
  • Convolutional Neural Network (CNN): A simplified CNN implementation with convolutional and max-pooling layers.
  • Activation Functions: Includes ReLU, Leaky ReLU, Sigmoid, and Linear activations.
  • Loss Functions: Support for mean squared error (MSE), binary cross-entropy (BCE), and categorical cross-entropy (CCE) losses.
  • Optimizers: Implementations of stochastic gradient descent (SGD) and Adam optimizers.
  • Metrics: Accuracy metric for classification tasks (especially useful for one-hot encoded data) and R² score for regression tasks; a rough sketch of both is shown below.
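
The two metrics map directly onto a few tensor operations. A minimal sketch (not the library's implementation, which may differ in details) could look like this:

import torch

# Minimal sketch of the metrics (not the library's implementation).
def accuracy_one_hot(y_true, y_pred):
    # y_true: one-hot targets, y_pred: raw scores or probabilities, both 2D tensors
    return (y_pred.argmax(dim=1) == y_true.argmax(dim=1)).float().mean().item()

def r2_score(y_true, y_pred):
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return (1.0 - ss_res / ss_tot).item()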

Installation

pip install vladk-neural-network

Usage

Data format examples:

Example for regression:

# sample shape (2, 1) - 2 input values, 1 output value
dataset = [
    {
        "input": [0.1, 0.2],
        "output": [0.15],
    },
    {
        "input": [0.8, 0.9],
        "output": [0.7],
    },
]

Example for classification (output values are one-hot encoded):

# sample shape (4, 2) - 4 input values, 2 output one-hot encoded values
dataset = [
    {
        "input": [0.13, 0.22, 0.37, 0.41],
        "output": [1.0, 0.0],
    },
    {
        "input": [0.76, 0.87, 0.91, 0.93],
        "output": [0.0, 1.0],
    },
]
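
If your labels start out as integer class indices, a small helper (hypothetical, not part of the library) can produce the one-hot output lists shown above:

# Hypothetical helper (not part of the library): integer class label -> one-hot list
def to_one_hot(label, num_classes):
    return [1.0 if i == label else 0.0 for i in range(num_classes)]

dataset = [
    {"input": [0.13, 0.22, 0.37, 0.41], "output": to_one_hot(0, 2)},  # [1.0, 0.0]
    {"input": [0.76, 0.87, 0.91, 0.93], "output": to_one_hot(1, 2)},  # [0.0, 1.0]
]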

Model creation examples:

Feedforward Neural Network for regression:

from vladk_neural_network.model.activation import Linear, Relu
from vladk_neural_network.model.base import NeuralNetwork
from vladk_neural_network.model.layer import FullyConnected, Input
from vladk_neural_network.model.loss import MeanSquaredError
from vladk_neural_network.model.metric import R2Score
from vladk_neural_network.model.optimizer import SGD

# Build model
layers = [
    FullyConnected(64, Relu()),
    FullyConnected(64, Relu()),
    FullyConnected(1, Linear()),
]
nn = NeuralNetwork(
    Input(2),
    layers,
    optimizer=SGD(),
    loss=MeanSquaredError(),
    metric=R2Score()
)

# Train model
history = nn.fit(train_dataset, test_dataset, epochs=20, batch_size=1, verbose=True)

# Use the model for prediction
prediction = nn.predict(test_dataset)
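
For a quick end-to-end check, the train_dataset and test_dataset passed to fit() above can be toy data generated in the format described earlier; the target function used here is an arbitrary example, not anything prescribed by the library:

import random

# Toy data in the documented format: "input" has 2 values (matching Input(2)),
# "output" has 1 value. The target y = (x1 + x2) / 2 is an arbitrary example.
def make_sample():
    x1, x2 = random.random(), random.random()
    return {"input": [x1, x2], "output": [(x1 + x2) / 2]}

train_dataset = [make_sample() for _ in range(1000)]
test_dataset = [make_sample() for _ in range(200)]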

Convolutional Neural Network for classification:

from vladk_neural_network.model.activation import LeakyRelu, Linear
from vladk_neural_network.model.base import NeuralNetwork
from vladk_neural_network.model.layer import (
    Convolutional,
    Flatten,
    FullyConnected,
    Input3D,
    MaxPool2D,
)
from vladk_neural_network.model.loss import CategoricalCrossEntropy
from vladk_neural_network.model.metric import AccuracyOneHot
from vladk_neural_network.model.optimizer import Adam

# Build model using GPU acceleration and applying argmax conversion to the raw prediction probabilities
layers = [
    Convolutional(LeakyRelu(), filters_num=4, kernel_size=3, padding_type="same"),
    Convolutional(LeakyRelu(), filters_num=8, kernel_size=3),
    Convolutional(LeakyRelu(), filters_num=16, kernel_size=3),
    MaxPool2D(),
    Flatten(),
    FullyConnected(64, LeakyRelu()),
    FullyConnected(10, Linear()),
]
cnn = NeuralNetwork(
    Input3D((1, 28, 28)),
    layers,
    optimizer=Adam(),
    loss=CategoricalCrossEntropy(),
    metric=AccuracyOneHot(),
    convert_prediction='argmax',
    use_gpu=True
)

# Train model
cnn.fit(train_dataset, test_dataset, epochs=10, batch_size=1, verbose=True)

# Use the model for prediction
prediction = cnn.predict(test_dataset)

Several examples, including training feedforward and convolutional neural networks, are available as Jupyter notebooks in the notebooks/ folder. You can view and run them to see how to use the library for different tasks.

License

This project is licensed under the MIT License. See the LICENSE file for more details.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vladk_neural_network-0.1.16.tar.gz (12.8 kB)

Uploaded Source

Built Distribution

vladk_neural_network-0.1.16-py3-none-any.whl (13.3 kB)

Uploaded Python 3

File details

Details for the file vladk_neural_network-0.1.16.tar.gz.

File metadata

  • Download URL: vladk_neural_network-0.1.16.tar.gz
  • Upload date:
  • Size: 12.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.3 Linux/6.8.0-41-generic

File hashes

Hashes for vladk_neural_network-0.1.16.tar.gz:

  • SHA256: c3d24e6ed8a408d46f1c0d1138d6d4c7283d3f147f6a3a9ffd029fab3bd5c0d4
  • MD5: 0c049c8d3d674b48dfa39677b44dc42a
  • BLAKE2b-256: 0f36bd0c0b15ab1cc7a42703a7bd2d3ffda58a8355bf2976478364a90c9ecda5

See more details on using hashes here.

File details

Details for the file vladk_neural_network-0.1.16-py3-none-any.whl.

File metadata

File hashes

Hashes for vladk_neural_network-0.1.16-py3-none-any.whl:

  • SHA256: f608e9bb2133a83ee0ec73df33bcbecf835a50f6928cebf4595103c81b250670
  • MD5: 4f6d6553eac7c6a9bea68b4daf190023
  • BLAKE2b-256: cb075fa37b87e495502b415f23b20a7dee6a9e133fbc9191afb3b455cc1a1043

See more details on using hashes here.
