
Deep Learning Framework

Project description

Deep-Learning-Framework

The framework lets you build deep learning models quickly and easily, without getting into the details of the underlying algorithms. It provides a clear and concise way to define models from a collection of pre-built, optimized components.

Important

You must download the kaggle.json file from your Kaggle account and place it in the .kaggle folder inside your user (home) directory.

The framework consists of the following modules.

Data pre-processing Module

The module responsible for loading, transforming, and normalizing data.
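
To make the data flow concrete, here is a minimal sketch of what a normalization step like `normalize_data` might do. This body is illustrative only; the module's actual implementation may differ:

```python
import numpy as np

# Illustrative sketch: standardize both splits with statistics computed
# on the training set only, so no validation information leaks into
# preprocessing.
def normalize_data(train_array, validation_array):
    mean = train_array.mean()
    std = train_array.std()
    return (train_array - mean) / std, (validation_array - mean) / std
```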

Evaluation Module

The module responsible for calculating model accuracy; the numbers of true positives, false positives, true negatives, and false negatives; the precision, recall, and F1 scores; and for building the confusion matrix.
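
As a reminder of how those scores relate to the confusion-matrix counts, here is a small illustrative helper (not the framework's actual API):

```python
def precision_recall_f1(tp, fp, fn):
    # precision: of everything predicted positive, how much was correct
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # recall: of everything actually positive, how much was found
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1: harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```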

Utils Module

The module responsible for saving and loading models, compressed or uncompressed.

Visualization Module

The module responsible for viewing input samples, whether RGB or grayscale; plotting live graphs of accuracy and loss during training; plotting static graphs between any given inputs and outputs; and drawing the confusion matrix table.

Optimizer Module

The module responsible for implementing different optimization algorithms:

  • Adam
  • AdaDelta
  • AdaGrad
  • RMSProp
  • Momentum
  • Vanilla gradient descent
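
For intuition, here are two of those update rules sketched as free functions. The framework's own Optimizer is configured by name instead (e.g. `Optimizer("AdaGrad", alpha = 0.2)` in the usage example), so these bodies are illustrative only:

```python
import numpy as np

def sgd_step(w, grad, alpha=0.2):
    # vanilla gradient descent: step against the gradient
    return w - alpha * grad

def adagrad_step(w, grad, cache, alpha=0.2, eps=1e-8):
    # AdaGrad: scale each parameter's step by the accumulated
    # history of its squared gradients
    cache = cache + grad ** 2
    return w - alpha * grad / (np.sqrt(cache) + eps), cache
```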

Core Module

The core of the framework, responsible for building the neural network. It consists of the following submodules.

Net Module

Defines the neural network: its layers, the activation function of each layer, and the method used for loss calculation.

Layers

A layer is a callable object; calling it performs the forward pass and calculates local gradients.
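
A minimal sketch of that pattern (illustrative, not the framework's actual base class):

```python
class Layer:
    # A layer is callable: calling it runs the forward pass and caches
    # what the backward pass will need for local gradients.
    def __call__(self, x):
        self.input = x          # cached for gradient computation
        return self.forward(x)

    def forward(self, x):
        raise NotImplementedError


class Scale(Layer):
    # toy layer: multiply the input by a fixed factor
    def __init__(self, factor):
        self.factor = factor

    def forward(self, x):
        return self.factor * x
```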

Linear

A simple fully connected layer.
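
An illustrative sketch of such a layer's forward pass (the framework's real `Linear` also takes an optimizer argument, as in `Linear(784, 20, optim1)` in the usage example below):

```python
import numpy as np

class Linear:
    # Fully connected layer: y = W @ x + b (sketch only)
    def __init__(self, in_features, out_features):
        self.W = np.random.randn(out_features, in_features) * 0.01
        self.b = np.zeros((out_features, 1))

    def __call__(self, x):
        self.x = x              # cached for the backward pass
        return self.W @ x + self.b
```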

CONV

The first layer, used to extract features from the input image. The kernel is the layer's parameter: it slides over the input image, multiplying elementwise with each patch and summing the results to produce a feature map.
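
A minimal sketch of that sliding-window operation, assuming a single-channel image and kernel with no padding or stride:

```python
import numpy as np

def conv2d(image, kernel):
    # "valid" cross-correlation: slide the kernel over the image,
    # multiply elementwise, and sum each window into one output value
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```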

LCN

Local Contrast Normalization is a type of normalization that performs local subtraction and division normalizations, enforcing a sort of local competition between adjacent features in a feature map, and between features at the same spatial location in different feature maps.

MaxPool

Pooling is performed to reduce the number of parameters and computations. There are different types of pooling operations; the most common are max pooling and average pooling.
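
A minimal sketch of 2×2 max pooling, assuming the input height and width divide evenly by the window size:

```python
import numpy as np

def maxpool2d(x, size=2):
    # non-overlapping windows: keep only the largest activation in each
    h, w = x.shape[0] // size, x.shape[1] // size
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = x[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out
```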

Flatten

Converts the data into a 1D array to create a single feature vector. After flattening, the data is forwarded to a fully connected layer for final classification.
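
A one-line sketch: a 28×28 image flattens to the 784-element vector that `Linear(784, 20, ...)` in the usage example expects:

```python
import numpy as np

def flatten(x):
    # collapse an (H, W) or (C, H, W) feature map into a column vector
    return x.reshape(-1, 1)
```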

Losses

An abstract module for implementing different loss functions.

Multinomial_Logistic_Regression

A class implementing the loss calculation and its gradient using multinomial logistic regression (cross-entropy).
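
For intuition, the cross-entropy part of that loss can be sketched as follows (illustrative, not the class's actual interface):

```python
import numpy as np

def cross_entropy(probs, y_true):
    # probs: predicted class probabilities (e.g. a softmax output)
    # y_true: one-hot target vector
    # the small epsilon guards against log(0)
    return -np.sum(y_true * np.log(probs + 1e-12))

# With softmax outputs, the gradient of this loss with respect to the
# pre-softmax scores takes the simple form (probs - y_true).
```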

MeanSquareLoss

A class implementing the loss calculation and its gradient using mean squared error.
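
An illustrative sketch of that loss and its gradient (not the class's actual interface):

```python
import numpy as np

def mean_square_loss(pred, target):
    # returns the loss and its gradient with respect to pred
    diff = pred - target
    return np.mean(diff ** 2), 2.0 * diff / diff.size
```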

Activations

Implementation of different activation functions:

  • ReLU
  • Leaky ReLU
  • Sigmoid
  • Softmax
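
Illustrative numpy sketches of the four functions (not necessarily the framework's own implementations):

```python
import numpy as np

def relu(x):
    # zero out negative inputs
    return np.maximum(0, x)

def leaky_relu(x, slope=0.01):
    # like ReLU, but negative inputs keep a small slope
    return np.where(x > 0, x, slope * x)

def sigmoid(x):
    # squash inputs into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # normalize scores into a probability distribution;
    # subtracting the max keeps exp() numerically stable
    e = np.exp(x - x.max())
    return e / e.sum()
```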

Model Class

The module that groups all the framework components together and performs training and evaluation.

How to use:

Below is an example of building and training a model: a neural network with two fully connected layers, sigmoid and softmax activations, AdaGrad optimization, and cross-entropy loss.

code sample:

from activations import *
from layer import *
from losses import *
from linear import *
from CNN import *
from net import *
from model import *
from Datamodule import *
from Evaluation import *
from optimizer import *
from Utils import *
from Visualization import *

# load, transform, and normalize the data
train, validation, test = load_data()
train_label, validation_label, train_array, validation_array, test_array = transform_data(train, validation, test)
train_array, validation_array = normalize_data(train_array, validation_array)

# one optimizer instance per trainable layer
optim1 = Optimizer("AdaGrad", alpha=0.2)
optim2 = Optimizer("AdaGrad", alpha=0.2)

# a 784 -> 20 -> 10 network with sigmoid and softmax activations
# and a cross-entropy (multinomial logistic regression) loss
model_linear = Model(layers=[Linear(784, 20, optim1), Sigmoid(), Linear(20, 10, optim2), Softmax()],
                     loss=Multinomial_Logistic_Regression())

# train the model
pred_training_np_AdaGrad, losses_training_np_AdaGrad, pred_validation_np_AdaGrad, losses_validation_np_AdaGrad, epochs_no_AdaGrad = model_linear.train_by_Loss(train_array, train_label, validation_array, validation_label, 0.5)
losses_validation_np_AdaGrad = losses_validation_np_AdaGrad.reshape(-1, 1)

# live plot of the validation loss during training
dict_losses_validation = {"loss_validation_linear": losses_validation_np_AdaGrad}
viz = visualization()
viz.live_visualization(dict_losses_validation)

### accuracy
arr_acc_AdaGrad = model_linear.evaluate_accuracy(validation_label, pred_validation_np_AdaGrad)

### visualizing accuracy
dict_x = {"epochs": list(range(1, epochs_no_AdaGrad + 1))}
dict_y = {"validation accuracy": arr_acc_AdaGrad}
viz.visualize(dict_x, dict_y)

### confusion matrix
evaluate = Evaluation()
arr_pred_AdaGrad_Epoch = pred_validation_np_AdaGrad.reshape(epochs_no_AdaGrad, -1, 1)[0]
confusionMatrixDict = evaluate.get_confusion_matrix_components(10, validation_label, arr_pred_AdaGrad_Epoch)
viz.draw_table(confusionMatrixDict)

### saving the model
utils_ = utils()
utils_.save_model_compressed(model_linear, "model_linear")

### loading the model
loaded_model_linear = utils_.load_model_compressed("model_linear")

Project details


Download files

Download the file for your platform.

Source Distribution

PyVisionTeam17-0.0.4.tar.gz (38.3 kB view details)

Uploaded Source

Built Distribution


PyVisionTeam17-0.0.4-py3-none-any.whl (19.9 kB view details)

Uploaded Python 3

File details

Details for the file PyVisionTeam17-0.0.4.tar.gz.

File metadata

  • Download URL: PyVisionTeam17-0.0.4.tar.gz
  • Upload date:
  • Size: 38.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.0 setuptools/41.2.0 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.7.7

File hashes

Hashes for PyVisionTeam17-0.0.4.tar.gz
Algorithm Hash digest
SHA256 1e7e215e5bd990aabbc321c851832347d5207d12f9948508b5ef940374091cbe
MD5 8ac82c44f0e061e3a812824316a873fa
BLAKE2b-256 549a20a661adef17f7c48f69ce4dc60580b763df2bb5ed0847853bb840a4f025


File details

Details for the file PyVisionTeam17-0.0.4-py3-none-any.whl.

File metadata

  • Download URL: PyVisionTeam17-0.0.4-py3-none-any.whl
  • Upload date:
  • Size: 19.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.0 setuptools/41.2.0 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.7.7

File hashes

Hashes for PyVisionTeam17-0.0.4-py3-none-any.whl
Algorithm Hash digest
SHA256 d0d9f6891e1840882769e166c5920a396a40442b5e655237487b724c3e7f12e2
MD5 2cb20e4871de074ca5dd28ff8bc91811
BLAKE2b-256 ea3e1520d5741f47bd357570544267526f9f08809db041dd036d35705d38c818

