Deep Learning Framework
A deep learning framework lets us build models quickly and easily, without getting into the details of the underlying algorithms. It provides a clear and concise way to define models from a collection of pre-built, optimized components.
Important
You must download the kaggle.json file from your Kaggle account and place it in the .kaggle folder in your user (home) directory.
The framework consists of the following modules:
- Data pre-processing Module
- Evaluation Module
- Utils Module
- Visualization Module
- Optimizer Module
- Core Module
- Model Class
Data pre-processing Module
The module responsible for loading, transforming, and normalizing data.
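The framework's own loaders appear in the usage example further below; as an illustration of the normalization step, a min-max rescaling of image arrays (the function name here is an assumption, not the framework's API) might look like:

```python
import numpy as np

def normalize_minmax(train, validation):
    """Scale pixel values into [0, 1] using the training set's range.

    Hypothetical helper -- the framework's normalize_data may differ.
    """
    lo, hi = train.min(), train.max()
    scale = (hi - lo) or 1.0          # avoid division by zero on constant data
    return (train - lo) / scale, (validation - lo) / scale
```

Note that the validation set is scaled with the *training* statistics, so no information leaks from validation into preprocessing.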
Evaluation Module
The module responsible for calculating model accuracy; the numbers of true positives, false positives, true negatives, and false negatives; the precision, recall, and F1 scores; and building the confusion matrix.
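The metrics above follow the standard definitions. A minimal NumPy sketch (function names are illustrative, not the framework's API):

```python
import numpy as np

def confusion_counts(y_true, y_pred, positive=1):
    """Return TP, FP, TN, FN for one class against all others."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    tn = np.sum((y_pred != positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    return tp, fp, tn, fn

def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from the confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```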
Utils Module
The module responsible for saving and loading models, compressed or uncompressed.
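Compressed model persistence of this kind is typically pickle plus gzip. A sketch of the idea (these helpers are assumptions; the framework's own methods are `save_model_compressed` / `load_model_compressed`, shown in the usage example below):

```python
import gzip
import pickle

def save_model_gz(model, path):
    """Serialize a model object to a gzip-compressed pickle file."""
    with gzip.open(path, "wb") as f:
        pickle.dump(model, f)

def load_model_gz(path):
    """Load a model previously saved with save_model_gz."""
    with gzip.open(path, "rb") as f:
        return pickle.load(f)
```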
Visualization Module
The module responsible for viewing input samples, whether RGB or grayscale; plotting live graphs of accuracy and loss during training; plotting static graphs between any given inputs and outputs; and drawing the confusion matrix as a table.
Optimizer Module
The module responsible for implementing different optimization algorithms:
- Adam
- AdaDelta
- AdaGrad
- RMSProp
- Momentum
- Vanilla gradient descent
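As an example of one of these, the standard AdaGrad update divides the learning rate by the root of the accumulated squared gradients, so frequently-updated parameters take smaller steps. A sketch (not the framework's `Optimizer` API):

```python
import numpy as np

def adagrad_step(param, grad, cache, alpha=0.2, eps=1e-8):
    """One AdaGrad update.

    cache accumulates grad**2 over time; each parameter's effective
    learning rate shrinks as its gradient history grows.
    """
    cache = cache + grad ** 2
    param = param - alpha * grad / (np.sqrt(cache) + eps)
    return param, cache
```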
Core Module
The core of the framework, responsible for building the neural network; it consists of the following submodules.
Net Module
Defines the neural network: its layers, the activation function of each layer, and the method for loss calculation.
Layers
Layer is a callable object, where calling performs the forward pass and calculates local gradients.
Linear
A simple fully connected layer.
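The "callable layer" design described above can be sketched as follows: calling the layer runs the forward pass and caches what the backward pass needs. This is an illustrative class, not the framework's `Linear` (which also takes an optimizer argument):

```python
import numpy as np

class LinearSketch:
    """A callable fully connected layer: y = x @ W + b."""

    def __init__(self, in_dim, out_dim):
        self.W = np.random.randn(in_dim, out_dim) * 0.01
        self.b = np.zeros(out_dim)

    def __call__(self, x):
        self.x = x                      # cached for the local gradients
        return x @ self.W + self.b

    def backward(self, grad_out):
        self.dW = self.x.T @ grad_out   # local gradient w.r.t. weights
        self.db = grad_out.sum(axis=0)  # local gradient w.r.t. bias
        return grad_out @ self.W.T      # gradient passed to the previous layer
```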
CONV
It is the first layer, extracting features from the input image. The kernel is the layer parameter: it is slid over the input image, and at each position the overlapping values are multiplied element-wise and summed (a convolution).
LCN
Local Contrast Normalization is a type of normalization that performs local subtraction and division normalizations, enforcing a sort of local competition between adjacent features in a feature map, and between features at the same spatial location in different feature maps.
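A simplified sketch of the subtractive-then-divisive idea, using a plain box window over a single feature map (the framework's LCN may use a Gaussian window and also normalize across feature maps):

```python
import numpy as np

def local_contrast_normalize(fmap, size=3, eps=1e-5):
    """Subtract the local mean, then divide by the local standard deviation."""
    h, w = fmap.shape
    r = size // 2
    padded = np.pad(fmap, r, mode="edge")
    out = np.empty_like(fmap, dtype=float)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size]
            centered = fmap[i, j] - window.mean()      # subtractive step
            out[i, j] = centered / (window.std() + eps)  # divisive step
    return out
```

On a constant input every local mean equals the pixel value, so the output is all zeros: only local *contrast* survives the transform.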
MaxPool
We perform pooling to reduce the number of parameters and computations. There are different types of pooling operations; the most common are max pooling and average pooling.
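Max pooling keeps only the maximum of each window, shrinking the feature map. A minimal single-channel sketch (illustrative, not the framework's `MaxPool` API):

```python
import numpy as np

def maxpool2d(x, size=2, stride=2):
    """Max-pool a 2D feature map with a square window."""
    h, w = x.shape
    oh, ow = (h - size) // stride + 1, (w - size) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i * stride:i * stride + size,
                          j * stride:j * stride + size].max()
    return out
```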
Flatten
Converts the data into a 1D array, creating a single feature vector. After flattening, the data is forwarded to a fully connected layer for final classification.
Losses
An abstract module for implementing different loss functions.
Multinomial_Logistic_Regression
A class implementing the loss calculation and its gradient using multinomial logistic regression (cross-entropy).
MeanSquareLoss
A class implementing the loss calculation and its gradient using mean square loss.
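Both losses follow standard formulas. A sketch of each, returning the loss and its gradient with respect to the predictions (illustrative functions, not the framework's classes):

```python
import numpy as np

def mean_square_loss(pred, target):
    """MSE = mean((pred - target)**2); gradient is 2*(pred - target)/n."""
    diff = pred - target
    loss = np.mean(diff ** 2)
    grad = 2 * diff / diff.size
    return loss, grad

def cross_entropy_softmax(logits, labels):
    """Softmax cross-entropy; its gradient is the well-known probs - one_hot."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    n = logits.shape[0]
    loss = -np.log(probs[np.arange(n), labels] + 1e-12).mean()
    grad = probs.copy()
    grad[np.arange(n), labels] -= 1
    return loss, grad / n
```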
Activations
Implementation of different activation functions:
- ReLU
- Leaky ReLU
- Sigmoid
- Softmax
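The four activations listed above have standard definitions, sketched here in NumPy (these free functions are illustrative; in the framework they are layer objects such as `Sigmoid()` and `Softmax()`):

```python
import numpy as np

def relu(x):
    """Zero out negative values."""
    return np.maximum(0, x)

def leaky_relu(x, slope=0.01):
    """Like ReLU, but negative inputs keep a small slope."""
    return np.where(x > 0, x, slope * x)

def sigmoid(x):
    """Squash values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    """Normalize a vector of scores into a probability distribution."""
    z = np.exp(x - x.max())   # subtract the max for numerical stability
    return z / z.sum()
```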
Model Class
The module that groups all the framework components together and performs training and evaluation.
How to use:
Below is an example of building and training a model: a two-layer neural network with sigmoid and softmax activations, AdaGrad optimization, and cross-entropy loss.
Code sample:

```python
from activations import *
from layer import *
from losses import *
from linear import *
from CNN import *
from net import *
from model import *
from Datamodule import *
from Evaluation import *
from optimizer import *
from Utils import *
from Visualization import *

# Load, transform, and normalize the data
train, validation, test = load_data()
train_label, validation_label, train_array, validation_array, test_array = \
    transform_data(train, validation, test)
train_array, validation_array = normalize_data(train_array, validation_array)

# One optimizer instance per trainable layer
optim1 = Optimizer("AdaGrad", alpha=0.2)
optim2 = Optimizer("AdaGrad", alpha=0.2)

# Two linear layers with sigmoid and softmax activations, cross-entropy loss
model_linear = Model(
    layers=[Linear(784, 20, optim1), Sigmoid(), Linear(20, 10, optim2), Softmax()],
    loss=Multinomial_Logistic_Regression(),
)

# Train until the loss threshold (0.5) is reached
(pred_training_np_AdaGrad, losses_training_np_AdaGrad,
 pred_validation_np_AdaGrad, losses_validation_np_AdaGrad,
 epochs_no_AdaGrad) = model_linear.train_by_Loss(
    train_array, train_label, validation_array, validation_label, 0.5)

# Live plot of the validation loss during training
losses_validation_np_AdaGrad = losses_validation_np_AdaGrad.reshape(-1, 1)
dict_losses_validation = {"loss_validation_linear": losses_validation_np_AdaGrad}
viz = visualization()
viz.live_visualization(dict_losses_validation)

# Accuracy
arr_acc_AdaGrad = model_linear.evaluate_accuracy(validation_label, pred_validation_np_AdaGrad)

# Visualizing accuracy
dict_x = {"epochs": list(range(1, epochs_no_AdaGrad + 1))}
dict_y = {"validation accuracy": arr_acc_AdaGrad}
viz.visualize(dict_x, dict_y)

# Confusion matrix
evaluate = Evaluation()
arr_pred_AdaGrad_Epoch = pred_validation_np_AdaGrad.reshape(epochs_no_AdaGrad, -1, 1)[0]
confusionMatrixDict = evaluate.get_confusion_matrix_components(
    10, validation_label, arr_pred_AdaGrad_Epoch)
viz.draw_table(confusionMatrixDict)

# Saving the model
utils_ = utils()
utils_.save_model_compressed(model_linear, "model_linear")

# Loading the model
loaded_model_linear = utils_.load_model_compressed("model_linear")
```
Project details
File details
Details for the file PyVisionTeam17-0.0.5.tar.gz.
File metadata
- Download URL: PyVisionTeam17-0.0.5.tar.gz
- Upload date:
- Size: 38.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.0 setuptools/41.2.0 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.7.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2f1db24d5213870c7e31fb61a34b7dd170e298470a9128b5e387d36c07aac59d |
| MD5 | 86f7bd2399c26ecdd4abb30c4ac88f4b |
| BLAKE2b-256 | 587e1d6216cf0033cf80fd7d435f3020ebbc78ddd5976739259afdcb7abc48b1 |
File details
Details for the file PyVisionTeam17-0.0.5-py3-none-any.whl.
File metadata
- Download URL: PyVisionTeam17-0.0.5-py3-none-any.whl
- Upload date:
- Size: 19.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.0 setuptools/41.2.0 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.7.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a11494e3d7f858c79f90144cb5d50f8829c25497252e3b57e8eb4261ecb869a5 |
| MD5 | 145cca7280046317ab90185fcda3be25 |
| BLAKE2b-256 | eba169ee3c6404563ae366d63739dcc18b7815e7ea558dacfc116237c4c8c57a |