A Deep Learning library built from scratch using Python and NumPy
Project description
neowise Documentation
Steps to train your own Neural Network using neowise
- Install neowise (e.g. `pip install neowise`) and import it: `import neowise as nw`
- Get your data and pre-process it. Your data should have `(number_of_examples, number_of_features)` as its dimensions, and your labels should have `(number_of_output_units, number_of_examples)` as theirs. This is a must; any deviation here will raise errors!
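As a quick sanity check, the required shapes can be verified in plain NumPy before building a model (the array names and sizes here are illustrative):

```python
import numpy as np

# Illustrative data: 500 examples, 20 features each
X_train = np.random.randn(500, 20)      # (number_of_examples, number_of_features)

# Labels for a 3-class problem, one-hot encoded and then transposed so they
# end up as (number_of_output_units, number_of_examples)
classes = np.random.randint(0, 3, size=500)
y_train = np.eye(3)[classes].T          # shape (3, 500)

assert X_train.shape == (500, 20)
assert y_train.shape == (3, 500)
```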
- Create a model by calling `model = nw.Model(your_train_data, your_train_labels, your_test_data, your_test_labels, your_crossval_data, your_crossval_labels)`. If you do not have cross-validation data, enter `None` for the last two arguments.
- Add layers to your model with `model.add(layer_name, num_inputs, num_outputs, activation_function, dropout)`, giving each layer a unique name so that you know what type of layer it is; for example, if a dense layer is your first layer, name it `dense1`. Enter the number of inputs to that layer in `num_inputs` and the number of units in that layer in `num_outputs`. For `activation_function`, use any of the supported activation functions: `["relu", "sigmoid", "tanh", "softmax", "sine"]`. To prevent overflows and NaNs when using a softmax classifier, we suggest setting the activation of the layer before the output layer to `"tanh"`, since it squashes values between -1 and 1, thus preventing catastrophe. To use Dropout, set `dropout` anywhere between 0 and 1; otherwise the default value of 1 is used, i.e. no Dropout.
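To make the layer parameters concrete, here is a plain-NumPy sketch of what a dense layer with `num_inputs` inputs and `num_outputs` units computes (this is an illustration of the idea, not neowise's actual implementation):

```python
import numpy as np

def dense_forward(x, W, b, activation):
    """Forward pass of one dense layer: z = Wx + b, then the activation.
    A sketch of what a layer like `dense1` computes, not neowise's code."""
    z = W @ x + b
    if activation == "relu":
        return np.maximum(0, z)
    if activation == "tanh":
        return np.tanh(z)
    if activation == "sigmoid":
        return 1.0 / (1.0 + np.exp(-z))
    if activation == "softmax":
        e = np.exp(z - z.max(axis=0, keepdims=True))  # shift by max for stability
        return e / e.sum(axis=0, keepdims=True)
    raise ValueError(f"unsupported activation: {activation}")

# A layer with num_inputs=4 and num_outputs=3 owns a (3, 4) weight matrix
rng = np.random.default_rng(0)
W, b = rng.standard_normal((3, 4)), np.zeros((3, 1))
x = rng.standard_normal((4, 5))               # 5 examples, stored column-wise
a = dense_forward(x, W, b, "tanh")            # values squashed into [-1, 1]

# Feeding the tanh output into a softmax layer keeps its inputs bounded,
# which is why the docs recommend tanh before a softmax output
probs = dense_forward(a, rng.standard_normal((2, 3)), np.zeros((2, 1)), "softmax")
```

Each column of `probs` sums to 1, as expected of softmax class probabilities.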
- To double-check your architecture and see how many parameters will be trained, call `model.summary()`, which uses prettytable to print out a summary of your architecture.
- Train your model using `model.fit(your_train_data, your_train_labels, learning_rate, number_of_iterations, optimizer, problem_type, mini_batch_size, regularization_type, lambda, learning_rate_decay)`. Pass your training data, choose the learning rate, and set the number of iterations to train for. For `optimizer`, choose from `["GD", "Momentum", "RMSprop", "Adam"]`: `"GD"` for Gradient Descent, `"Momentum"` for Gradient Descent with Momentum, `"RMSprop"` for Root Mean Square Propagation and `"Adam"` for Adaptive Moment Estimation. To train with Batch Gradient Descent, choose `"GD"` as the `optimizer` and set `mini_batch_size` to the total number of examples in your training data; to train with Stochastic Gradient Descent, choose `"GD"` and set `mini_batch_size` to 1. For `problem_type`, choose one of the currently supported tasks: `"Binary"` for binary classification or `"Multi"` for multi-class classification. Set `mini_batch_size` to the size you want each mini-batch to be, and make sure this value is not larger than the total number of examples you have. To use L1 or L2 regularization, set `regularization_type` to `"L1"` or `"L2"` and set the regularization parameter `lambda`. If you want your `learning_rate` to decay as training progresses rather than stay constant, set the final argument, `learning_rate_decay` (`alpha_decay`), to True.
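The relationship between `mini_batch_size` and the gradient-descent variants can be seen by splitting the examples into batches yourself (a standalone NumPy illustration, not neowise's internals):

```python
import numpy as np

m = 10                           # total training examples
X = np.arange(m * 2).reshape(m, 2)

def make_batches(X, mini_batch_size):
    """Split the examples (rows of X) into consecutive mini-batches."""
    return [X[i:i + mini_batch_size] for i in range(0, len(X), mini_batch_size)]

batch_gd = make_batches(X, m)    # "GD" + mini_batch_size = m -> 1 full batch
sgd      = make_batches(X, 1)    # "GD" + mini_batch_size = 1 -> m batches of 1
mini     = make_batches(X, 4)    # mini-batch GD -> batches of 4, 4 and 2
```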
- Test the model on your test data by calling `model.test(your_test_data, your_test_labels, problem_type)`, setting `problem_type` as you did for `model.fit`. This displays a prettytable with the precision, recall and F1 scores and the accuracy on the test data.
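For reference, precision, recall and F1 are standardly defined as below; this sketch shows the usual binary-classification formulas, and neowise's exact per-class computation may differ in detail:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Standard precision, recall and F1 for binary 0/1 labels."""
    tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0])
p, r, f1 = binary_metrics(y_true, y_pred)   # p = 1.0, r = 0.5, f1 ~ 0.667
```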
- To plot the cost or accuracy against the number of iterations, call `model.plot(type_function, animate, directory, frequency)`, setting `type_function` to `"Cost"` to plot cost vs. number of iterations or `"Accuracy"` for accuracy vs. number of iterations. To create animated graphs, set `animate` to True, specify the directory in which to save the plots in `directory`, and set how often the graphs should update in `frequency`; then feed those images to a GIF creator to create animated plots.
- To save the model, call `model.save_model(file_name)`, specifying in `file_name` the directory in which to save the model along with the model's name and the extension `.h5`.
- To load a previously saved model, create a new model by calling `saved_model = nw.Model(your_train_data, your_train_labels, your_test_data, your_test_labels, your_crossval_data, your_crossval_labels)`, where these are the same data the model was trained on. Then call `saved_model.load_model(file_name)` to load the model from the directory specified in `file_name`.
These are the functionalities that neowise offers. For detailed docstrings, visit the Source Code to find out more about the project!
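Putting the steps above together, a minimal end-to-end run might look like the following sketch. It assumes neowise is installed; the data shapes, layer sizes, hyperparameter values and the `None`/`0` placeholders for the regularization arguments are illustrative assumptions, with the `model.fit` arguments given in the order shown above:

```python
import numpy as np
import neowise as nw

# Illustrative binary-classification data: 200 train / 50 test examples, 10 features
X_train = np.random.randn(200, 10)             # (examples, features)
y_train = np.random.randint(0, 2, (1, 200))    # (output_units, examples)
X_test = np.random.randn(50, 10)
y_test = np.random.randint(0, 2, (1, 50))

model = nw.Model(X_train, y_train, X_test, y_test, None, None)  # no CV data
model.add("dense1", 10, 16, "relu", 1)      # first dense layer, no dropout
model.add("dense2", 16, 1, "sigmoid", 1)    # output layer for "Binary"
model.summary()

# Adam, batch gradient descent (mini_batch_size = all 200 examples),
# no regularization, constant learning rate
model.fit(X_train, y_train, 0.01, 100, "Adam", "Binary", 200, None, 0, False)
model.test(X_test, y_test, "Binary")
model.save_model("my_model.h5")
```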