
A Library for Easy Implementation of Deep Learning Techniques


# Welcome to DarkNeuron!

DarkNeuron implements automatic deep learning, reducing the time and complexity required for non-technical users to train their own networks without compromising accuracy on image classification and object detection, two of the most in-demand techniques in autonomous systems and medical fields.

" By augmenting human performance, AI has the potential to markedly improve productivity, efficiency, workflow, accuracy and speed, both for physicians and for patients … What I’m most excited about is using the future to bring back the past: to restore the care in healthcare. " - Eric Topol


Installation

pip install DarkNeurons

DarkNeuron Target Audience:

DarkNeuron is an open-source library. Its target audience includes:

  • Experienced data scientists who want to increase productivity by reducing complexity and time.
  • Professionals in the autonomous and healthcare industries who need high-accuracy models for production use.
  • Data science students.

Classification of Images

DarkNeuron's classification module provides models pretrained on ImageNet. Users can train these pretrained models directly or retrain their own models. The models provided are:

  • InceptionV3
  • Xception
  • ResNet50
  • VGG16
  • VGG19

More models will be added in upcoming releases.

Initialization of Classification Model

Initialization of the classification model requires working_directory as its main argument; this directory should contain the models and the raw data. It can be initialized as below:

from DarkNeurons import Classify_Images
classify = Classify_Images(working_directory = "Working Directory")

Preparation of Data

How the data is prepared for classification depends on whether the user wants to train the model or predict from it, and on the method used to import images:
  • Directory: import a whole folder with images distributed into per-class subfolders.
  • DataFrame: use a DataFrame containing image filenames and their corresponding labels.
  • Point: provide input as arrays, e.g. X_train, Y_train.
  • Image: provide a single image as input (suggested for the prediction phase).

Let's see each of them with their necessary arguments.

Method: Directory

Code syntax (continuing from above):
train, val, labels = classify.Preprocess_the_Image(method = 'directory', train = True,
            num_classes = 2, batch_size = 32,  # Default
            target_image_size = (224,224,3),   # Default
            model_name = 'InceptionV3',
            user_model = None,                 # Default
            training_image_directory = 'Directory_Path',
            validation_image_directory = None,
            )

Let's see each argument and its default value:

  • train: False (for prediction) and True (for training)
  • num_classes: number of classes in the user data (Default: 2)
  • batch_size: batch size of the training data (Default: 32)
  • target_image_size: image input size used for creation and preprocessing of the input
  • model_name: name of the pretrained model; not required when user_model is provided (Default: None) (same for all methods)
  • user_model: the user's own pretrained model, passed as a model object, i.e. loaded with `classify.load_model(user_model_name)` (same for all methods; see the sketch after this list)
  • training_image_directory: full path of the training image directory (required for training) (Default: None)
  • validation_image_directory: full path of the validation image directory (Default: None)
  • test_image_directory: test image directory path, only used when train = False
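
As a minimal sketch of the user_model path (the filename my_model.h5 is hypothetical, and classify is the object initialized earlier):

```python
# Load a previously trained model from the working directory (hypothetical filename)
# and pass it in place of a pretrained model_name.
user_model = classify.load_model('my_model.h5')

train, val, labels = classify.Preprocess_the_Image(method = 'directory', train = True,
            num_classes = 2,
            model_name = None,
            user_model = user_model,
            training_image_directory = 'Directory_Path',
            )
```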

Method: DataFrame

Code Syntax:
train,val,labels = classify.Preprocess_the_Image(method = 'dataframe', train = True,
			num_classes = 2,batch_size = 32,
			dataframe = df ,
			x_col_name = 'filename',
			y_col_name = 'label',
			image_directory = None,
			split = 0.1 )

Let's understand the above arguments:

  • dataframe: a loaded DataFrame variable (see pandas for defining a DataFrame; a sketch follows this list)
  • x_col_name: name of the column containing the image filenames
  • y_col_name: name of the column containing the labels
  • image_directory: only required if x_col_name contains paths relative to a directory
  • split: fraction of the data automatically split off for validation during training
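
A minimal sketch of building such a DataFrame with pandas (the filenames and labels below are placeholders):

```python
import pandas as pd

# Two columns matching x_col_name / y_col_name above: image filenames and their labels.
df = pd.DataFrame({
    'filename': ['cats/cat_001.jpg', 'cats/cat_002.jpg', 'dogs/dog_001.jpg'],
    'label':    ['cat', 'cat', 'dog'],
})
```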

Method: Point

Code syntax:
train,val,labels = classify.Preprocess_the_Image(method = 'point',train = True,
			x_train = x_train,y_train = y_train,
			x_test = x_test,y_test = y_test)

Let's understand each argument (a sketch of obtaining such arrays follows this list):

  • x_train: input X variable for training
  • y_train: target Y variable for training
  • x_test: input X variable for testing and validation
  • y_test: target Y variable for testing and validation
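
One way to obtain such arrays is sketched below using the CIFAR-10 dataset bundled with TensorFlow/Keras; any NumPy image and label arrays would work, and classify is the object initialized earlier:

```python
from tensorflow.keras.datasets import cifar10

# CIFAR-10 ships as NumPy arrays: images of shape (N, 32, 32, 3) and integer labels of shape (N, 1).
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

train, val, labels = classify.Preprocess_the_Image(method = 'point', train = True,
            x_train = x_train, y_train = y_train,
            x_test = x_test, y_test = y_test)
```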

Method: Image

Code Syntax:
test = classify.Preprocess_the_Image(method='image',train = False,
			image_path = 'Path of the Image',
			grayscale=False
			)
  • image_path: path of the image to be predicted
  • grayscale: whether to load the image in grayscale

Model Creation

This function takes no arguments, but it is necessary when the user provides model_name.
It creates the full model structure based on the data provided in the Prepare the Data call.
model = classify.Create_the_Model()

That's it, the model will be created and generated. If you have pre-downloaded weights, make sure of the following:

  • Put the model file in working_directory
  • If training is False, name the model file as follows:
    • InceptionV3: 'inceptionv3_model.h5'
    • ResNet50: 'resnet50_model.h5'
    • VGG16: 'vgg16_model.h5'
    • VGG19: 'vgg19_model.h5'
    • Xception: 'xception_model.h5'
  • If training is True, name the model file as follows:
    • InceptionV3: 'inceptionv3_notop_model.h5'
    • ResNet50: 'resnet50_notop_model.h5'
    • VGG16: 'vgg16_notop_model.h5'
    • VGG19: 'vgg19_notop_model.h5'
    • Xception: 'xception_notop_model.h5'

Otherwise, the weights will be downloaded automatically (a quick check is sketched below).
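
A minimal sketch of checking that pre-downloaded weights sit in the working directory under the expected name (the paths below are placeholders):

```python
import os

working_directory = "Working Directory"        # placeholder path
weights_file = "inceptionv3_notop_model.h5"    # expected name when training with InceptionV3

# If the file is present under the expected name it is reused; otherwise it is downloaded.
if os.path.exists(os.path.join(working_directory, weights_file)):
    print("Pre-downloaded weights found.")
else:
    print("Weights not found; they will be downloaded automatically.")
```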

Model Training

This function is used to train the model. Code syntax:

model = classify.Train_the_Model(model = model,
            rebuild = False,
            train_data_object = train,
            validation_data_object = val,
            epochs = 10,
            optimizers = 'adam',
            loss = 'binary_crossentropy',
            fine_tuning = False,
            layers = 20,
            metrics = ['accuracy'],
            validation_steps = 80,
            steps_per_epoch = 50,
            callbacks = None
            )
  • model: the model created in the previous step.
  • rebuild: set to True only when model_name was provided.
  • train_data_object: generator obtained from the Prepare the Data call.
  • epochs: number of training epochs.
  • optimizers: a suitable optimizer for the model.
  • loss: loss function for the model.
  • fine_tuning: whether or not to fine-tune.
  • layers: only required when fine_tuning is set to True; the number of layers from the bottom to train, or 'all' to train all layers.
  • metrics: to be provided as a list.
  • callbacks: to be provided as a list by the user, e.g. for early stopping or checkpointing (a sketch follows this list).
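
A minimal sketch of such a callbacks list, assuming the library accepts standard tf.keras callbacks (an assumption, not confirmed by the documentation above):

```python
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Stop when validation loss stops improving and keep the best weights on disk.
callbacks = [
    EarlyStopping(monitor = 'val_loss', patience = 3, restore_best_weights = True),
    ModelCheckpoint('best_model.h5', monitor = 'val_loss', save_best_only = True),
]
```

These would then be passed as callbacks = callbacks in Train_the_Model.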

Prediction

This function is used to make predictions from the model on the test dataset.
To do this, first prepare the data with the train argument set to False and obtain the test object from it (see the sketch after the argument list below).

Code Syntax:
classify.Predict_from_the_Model(labels = labels,
				model = model,
				img = None,
				generator = None,
				top = 5
				)
  • labels: labels provided as a list, or taken from the labels generated by the Prepare the Data call during training (see above).
  • model: the model obtained from training or from loading the user's own model.
  • img: only used when method = 'image'.
  • generator: test data object generated by the Prepare the Data call.
  • top: top k predictions for the image.
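
As a minimal sketch of the directory-based prediction flow under the arguments above (the test directory path is a placeholder; classify, model and labels come from the earlier steps):

```python
# Prepare the test data with train = False, then predict on the resulting generator.
test = classify.Preprocess_the_Image(method = 'directory', train = False,
            model_name = 'InceptionV3',
            test_image_directory = 'Test_Directory_Path')

classify.Predict_from_the_Model(labels = labels,
            model = model,
            generator = test,
            top = 5)
```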

Visualization of Predictions and Metrics

Metrics Visualization

classify.Visualize_the_Metrics()

Prediction Visualization

classify.Visualize_the_Predictions(number = 20)
  • number: number of images/predictions to visualize

That brings the classification part to a close.

Let's move on to the object detection part.

Object Detection (YOLOv4)

Initialization of Object Detection Model

This function takes the working directory as an argument; the training data should be present there, along with the weights if available. If no weights are present, they will be downloaded.
If you have predefined YOLOv4 weights, name the file 'yolov4.weights'. If you have a predefined YOLOv4 model, name it 'yolov4.h5'.
from DarkNeurons import YOLOv4
yolo = YOLOv4(working_directory, output_directory)

Preparation of Data

For this function, all images and their corresponding labels should be placed directly in working_directory, with no subfolders (for simplicity, the train directory is the working directory). The function accepts files in three formats and converts them into YOLO format automatically:

  • csv
  • xml
  • text files

Code Syntax:

yolo.Prepare_the_Data(file_type,file_path,
		    dataframe_name = None,
		    class_file_name = None
		    )
  • file_type: the file type: 'csv', 'xml', or 'text_files'
  • file_path: the path to the data directory
  • dataframe_name: the name of the csv file in working_directory
  • class_file_name: the name of the class-list text file in the working directory (a usage sketch follows this list)
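
A minimal usage sketch for CSV annotations under the arguments above (the file names are placeholders and yolo is the object initialized earlier):

```python
# 'annotations.csv' and 'classes.txt' are assumed to sit in the working directory
# alongside the training images.
yolo.Prepare_the_Data(file_type = 'csv',
            file_path = 'Working Directory',
            dataframe_name = 'annotations.csv',
            class_file_name = 'classes.txt')
```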

Model Training

This function is used to train the model on the user's custom dataset.
There are two processes involved:
  • Process 1: simple training
  • Process 2: fine-tuning after Process 1 (highly recommended)

Code Syntax:

yolo.Train_the_Yolo(model_name = 'yolov4.h5',
            input_shape = (608,608),  # Multiple of 32 required
            score = 0.5,
            iou = 0.5,
            epochs1 = 50,   # For Process 1
            epochs2 = 51,   # For Process 2
            batch_size1 = 32,
            batch_size2 = 4,
            validation_split = 0.1,
            process1 = True,
            process2 = True
            )
  • model_name: if the user has a predefined model, its name can be provided.
  • input_shape: input shape for the model.
  • score: score threshold.
  • iou: intersection-over-union threshold used during training (tune it for better accuracy).
  • epochs1, epochs2: epochs for the two processes described above.
  • batch_size1, batch_size2: batch sizes for the two processes.
  • process1, process2: whether each process is run (Default: True).

Detection

This function detects objects in videos and images. It has the following features:
  • Webcam detection: it can detect using webcams, and mobile-phone cameras can also be used (see IPWebCam).
  • Class selection: you can choose which classes to predict and which to ignore. For example, the COCO dataset has 80 labels; if you pass person to the function, it will detect only people and leave everything else as it is.

Code syntax:

yolo.Detect(test_folder_name = 'test',
	    model_name = None,
	    cam = False,
	    videopath = 0,
	    classes = [],
	    score = 0.5,
	    tracking = False
	    )
  • test_folder_name: name of the test folder in the working directory (it handles both images and videos, detecting the type automatically and acting accordingly).
  • model_name: the model name saved in working_directory by training; otherwise yolov4.h5 is used by default.
  • cam: enable webcam detection.
  • videopath: path to the video to run detection on.
  • classes: selective choice of classes to detect (provide as a list).
  • score: score threshold for predictions.
  • tracking: whether DeepSort tracking is enabled (a usage sketch follows this list).
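
A minimal usage sketch for webcam-only person detection with tracking, under the arguments above (assuming a COCO-trained yolov4.h5 is available in the working directory):

```python
# Detect only the 'person' class from the webcam and track detections with DeepSort.
yolo.Detect(cam = True,
            classes = ['person'],
            score = 0.5,
            tracking = True)
```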

Further Releases

  • Improvements in tracking
  • User-friendly implementation of artificial neural networks
  • Visualization of neural networks

License

MIT License

Copyright (c) 2020 DarkNeuron Tushar-ml

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
