
imagepreprocessing

A small library that speeds up the dataset preparation and model testing steps for deep learning across various frameworks. (mostly for me)


What can it do

  • Creates all the required files for darknet-yolo (v3, v4) training, including a cfg file with default parameters and class calculations, in a single line. (example usage)
  • Creates train-ready data for Keras image classification tasks in a single line. (example usage)
  • Simplifies multi-image prediction with a Keras model, from both arrays and directories.
  • Predicts and saves multiple images from a directory using darknet.
  • Includes a simple annotation tool for darknet-yolo style annotation. (example usage)
  • Auto-annotates images at given points for yolo. (example usage)
  • Draws the bounding boxes from annotation files onto images for preview.
  • Plots a training history graph from a Keras history object. (example usage)
  • Plots confusion matrices. (example usage)

This dataset structure is required for most operations:

my_dataset
   |----class1
   |     |---image1.jpg
   |     |---image2.jpg
   |     |---image3.jpg
   |     ...
   |----class2
   |----class3
         ...
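A layout like this can be scaffolded with a few lines of standard-library Python (the class names below are placeholders, not part of the library):

```python
from pathlib import Path

# Hypothetical class names; substitute your own.
classes = ["class1", "class2", "class3"]

root = Path("my_dataset")
for name in classes:
    (root / name).mkdir(parents=True, exist_ok=True)

# Each class folder then holds that class's images, e.g.
# my_dataset/class1/image1.jpg, my_dataset/class1/image2.jpg, ...
```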

Install

pip install imagepreprocessing

Create required files for training on darknet-yolo

from imagepreprocessing.darknet_functions import create_training_data_yolo
main_dir = "datasets/my_dataset"
create_training_data_yolo(main_dir)
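For context, darknet training needs at minimum an obj.names file listing the classes and a train.txt listing the training image paths. The sketch below only illustrates what such files contain (with made-up paths and class names); create_training_data_yolo generates the real ones from your dataset directory:

```python
# Hypothetical class names and image paths for illustration only.
class_names = ["apple", "melon", "orange"]
image_paths = ["data/obj/apple_001.jpg", "data/obj/melon_001.jpg"]

# obj.names: one class name per line, order defines the class indices
with open("obj.names", "w") as f:
    f.write("\n".join(class_names) + "\n")

# train.txt: one training image path per line
with open("train.txt", "w") as f:
    f.write("\n".join(image_paths) + "\n")
```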

Create training data for keras

from imagepreprocessing.keras_functions import create_training_data_keras
source_path = "datasets/my_dataset"
save_path = "5000images_on_one_file"
train_x, train_y, valid_x, valid_y = create_training_data_keras(source_path, save_path=save_path, image_size=(299, 299), validation_split=0.1, percent_to_use=0.5, grayscale=True)

Make multiple image predictions from directory with keras model

from imagepreprocessing.keras_functions import make_prediction_from_directory_keras
predictions = make_prediction_from_directory_keras("datasets/my_dataset/class1", "models/alexnet.h5")
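The exact return format isn't shown here; if the predictions come back as one probability vector per image (a common convention, but an assumption on my part), the winning class for each image can be recovered with numpy:

```python
import numpy as np

# Assumed shape: one softmax vector per image (3 mock images, 3 classes).
predictions = np.array([
    [0.1, 0.7, 0.2],
    [0.8, 0.1, 0.1],
    [0.2, 0.3, 0.5],
])
class_names = ["apple", "melon", "orange"]

predicted_labels = [class_names[i] for i in np.argmax(predictions, axis=1)]
print(predicted_labels)  # ['melon', 'apple', 'orange']
```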

Create training history graph for keras

from imagepreprocessing.keras_functions import create_history_graph_keras

# training
# history = model.fit(...)

create_history_graph_keras(history)

[image: training history graph example]

Make prediction from test array and create the confusion matrix with keras model

from imagepreprocessing.keras_functions import create_training_data_keras, make_prediction_from_array_keras
from imagepreprocessing.utilities import create_confusion_matrix, train_test_split

images_path = "datasets/my_dataset"

# Create training data and split off a validation set
x, y, x_val, y_val = create_training_data_keras(images_path, save_path=None, validation_split=0.2, percent_to_use=0.5)

# split the remaining data into train and test sets
x, y, test_x, test_y = train_test_split(x, y)

# ...
# training
# ...

class_names = ["apple", "melon", "orange"]

# make prediction
predictions = make_prediction_from_array_keras(test_x, model, print_output=False)

# create the confusion matrix (cmap_color selects the colormap)
create_confusion_matrix(predictions, test_y, class_names=class_names, one_hot=True)
create_confusion_matrix(predictions, test_y, class_names=class_names, one_hot=True, cmap_color="Blues")
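If you want the raw counts rather than the plot, an equivalent matrix can be built directly with numpy. This assumes (as the one_hot=True flag suggests) that both labels and predictions are one-hot / probability vectors; the values below are mock data:

```python
import numpy as np

# Mock one-hot labels and prediction vectors for 3 classes.
test_y = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])
predictions = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1],
                        [0.1, 0.1, 0.8], [0.3, 0.6, 0.1]])

true_idx = test_y.argmax(axis=1)       # true class per sample
pred_idx = predictions.argmax(axis=1)  # predicted class per sample

# rows = true class, columns = predicted class
n_classes = test_y.shape[1]
matrix = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(true_idx, pred_idx):
    matrix[t, p] += 1
print(matrix)
```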

[image: confusion matrix examples]

Make multi input model prediction and create the confusion matrix

from imagepreprocessing.keras_functions import create_training_data_keras, make_prediction_from_array_keras
from imagepreprocessing.utilities import create_confusion_matrix, train_test_split
import numpy as np

# Create training data and split it into train and test sets
source_path = "datasets/my_dataset"
x, y = create_training_data_keras(source_path, image_size=(28, 28), validation_split=0, percent_to_use=1, grayscale=True, convert_array_and_reshape=False)
x, y, test_x, test_y = train_test_split(x, y)

# prepare the data for multi input training and testing
x1 = np.array(x).reshape(-1,28,28,1)
x2 = np.array(x).reshape(-1,28,28)
y = np.array(y)
x = [x1, x2]

test_x1 = np.array(test_x).reshape(-1,28,28,1)
test_x2 = np.array(test_x).reshape(-1,28,28)
test_y = np.array(test_y)
test_x = [test_x1, test_x2]

# ...
# training
# ...

# make prediction
predictions = make_prediction_from_array_keras(test_x, model, print_output=False, model_summary=False, show_images=False)

# create confusion matrix
create_confusion_matrix(predictions, test_y, class_names=["0","1","2","3","4","5","6","7","8","9"], one_hot=True)

Create required files for training on darknet-yolo and auto annotate images by center

Auto annotation is intended for testing the dataset or using it for classification; detection won't work without proper annotations.

from imagepreprocessing.darknet_functions import create_training_data_yolo, auto_annotation_by_random_points
import os

main_dir = "datasets/my_dataset"

# auto annotating all images by their center points (x,y,w,h)
folders = sorted(os.listdir(main_dir))
for index, folder in enumerate(folders):
    auto_annotation_by_random_points(os.path.join(main_dir, folder), index, annotation_points=((0.5,0.5), (0.5,0.5), (1.0,1.0), (1.0,1.0)))

# creating required files
create_training_data_yolo(main_dir)

Annotation tool for darknet-yolo

from imagepreprocessing.darknet_functions import yolo_annotation_tool
yolo_annotation_tool("test_stuff/images", "test_stuff/obj.names")
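Darknet-style annotation files are plain text with one object per line: a class index followed by the box center and size, all normalized to [0, 1]. A small parser sketch for that standard format (the helper name below is mine, not part of the library):

```python
def parse_yolo_annotation(line):
    """Parse one 'class x_center y_center width height' line (normalized coords)."""
    parts = line.split()
    return int(parts[0]), tuple(float(v) for v in parts[1:5])

class_id, (x, y, w, h) = parse_yolo_annotation("0 0.5 0.5 1.0 1.0")
print(class_id, x, y, w, h)  # 0 0.5 0.5 1.0 1.0
```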
