
Automated creation of the neural network architecture based on the input

Project description

Auto-Deep-Learning (Auto Deep Learning)


auto_deep_learning: with this package, you can create, train, and deploy neural networks automatically, based on the input that you provide.

Alert

This package is still under development, so star the project to follow updates over the coming days! For the moment, it targets computer vision classification tasks on images (multi-modal included).

Installation

Use the package manager pip to install auto_deep_learning.

To install the package:

    pip install auto_deep_learning

If you have an older version of the package installed, update it:

    pip install --upgrade auto_deep_learning

Project Structure

The project is structured as follows:

    ├── auto_deep_learning                  # Python package
    │   ├── cloud                           # Cloud module for saving & serving DL models
    │   │   ├── aws                         # Amazon Web Services
    │   │   └── gcp                         # Google Cloud
    │   ├── enum                            # Enumerations for the model
    │   ├── exceptions                      # Exceptions
    │   │   ├── model                       # Exceptions related to the definition/creation of the model
    │   │   └── utils                       # Exceptions related to the utilities folder
    │   │       └── data_handler            # Exceptions related to handling the data
    │   ├── model                           # Module for creating & training the models
    │   │   └── arch                        # Supported model architectures
    │   │       └── convolution
    │   ├── schemas                         # Schemas of expected outputs
    │   └── utils                           # Utilities for the project
    │       ├── data_handler                # Utilities related to handling the data
    │       │   ├── creator                 # Utilities related to creating the loaders
    │       │   └── transform               # Utilities related to transforming the data
    │       └── model                       # Utilities related to creating the model
    ├── examples                            # Examples of how the package can be used
    └── tests                               # Tests

Basic Usage

This is how easy it can be to create and train a deep learning model:

    from auto_deep_learning import Model
    from auto_deep_learning.utils import DataCreator, DataSampler, image_folder_convertion

    df = image_folder_convertion()
    data = DataCreator(df)
    data_sampled = DataSampler(data)
    model = Model(data_sampled)
    model.fit()
    model.predict('image.jpg')

We also provide a configuration object, which centralizes the most important settings you might want to change:

    ConfigurationObject(
        n_epochs: int = 10,
        batch_size_train: int = 64,
        batch_size_valid: int = 128,
        batch_size_test: int = 128,
        valid_size: float = 0.1,
        test_size: float = 0.05,
        image_size: int = 224,
        num_workers: int = 6,
        objective: ModelObjective = ModelObjective.THROUGHPUT,
        img_transformers: Dict[str, ImageTransformer] = {
            'train': ImageTransformer(
                rotation=3.0,
                color_jitter_brightness=3.0,
                color_jitter_contrast=3.0,
                color_jitter_hue=3.0,
                color_jitter_saturation=3.0,
                color_jitter_enabled=True,
                resized_crop_enabled=True
            ),
            'valid': ImageTransformer(),
            'test': ImageTransformer()
        }
    )

So by default, image augmentation is applied to the training data. Note that if, for example, we did not want a validation split because our dataset is too small, we would change this value as follows:

    conf_obj = ConfigurationObject()
    conf_obj.valid_size = 0.0
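The same pattern extends to the other fields. For instance, a small dataset might also skip training-time augmentation; this is a sketch, where the import path and the mutability of `img_transformers` are assumptions based on the signature shown above:

```python
from auto_deep_learning.utils import ConfigurationObject, ImageTransformer  # import path is an assumption

conf_obj = ConfigurationObject()
conf_obj.valid_size = 0.0  # skip the validation split entirely
conf_obj.n_epochs = 5      # fewer epochs for a small dataset
conf_obj.img_transformers['train'] = ImageTransformer()  # plain transformer: no rotation, jitter, or crop
```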

Dataset

The package expects the data as a pd.DataFrame with the following columns:

    - image_path: the path to the image
    - class1: the value of classification target nr. 1. For example: {t-shirt, glasses, ...}
    - class2: the value of classification target nr. 2. For example: {summer, winter, ...}
    - ...
    - split_type: whether the row belongs to train/valid/test

For better performance, it is suggested that the class columns and split_type are of dtype category in the pandas DataFrame. If split_type is not provided in the dataframe, you can use the utility function data_split_types (in the utils.dataset.sampler file).
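Assuming pandas is installed, the expected dataframe can be built by hand; the class column names and values below are illustrative:

```python
import pandas as pd

# Build the frame the package expects: one row per image
df = pd.DataFrame({
    'image_path': ['imgs/0001.jpg', 'imgs/0002.jpg', 'imgs/0003.jpg'],
    'class1': ['t-shirt', 'glasses', 't-shirt'],
    'class2': ['summer', 'winter', 'summer'],
    'split_type': ['train', 'train', 'test'],
})

# Categories are cheaper than plain object strings for grouping and sampling
for col in ('class1', 'class2', 'split_type'):
    df[col] = df[col].astype('category')

print(df['class1'].dtype)  # category
```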

If instead your images are organized in the ImageFolder layout, which has the following structure:

    train/
        class1_value/
            1.jpg
            2.jpg
            ...
        class2_value/
            3.jpg
            4.jpg
            ...
    test/
        class1_value/
            1.jpg
            2.jpg
            ...
        class2_value/
            3.jpg
            4.jpg
            ...

To simplify this, we provide the function image_folder_convertion (in utils.functions), which expects the path to the parent folder containing the train/ and test/ folders and returns the expected dataframe.
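The conversion amounts to a walk over that layout. This self-contained sketch approximates what image_folder_convertion does; folder_to_dataframe is a hypothetical stand-in, not the package's function:

```python
import os
import tempfile

import pandas as pd


def folder_to_dataframe(parent: str) -> pd.DataFrame:
    """Walk train/ and test/ under `parent`, collecting one row per image."""
    rows = []
    for split in ('train', 'test'):
        split_dir = os.path.join(parent, split)
        if not os.path.isdir(split_dir):
            continue
        for class_value in sorted(os.listdir(split_dir)):   # each subfolder is a class value
            class_dir = os.path.join(split_dir, class_value)
            for fname in sorted(os.listdir(class_dir)):
                rows.append({
                    'image_path': os.path.join(class_dir, fname),
                    'class1': class_value,
                    'split_type': split,
                })
    return pd.DataFrame(rows)


# Build a tiny ImageFolder layout on disk and convert it
root = tempfile.mkdtemp()
for split, cls, name in [('train', 'cat', '1.jpg'), ('train', 'dog', '2.jpg'), ('test', 'cat', '3.jpg')]:
    d = os.path.join(root, split, cls)
    os.makedirs(d, exist_ok=True)
    open(os.path.join(d, name), 'w').close()

df = folder_to_dataframe(root)
print(sorted(df['split_type'].unique()))  # ['test', 'train']
```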



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

auto_deep_learning-0.1.5.4.tar.gz (24.9 kB)

Uploaded Source

Built Distribution

auto_deep_learning-0.1.5.4-py3-none-any.whl (32.8 kB)

Uploaded Python 3

File details

Details for the file auto_deep_learning-0.1.5.4.tar.gz.

File metadata

  • Download URL: auto_deep_learning-0.1.5.4.tar.gz
  • Upload date:
  • Size: 24.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.6

File hashes

Hashes for auto_deep_learning-0.1.5.4.tar.gz:

  • SHA256: 5ee06d26fcd6cff6289dd661072cfc47ab0e0d8c009a8e33acd80d7e539e4c1f
  • MD5: b8c191e141c24c091c4fe9ceb64ebbdd
  • BLAKE2b-256: 0a158f08d084f1b7d0daaa14a0ddb9ee8611eba06e260e0a73af47c8c680e242


File details

Details for the file auto_deep_learning-0.1.5.4-py3-none-any.whl.

File metadata

File hashes

Hashes for auto_deep_learning-0.1.5.4-py3-none-any.whl:

  • SHA256: 56aaa0d123fb654ba78841b185cd064a9765109e2a41d63749d37b070956995a
  • MD5: 1b26bc0b05cf1ca3f6b38aa9396ae4c6
  • BLAKE2b-256: 0c8bf2302221ab584486958c27bd194d9dbb2cb9afcb2e9ca5733617a0d3457e

