
AutoCNN: an implementation of "Automatically Designing CNN Architectures Using Genetic Algorithm for Image Classification"

Project description

AutoCNN

This project is an implementation of the paper "Automatically Designing CNN Architectures Using Genetic Algorithm for Image Classification"

How it works

This is an algorithm that fully automatically finds an optimal CNN (Convolutional Neural Network) architecture.

There are two main building blocks to this algorithm:

Skip Layer

Skip Layer Image

Each Skip Layer consists of two paths.

The input first passes through:

  1. A convolution layer
  2. A batch normalization
  3. A ReLU activation layer
  4. Another convolution layer
  5. Another batch normalization layer

(All convolution layers have a 3x3 kernel and a 1x1 stride; the filter count is randomly chosen as a power of 2.)

The input also passes through a convolution with a 1x1 kernel and stride, its filter count matching that of the last convolution. This "reshapes" the input to allow element-wise adding.

The two outputs are combined with an add operation and then passed through a ReLU activation function.
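
Putting the pieces together, a minimal Keras sketch of such a skip layer (the function name `skip_layer` is illustrative, not part of the package's API):

```python
import tensorflow as tf

def skip_layer(inputs, filters):
    # Main path: two 3x3 conv + batch-norm pairs with a ReLU in between
    x = tf.keras.layers.Conv2D(filters, kernel_size=3, strides=1, padding='same')(inputs)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Activation('relu')(x)
    x = tf.keras.layers.Conv2D(filters, kernel_size=3, strides=1, padding='same')(x)
    x = tf.keras.layers.BatchNormalization()(x)

    # Shortcut path: a 1x1 convolution "reshapes" the input to the same
    # filter count so the two tensors can be added element-wise
    shortcut = tf.keras.layers.Conv2D(filters, kernel_size=1, strides=1)(inputs)

    x = tf.keras.layers.add([x, shortcut])
    return tf.keras.layers.Activation('relu')(x)
```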

Pooling Layer

This is either a Max Pooling or an Average Pooling layer; both the kernel and the stride size are 2x2.
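
Since the pooling type is part of the random search, it can be sketched as a coin flip between the two Keras pooling layers (the helper name `pooling_layer` is illustrative):

```python
import random
import tensorflow as tf

def pooling_layer(inputs):
    # Randomly choose max or average pooling; both halve the spatial size
    # with a 2x2 kernel and a 2x2 stride
    if random.random() < 0.5:
        return tf.keras.layers.MaxPool2D(pool_size=2, strides=2)(inputs)
    return tf.keras.layers.AveragePooling2D(pool_size=2, strides=2)(inputs)
```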

Layers

| Layer Type | Layer Documentation |
| --- | --- |
| Convolution | tf.keras.layers.Conv2D |
| MaxPooling | tf.keras.layers.MaxPool2D |
| AveragePooling | tf.keras.layers.AveragePooling2D |
| Activation | tf.keras.layers.Activation |
| Add | tf.keras.layers.add |
| BatchNormalization | tf.keras.layers.BatchNormalization |

Steps

To do this the algorithm follows these steps:

  1. Create a random initial population
  2. Evaluate the fitness of the population by training the CNN
  3. Generate offspring
    • 2 different CNNs in the population are selected using Binary Tournament Selection
    • With a certain probability, a crossover between the two parents happens
      • the two CNNs are each split in two, and two new CNNs are created by mixing the parent "genes"
    • After all the new offspring are created, go through each of them and, with a certain probability, mutate the offspring
      • a mutation is one of:
        • add a skip layer: increases the complexity and depth of the network
        • add a pooling layer: increases the depth but might decrease the complexity due to the nature of pooling
        • remove a layer: reduces complexity and depth
        • randomize a layer: changes the parameters of a layer (e.g. filter count, max or average pooling)
  4. Evaluate the offspring's fitness
  5. Generate a new population from the offspring and the parent population
    • Until N CNNs have been selected:
      • randomly select 2 CNNs and add the one with the higher fitness to the new population
    • Check whether the best CNN from the combined offspring and parent population made it into the new population
      • if it did not, replace the worst CNN in the new population with it
  6. Repeat from step 2 until the maximal generation number is reached
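
The selection in steps 3 and 5 can be sketched as follows (a simplified illustration, not the package's actual implementation; individuals are represented by keys into a fitness dictionary):

```python
import random

def binary_tournament(pool, fitness):
    # Binary Tournament Selection: pick two different individuals at
    # random and keep the fitter one
    a, b = random.sample(pool, 2)
    return a if fitness[a] >= fitness[b] else b

def environmental_selection(parents, offspring, fitness, n):
    # Step 5: build the next population via repeated binary tournaments
    pool = parents + offspring
    selected = [binary_tournament(pool, fitness) for _ in range(n)]

    # Elitism: if the overall best CNN was not selected, swap it in
    # for the worst member of the new population
    best = max(pool, key=lambda c: fitness[c])
    if best not in selected:
        worst = min(selected, key=lambda c: fitness[c])
        selected[selected.index(worst)] = best
    return selected
```

The elitist replacement at the end guarantees the best architecture found so far is never lost between generations.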

Example

import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # Removes TensorFlow debugging output

import tensorflow as tf

tf.get_logger().setLevel('INFO') # Removes TensorFlow debugging output

from auto_cnn.gan import AutoCNN

import random

# Sets the random seeds to make testing more consistent
random.seed(42)
tf.random.set_seed(42)


def mnist_test():
    # Loads the data as test and train 
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

    # Puts the data in a dictionary for the algorithm to use  
    data = {'x_train': x_train, 'y_train': y_train, 'x_test': x_test, 'y_test': y_test}

    # Sets the wanted parameters  
    a = AutoCNN(population_size=5, maximal_generation_number=4, dataset=data, epoch_number=5)

    # Runs the algorithm until the maximal_generation_number has been reached
    best_cnn = a.run()
    print(best_cnn)

if __name__ == '__main__':
    mnist_test()

Given these parameters, the structure that was chosen was:

MNIST Result

CNN: 128-64

Score (Test Accuracy): 0.9799000024795532
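
Assuming "128-64" encodes two consecutive skip layers whose convolutions use 128 and 64 filters respectively (an interpretation of the printout, not documented package behavior), the resulting network can be sketched like this (`build_from_encoding` is a hypothetical helper):

```python
import tensorflow as tf

def build_from_encoding(encoding, input_shape, classes):
    # Assumption: each dash-separated number is the filter count of one
    # skip layer, e.g. '128-64' -> two skip layers
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in map(int, encoding.split('-')):
        # 1x1 convolution on the shortcut matches the filter count
        shortcut = tf.keras.layers.Conv2D(filters, 1)(x)
        y = tf.keras.layers.Conv2D(filters, 3, padding='same')(x)
        y = tf.keras.layers.BatchNormalization()(y)
        y = tf.keras.layers.Activation('relu')(y)
        y = tf.keras.layers.Conv2D(filters, 3, padding='same')(y)
        y = tf.keras.layers.BatchNormalization()(y)
        x = tf.keras.layers.Activation('relu')(tf.keras.layers.add([y, shortcut]))
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(classes, activation='softmax')(x)
    return tf.keras.Model(inputs, outputs)
```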

Contributing

If you have any ideas for improving the performance, adding more customization options, or correcting mistakes, please make a pull request or create an issue. I'd be happy to accept any contributions!

Project details


Release history

This version

1.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

auto_cnn-1.0.tar.gz (12.2 kB)

Uploaded Source

Built Distribution

auto_cnn-1.0-py3-none-any.whl (12.2 kB)

Uploaded Python 3

File details

Details for the file auto_cnn-1.0.tar.gz.

File metadata

  • Download URL: auto_cnn-1.0.tar.gz
  • Upload date:
  • Size: 12.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.25.0 setuptools/46.4.0 requests-toolbelt/0.9.1 tqdm/4.53.0 CPython/3.8.5

File hashes

Hashes for auto_cnn-1.0.tar.gz
Algorithm Hash digest
SHA256 a3c17dc16979624abf1b0bc3d939782d503d791a529c4011ef54e969d64d2331
MD5 5c952aa75eea01a0ecfd8bb07ba1fa31
BLAKE2b-256 209e1cda91a0b36298f5f1562e1b6cf657702e1a66709742fd1e944727e4efe1

See more details on using hashes here.

File details

Details for the file auto_cnn-1.0-py3-none-any.whl.

File metadata

  • Download URL: auto_cnn-1.0-py3-none-any.whl
  • Upload date:
  • Size: 12.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.25.0 setuptools/46.4.0 requests-toolbelt/0.9.1 tqdm/4.53.0 CPython/3.8.5

File hashes

Hashes for auto_cnn-1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 173c018d97320bcc988fe7981b194b7649df30f79782dac125f55eb70aaea5ce
MD5 9b4ec15631444b6becd1cfdc3b438602
BLAKE2b-256 0f803cfe04e12ed154edc49906fb48d7fc7768b6ca514cb7d7e1823af674bbcb

See more details on using hashes here.
