
Deep Neural Network Library

Project description

This library eliminates repetitive machine learning boilerplate and helps keep your code clean and Pythonic.

Building Deep Neural Network

Please see the included examples. They cover the following networks, all using the MNIST dataset:

  • Logistic Regression

  • Association Learning

  • GAN: Generative Adversarial Network

  • VAE: Variational Autoencoder

  • AAE: Adversarial Autoencoder

Data Normalization

To apply data normalization and standardization:

train_xs = net.normalize (train_xs, normalize = True, standardize = True)

To print the cumulative sum of scikit-learn PCA's explained_variance_ratio_, pass pca_k = -1:

train_xs = net.normalize (train_xs, normalize = True, standardize = True, pca_k = -1)

From that output you can choose n_components for PCA:

train_xs = net.normalize (train_xs, normalize = True, standardize = True, axis = 0, pca_k = 500)
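For reference, the cumulative ratio printed by pca_k = -1 corresponds to the following scikit-learn computation (a standalone sketch, not the library's internal code):

import numpy as np
from sklearn.decomposition import PCA

pca = PCA ().fit (train_xs)
cumulative = np.cumsum (pca.explained_variance_ratio_)
# smallest number of components covering, say, 99% of the variance
n_components = int (np.searchsorted (cumulative, 0.99)) + 1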

The test dataset will be normalized using the factors computed from the train dataset:

test_xs = net.normalize (test_xs)

These factors are pickled to a file named normfactors in your train directory. You can use this pickled file when serving your model.

Export Model

To SavedModel

To export a model for serving:

import mydnn

net = mydnn.MyDNN ()
net.restore ('./checkpoint')
version = net.to_save_model (
  './export',
  'predict_something',
  inputs = {'x': net.x},
  outputs = {'label': net.label, 'logit': net.logit}
)
print ("version {} has been exported".format (version))

To test the exported model:

from dnn import save_model

interpreter = save_model.load (model_dir, sess, graph)
y = interpreter.run (x)

You can serve the exported model with TensorFlow Serving or tfserver.

Note: If you use net.normalize (train_xs), the normalizing factors (mean, std, max, etc.) will be pickled and saved to the model directory alongside the TensorFlow model. You can use this file to normalize new x data in a real service:

import os
import pickle

from dnn import _normalize

def normalize (x):
  norm_file = os.path.join (model_dir, "normfactors")
  with open (norm_file, "rb") as f:
    norm_factor = pickle.load (f)
  return _normalize (x, *norm_factor)
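At serving time you would then combine the pieces above, normalizing incoming data before running the interpreter (new_x here is just a stand-in for raw incoming data):

x = normalize (new_x)
y = interpreter.run (x)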

To TensorFlow Lite FlatBuffer Model

  • Requires TensorFlow 1.9 or later

To export to TensorFlow Lite, you must first convert your model to a SavedModel (see above):

net.to_tflite (
    "model.tflite",
    save_model_dir
)

To convert to a quantized model, additional parameters are needed:

net.to_tflite (
    "model.tflite",
    save_model_dir,
    True, # quantize
    (128, 128), # mean/std stats of input value
    (-1, 6) # min/max range output value of logit
)

To test the TFLite model:

from dnn import tflite

interpreter = tflite.load ("model.tflite")
y = interpreter.run (x)

If your model is quantized, the interpreter needs the mean/std stats of the input values:

from dnn import tflite

interpreter = tflite.load ("model.tflite", (128, 128))
y = interpreter.run (x)

If your input values range from -1.0 to 1.0, they will be translated into 0-255 for the quantized model using the mean and std parameters. So (128, 128) means your input value range is -1.0 to 1.0, and the interpreter will quantize x to uint8 using these parameters:

uint8_x = (float32_x * std) + mean

And TFLite reverses this uint8 back to a float value by:

float32_x = (uint8_x - mean) / std
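A quick sanity check of this mapping in plain NumPy (illustrative only, not part of the library):

import numpy as np

mean, std = 128, 128

def quantize (float32_x):
  # map [-1.0, 1.0] onto [0, 255] and round to uint8
  return np.clip (np.round (float32_x * std + mean), 0, 255).astype (np.uint8)

def dequantize (uint8_x):
  # inverse mapping back to float32
  return (uint8_x.astype (np.float32) - mean) / std

x = np.array ([-1.0, 0.0, 0.5], dtype = np.float32)
q = quantize (x)        # [0, 128, 192]
print (dequantize (q))  # [-1.0, 0.0, 0.5]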

Helpers

There are several helper modules.

Generic DNN Model Helper

from dnn import costs, predutil

Data Processing Helper

from dnn import split, vector
import dnn.video
import dnn.audio
import dnn.image
import dnn.text

dnn Class Methods & Properties

You can override or add anything. If your addition looks good, please contribute it to this project.

Predefined Operations & How to Create Them

You can (and in some cases should) create these operations by overriding the corresponding methods; a sketch follows this list.

  • train_op: create with ‘DNN.make_optimizer’

  • logit: create with ‘DNN.make_logit’

  • cost: create with ‘DNN.make_cost’

  • accuracy: create with ‘DNN.calculate_accuracy’
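A minimal sketch of such a subclass (a sketch only: the base-class import path and the self.x / self.y attribute names are assumptions drawn from the examples above; check the dnn sources for the exact signatures):

import tensorflow as tf
import dnn

class MyDNN (dnn.DNN):
  def make_logit (self):
    # network graph: input placeholder -> output logits
    hidden = tf.layers.dense (self.x, 128, activation = tf.nn.relu)
    return tf.layers.dense (hidden, 10)

  def make_cost (self):
    # loss between targets and logits
    return tf.reduce_mean (tf.nn.softmax_cross_entropy_with_logits_v2 (
      labels = self.y, logits = self.logit))

  def make_optimizer (self):
    return self.optimizer ("adam")

  def calculate_accuracy (self):
    correct = tf.equal (tf.argmax (self.logit, 1), tf.argmax (self.y, 1))
    return tf.reduce_mean (tf.cast (correct, tf.float32))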

Predefined Placeholders

  • dropout_rate: if given a negative value, the dropout rate will be selected randomly.

  • is_training

  • n_sample: number of samples in the x (or y) set. This value is fed automatically; do not feed it yourself.

Optimizers

You can use the predefined optimizers:

def make_optimizer (self):
  return self.optimizer ("adam")
  # or, with extra keyword arguments:
  # return self.optimizer ("rmsprob", momentum = 0.01)

Available optimizer names are:

  • “adam”

  • “rmsprob”

  • “momentum”

  • “clip”

  • “grad”

  • “adagrad”

  • “adagradDA”

  • “adadelta”

  • “ftrl”

  • “proxadagrad”

  • “proxgrad”

See dnn/optimizers.py for details.

Model

  • save

  • restore

  • to_save_model

  • to_tflite

  • reset_dir

  • set_train_dir

  • eval
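A typical checkpoint flow might look like the sketch below (argument and call forms are assumptions based on the examples above; see the dnn sources for exact signatures):

net = mydnn.MyDNN ()
net.set_train_dir ('./checkpoint')  # where checkpoints will be written
# ... build and train ...
net.save ()

# later: restore the checkpoint and export for serving
net = mydnn.MyDNN ()
net.restore ('./checkpoint')
net.to_save_model ('./export', 'predict_something', inputs = {'x': net.x}, outputs = {'label': net.label})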

TensorBoard

  • set_tensorboard_dir

  • make_writers

  • write_summary

History

  • 0.3:

    • remove trainable ()

    • add set_learning_rate ()

    • add argument to set_train_dir () for saving checkpoints

    • make compatible with tf 1.12.0

  • 0.2:

    • add tensorflow lite conversion and interpreting

  • 0.1: project initialized


