
Deep Neural Networks Library

Project description

This library eliminates repetitive machine learning boilerplate, and helps keep your code clean and Pythonic.

Building a Deep Neural Network

mydnn.py,

from dnn import dnn
import tensorflow as tf

class MyDNN (dnn.DNN):
  n_seq_len = 24
  n_channels = 1024
  n_output = 3

  def make_place_holders (self):
      self.x = tf.placeholder ("float", [None, self.n_seq_len, self.n_channels])
      self.y = tf.placeholder ("float", [None, self.n_output])

  def make_logit (self):
      # build the network with 1d-convolution, RNN and dense layers

      # 1d convolution (-1, 24, 1024) => (-1, 12, 2048)
      conv = self.make_conv1d_layer (self.x, 2048, activation = tf.nn.relu)

      # rnn
      output = self.make_lstm (
        conv, 12, 2048, hidden_size = 4096, lstm_layers = 2, activation = tf.tanh
      )

      # hidden dense layers
      layer = self.make_hidden_layer (output [-1], 1024, tf.nn.relu)
      layer = self.make_hidden_layer (layer, 256, tf.nn.relu)

      # finally, my logit
      return tf.layers.dense (inputs = layer, units = self.n_output)

  def make_cost (self):
      return tf.reduce_mean (tf.nn.softmax_cross_entropy_with_logits (
          logits = self.logit, labels = self.y
      ))

  def make_optimizer (self):
     return self.optimizer ("adam")

  def calculate_accuracy (self):
      correct_prediction = tf.equal (tf.argmax(self.y, 1), tf.argmax(self.logit, 1))
      return tf.reduce_mean (tf.cast (correct_prediction, "float"))

Sometimes it is tedious to calculate a complex accuracy with tensors. In that case you can define calculate_complex_accuracy instead, and compute it with NumPy, plain Python math and loop statements.

from dnn import dnn
import numpy as np

class MyDNN (dnn.DNN):
  def calculate_complex_accuracy (self, logit, y):
      return np.mean ((np.argmax (logit, 1) == np.argmax (y, 1)))

Training

Import mydnn.py,

import mydnn
from dnn import split
from tqdm import tqdm

net = mydnn.MyDNN (gpu_usage = 0.4)
net.reset_dir ('./checkpoint')
net.trainable (
  start_learning_rate=0.0001,
  decay_step=500, decay_rate=0.99,
  overfit_threshold = 0.1
)
net.reset_tensor_board ("./logs")
net.make_writers ('Param', 'Train', 'Valid')

# train_xs, train_ys, test_xs and test_ys are your prepared dataset arrays
train_minibatches = split.minibatch (train_xs, train_ys, 128)
valid_minibatches = split.minibatch (test_xs, test_ys, 128)

for epoch in tqdm (range (1000)): # 1000 epochs
  batch_xs, batch_ys = next (train_minibatches)
  _, lr = net.run (
    net.optimizer, net.learning_rate,
    n_sample = len (batch_ys), x = batch_xs, y = batch_ys,
    dropout_rate = 0.5
  )
  net.write_summary ('Param', {"Learning Rate": lr})

  train_cost, train_logit = net.run (
    net.cost, net.logit,
    n_sample = len (batch_ys), x = batch_xs, y = batch_ys,
    dropout_rate = 0.0
  )
  train_acc = net.calculate_complex_accuracy (train_logit, batch_ys)
  net.write_summary ('Train', {"Accuracy": train_acc, "Cost": train_cost})

  valid_xs, valid_ys = next (valid_minibatches)
  valid_cost, valid_logit = net.run (
    net.cost, net.logit,
    n_sample = len (valid_ys), x = valid_xs, y = valid_ys,
    dropout_rate = 0.0
  )
  valid_acc = net.calculate_complex_accuracy (valid_logit, valid_ys)
  net.write_summary ('Valid', {"Accuracy": valid_acc, "Cost": valid_cost})

  # stop on overfitting; a checkpoint is saved whenever valid cost reaches a new low
  if net.is_overfit (valid_cost, './checkpoint'):
      break

Training Multiple Models

You can train completely separate models at the same time.

Unlike multi-task training, in this case the models share training data but have no shared layers between them. For example, model A might solve a regression problem while model B solves a classification problem.

First, give each model a name, which is used for checkpoint saving and TensorBoard logging.

import mydnn

# the first positional argument is presumably the per-model GPU memory
# fraction, like gpu_usage = 0.4 in the single-model example above
net1 = mydnn.ModelA (0.3, name = 'my_model_A')
net2 = mydnn.ModelB (0.2, name = 'my_model_B')

Next, the y vectors should be concatenated. Assume ModelA uses the first 4 elements and ModelB the last 3.

# y length is 7
y = [0.5, 4.3, 5.6, 9.4, 0, 1, 0]
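
For example, assuming per-sample labels regression_y (for ModelA) and onehot_y (for ModelB), both hypothetical names, the combined vector is a plain concatenation:

import numpy as np

regression_y = [0.5, 4.3, 5.6, 9.4]   # ModelA: 4 regression targets
onehot_y = [0, 1, 0]                  # ModelB: 3-class one-hot label
y = np.concatenate ([regression_y, onehot_y])   # length 7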

Then combine the models into a MultiDNN.

from dnn import multidnn

net = multidnn.MultiDNN (net1, 4, net2, 3)

The rest of the code is the same as in the single-DNN case.
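
As a minimal sketch, assuming MultiDNN exposes the same trainable / run interface as DNN (this page does not state so explicitly), one training step with the concatenated labels looks like:

from dnn import split

net.trainable (start_learning_rate = 0.0001)
# each row of train_ys is a length-7 concatenated label vector
train_minibatches = split.minibatch (train_xs, train_ys, 128)

batch_xs, batch_ys = next (train_minibatches)
net.run (
  net.optimizer, net.learning_rate,
  n_sample = len (batch_ys), x = batch_xs, y = batch_ys,
  dropout_rate = 0.5
)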

If you need to exclude data from a specific model, you can use a filter function.

import numpy as np

def exclude (ys, xs = None):
  # keep only samples whose label vector carries any signal (sum > 0);
  # note it returns (ys, xs) in that order
  nxs, nys = [], []
  for i, y in enumerate (ys):
      if np.sum (y) > 0:
          nys.append (y)
          if xs is not None:
              nxs.append (xs [i])
  return np.array (nys), np.array (nxs)

net1.set_filter (exclude)
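
With this filter set, samples whose labels sum to zero (here, rows carrying no targets for net1) are presumably dropped from net1's batches before training.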

Export Model

To export a model for serving,

import mydnn

net = mydnn.MyDNN (gpu_usage = 0.4)
net.restore ('./checkpoint')
version = net.export (
  './export',
  'predict_something',
  inputs = {'x': net.x, 'dropout_rate': net.dropout_rate},
  outputs={'pred': net.pred}
)
print ("version {} has been exported".format (version))

Helpers

There are several helper modules.

from dnn import split, costs, predutil, vector, optimizers
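
Their APIs are not documented on this page; as one example, split.minibatch (already used in the training loop above) appears to behave like a generator cycling over (x, y) batches:

from dnn import split
import numpy as np

xs = np.zeros ((1000, 24, 1024), dtype = "float32")
ys = np.zeros ((1000, 3), dtype = "float32")

minibatches = split.minibatch (xs, ys, 128)
batch_xs, batch_ys = next (minibatches)  # yields batches of size 128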

History

  • 0.1: project initialized
