R&D tools

Project description

Installation

pip install rndtools

Mini-framework for Keras model training

When you call the `train_model` function, the framework does a few useful things:

  1. creates directories if they do not exist,
  2. automatically saves the architecture into `architecture.json`,
  3. plots the model graph,
  4. saves the Python source code of `get_model_function` and `training_function`,
  5. draws loss and accuracy charts after each epoch,
  6. saves a CSV file with the learning history after each epoch,
  7. saves some meta information about the model, for example:
     • processing time
     • best train and test loss
     • date of model creation

What you should do:

  1. Implement the `load_data` function.

  2. Implement a function that returns a compiled Keras model. The function should not take any parameters. Example:

    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import Adam

    def get_model():
      model = Sequential()
      model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
      model.add(Dense(8, init='uniform', activation='relu'))
      model.add(Dense(1, init='uniform', activation='sigmoid'))
    
      model.compile(
        optimizer=Adam(),
        loss='binary_crossentropy',
        metrics=['accuracy']
      )
    
      return model
  3. Implement a function that trains the model. The function should return the model history. Example:

    def train(data, model, model_folder, callbacks=None):
      if callbacks is None:
        callbacks = []

      history = model.fit(data.X, data.Y, nb_epoch=150, batch_size=10, callbacks=callbacks)

      return history

    Pay attention to the `callbacks` parameter: there are some extra callbacks that you should add to the model callbacks. Also note that the function receives, as the `data` parameter, whatever the `load_data` function returns.
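For step 1, `load_data` just needs to return whatever your `train` function expects as `data`. Here is a minimal sketch; the container with `X` and `Y` attributes is an assumption based on the `train` example above, which reads `data.X` and `data.Y`:

```python
from collections import namedtuple

# The Dataset container and its X/Y fields are assumptions taken from
# the train example; adapt them to your own data.
Dataset = namedtuple('Dataset', ['X', 'Y'])

def load_data():
    # In practice you would read your dataset from disk here; dummy
    # values keep the sketch self-contained.
    X = [[0.0] * 8 for _ in range(10)]  # 10 samples, 8 features each
    Y = [0] * 10                        # 10 binary labels
    return Dataset(X=X, Y=Y)
```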

Example:

>>> import rndtools as rnd
>>> rnd.train.train_model(
...     model_dir,
...     get_model_function=get_model,
...     training_function=train,
...     loading_data_function=load_data
... )

Model path: /home/rd/notebooks/documents-detector/damian/models/in_the_wild/unet_mrz/7

------------------------------
Creating dirs...
------------------------------
------------------------------
Creating and compiling model...
------------------------------
------------------------------
Saving architecture...
------------------------------
------------------------------
Plotting model...
------------------------------
------------------------------
Saving model source code...
------------------------------
------------------------------
Loading data...
------------------------------
------------------------------
Instantiating callbacks...
------------------------------
------------------------------
Training model...
------------------------------
Epoch 1/1000

Finished!

Dataset In Parts Generator

Sometimes there is so much data that it is a problem to store it all in memory. In that case you can divide your dataset into parts and use `DatasetInPartsGenerator`, which loads the parts in turn, so only one part of the dataset is in memory at a time.
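The idea can be sketched with a plain Python generator. This illustrates only the concept; the real `DatasetInPartsGenerator` constructor and methods are not shown in this README, so all names below are illustrative:

```python
# Concept sketch: yield a large dataset part by part, so only one part
# is held in memory at a time. Not the actual DatasetInPartsGenerator API.
def iterate_in_parts(load_part, n_parts):
    """load_part(i) loads part i from disk; n_parts is the part count."""
    for i in range(n_parts):
        part = load_part(i)  # only this part lives in memory now
        yield part           # hand it to the consumer (e.g. a training loop)

# Usage sketch: pretend each "part" is a small list of samples on disk.
parts_on_disk = [[1, 2], [3, 4], [5, 6]]
stream = iterate_in_parts(lambda i: parts_on_disk[i], n_parts=3)
```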

Pipeline

A Pipeline consists of steps. Each step is a tuple of (step name, step instance). Example pipeline:

pipeline = Pipeline(
    (
        'grayscale',
        Grayscale()
    ),
    (
        'threshold',
        Threshold()
    ),
    (
        'blur',
        Blur(
            sigma=1.5
        )
    ),
    (
        'watershed',
        Watershed(
            min_distance=5,
            threshold_rel=0.1
        )
    ),
    show_progressbar=True
)

To create your own step, just inherit from `Step` and implement the `transform` method:

from rndtools.pipeline import Step

class CustomStep(Step):
    def transform(self, params):
        pass
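
As a sketch of what a concrete step might look like: the contents of `params` are not documented here, so the `'values'` key below is an assumption for illustration, and a stand-in `Step` base class keeps the example runnable without rndtools installed (in real code you would inherit from `rndtools.pipeline.Step`):

```python
# Stand-in base class so the sketch is self-contained; replace with
# `from rndtools.pipeline import Step` in real code.
class Step(object):
    def transform(self, params):
        raise NotImplementedError

# Hypothetical step that scales a list of values by a constant factor.
# The 'values' key in params is an assumption for illustration only.
class Scale(Step):
    def __init__(self, factor):
        self.factor = factor

    def transform(self, params):
        params['values'] = [v * self.factor for v in params['values']]
        return params

step = Scale(factor=2)
result = step.transform({'values': [1, 2, 3]})  # result['values'] == [2, 4, 6]
```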

Download files


Source Distribution

rndtools-3.0.7.tar.gz (17.5 kB view details)

Uploaded Source

File details

Details for the file rndtools-3.0.7.tar.gz.

File metadata

  • Download URL: rndtools-3.0.7.tar.gz
  • Upload date:
  • Size: 17.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for rndtools-3.0.7.tar.gz

  • SHA256: 7f1893cfaea9770d31be89cec58e97f867491888e131b3d412508a3b47da213e
  • MD5: 497702652490328558b83e91520576a4
  • BLAKE2b-256: a7a20ab1fd03fccdf825ec865d33d97510aa035acb2d1e16b3992999a971f319

