
Manage training results, weights and data flow of your Tensorflow models

Project description


Manage your data pipeline and TensorFlow & Keras models with MLPipe. It is NOT another "wrapper" around TensorFlow, but rather adds utilities to set up an environment that controls data flow and manages trained models (weights & results) with the help of MongoDB.

pip install mlpipe-trainer

Setup - install MongoDB

A MongoDB database is used to store trained models, including their weights and results. Additionally, a data reader for MongoDB is implemented (essentially a generator, as you know and love from using Keras). Currently it is the only data reader that works "out of the box".
Follow the installation instructions on the MongoDB website for your platform (e.g. Linux).

Code Examples


# The config is used to specify the localhost connections
# for saving trained models to the mongoDB as well as fetching training data
from mlpipe.utils import Config

Each connection config consists of these fields in the .ini file:
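As a purely hypothetical sketch of what such an entry could look like (the section name matches the connection name used in code below; the field names are assumptions, not the library's confirmed schema — check the MLPipe docs for the exact keys):

```ini
; hypothetical connection entry – field names are illustrative only
[localhost_mongo_db]
host = localhost
port = 27017
user = read_user
pwd = secret
```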


Data Pipeline

from mlpipe.processors.i_processor import IPreProcessor
from mlpipe.data_reader.mongodb import MongoDBGenerator

class PreProcessData(IPreProcessor):
    def process(self, raw_data, input_data, ground_truth, piped_params=None):
        # Process raw_data to output input_data and ground_truth
        # which will be the input for the model
        return raw_data, input_data, ground_truth, piped_params

train_data = [...]  # consists of MongoDB ObjectIds that are used for training
processors = [PreProcessData()]  # chain of processors (in our case just one)
# Generator that can be used e.g. with keras' fit_generator()
train_gen = MongoDBGenerator(
    ("connection_name", "cifar10", "train"),  # data source: (connection, database, collection)
    train_data,  # the ObjectIds to train on
    processors=processors,
)

Data generators inherit from tf.keras.utils.Sequence. Check out the TensorFlow docs to find out how to write custom generators (e.g. for data sources other than MongoDB).
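As a minimal sketch of such a custom generator, the class below serves batches from an in-memory list. It is a stand-in invented for this example: in a real project it would subclass tf.keras.utils.Sequence and read from your actual data source, but the two methods shown (`__len__` and `__getitem__`) are exactly the interface a Sequence must provide.

```python
import math

# Sketch of a Sequence-style generator backed by a plain Python list.
# For real use with Keras, inherit from tf.keras.utils.Sequence instead.
class ListGenerator:
    def __init__(self, samples, labels, batch_size=2):
        self.samples = samples
        self.labels = labels
        self.batch_size = batch_size

    def __len__(self):
        # number of batches per epoch (last batch may be smaller)
        return math.ceil(len(self.samples) / self.batch_size)

    def __getitem__(self, idx):
        # return one batch of (input_data, ground_truth)
        start = idx * self.batch_size
        end = start + self.batch_size
        return self.samples[start:end], self.labels[start:end]

gen = ListGenerator([0.1, 0.2, 0.3, 0.4, 0.5], [0, 0, 1, 1, 1], batch_size=2)
print(len(gen))  # 3 batches per epoch
print(gen[2])    # the last batch holds the single remaining sample
```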


Model Creation

As long as there is a Keras (tensorflow.keras) model in the end, there are no restrictions on this step.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense
from tensorflow.keras import optimizers

model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)))
model.add(Flatten())  # flatten the feature maps before the classification layer
model.add(Dense(10, activation='softmax'))

opt = optimizers.RMSprop(lr=0.0001, decay=1e-6)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=["accuracy"])

Training and Callbacks

from mlpipe.callbacks import SaveToMongoDB

save_to_mongodb_cb = SaveToMongoDB(("localhost_mongo_db", "models"), "test", model)


SaveToMongoDB is a custom Keras callback class as described in the TensorFlow docs. Again, feel free to create custom callbacks for any specific needs.
If, instead of fit_generator(), each batch is trained one by one (e.g. with a native TensorFlow model), you can still loop over the generator. Just remember to call the callback methods at the appropriate steps, e.g. on_batch_end().
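The manual loop described above can be sketched as follows. DummyGenerator and CountingCallback are stand-ins invented for this example so the sketch is self-contained; in practice they would be your MongoDBGenerator and the SaveToMongoDB callback, and the marked line would run your actual training step.

```python
# Stand-in for a Sequence-style generator (e.g. MongoDBGenerator).
class DummyGenerator:
    def __init__(self, num_batches):
        self.num_batches = num_batches

    def __len__(self):
        return self.num_batches

    def __getitem__(self, idx):
        return ([idx], [idx % 2])  # fake (input_data, ground_truth) batch

# Stand-in for a Keras-style callback (e.g. SaveToMongoDB).
class CountingCallback:
    def __init__(self):
        self.batches_seen = 0
        self.epochs_seen = 0

    def on_batch_end(self, batch, logs=None):
        self.batches_seen += 1

    def on_epoch_end(self, epoch, logs=None):
        self.epochs_seen += 1

gen = DummyGenerator(num_batches=4)
cb = CountingCallback()
for epoch in range(2):
    for batch_idx in range(len(gen)):
        x, y = gen[batch_idx]
        # ... run one training step on (x, y) with your model here ...
        cb.on_batch_end(batch_idx)  # notify the callback after each batch
    cb.on_epoch_end(epoch)          # and after each epoch

print(cb.batches_seen, cb.epochs_seen)  # 8 2
```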

A full Cifar10 example can be found in the example folder of the repository.

Road Map

  • Create and generate MkDocs documentation & host the documentation
  • Add tests
  • Set Up CI

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mlpipe-trainer-0.5.1.tar.gz (17.0 kB)

Uploaded source

Built Distribution

mlpipe_trainer-0.5.1-py3-none-any.whl (23.0 kB)

Uploaded py3
