A framework to define a machine learning pipeline
Note that this documentation is a tad bit outdated. I will be updating it as soon as I can.
ml-pipeline
I use this pipeline to simplify my life when working on ML projects.
Installation
This can be installed using pip:

```shell
pip install mlpipeline
```
Usage (tl;dr version)
1. Extend `mlpipeline.helper.Experiment` and `mlpipeline.helper.DataLoader` to suit your needs.
2. Define the versions using the interface provided by `mlpipeline.utils.Versions`. Version parameters that must be defined:
   - `mlpipeline.utils.version_parameters.NAME`
   - `mlpipeline.utils.version_parameters.DATALOADER`
   - `mlpipeline.utils.version_parameters.BATCH_SIZE`
   - `mlpipeline.utils.version_parameters.EPOC_COUNT`
3. Place the script(s) containing the above in a specified directory.
4. Add the directory to `mlp.config`.
5. Add the name of the script to `experiments.config`.
6. (Optional) Add the name of the script to `experiments_test.config`.
7. (Optional) Run the experiment in test mode to ensure the safety of your sanity:

   ```shell
   mlpipeline
   ```

8. Execute the pipeline:

   ```shell
   mlpipeline -r -u
   ```

9. Anything saved to the `experiment_dir` passed through `mlpipeline.utils.Experiment.train_loop` and `mlpipeline.utils.Experiment.evaluate_loop` will be available to access. The output and logs can be found in the `outputs/log-<hostname>` and `outputs/output-<hostname>` files relative to the directory in step 3 above.
Usage (Long version)
Experiment scripts
The experiment script is a python script that contains a global variable `EXPERIMENT`, which holds an `mlpipeline.helper.Experiment` object. Ideally, one would extend the `mlpipeline.helper.Experiment` class and implement its methods to perform the intended tasks (refer to the documentation in `mlpipeline.helper` for more details).
Place experiment scripts in a separate folder. Note that this folder can be anywhere on your system. Add the path to the folder in which the scripts are placed to the `mlp.config` file.
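As an illustration, a minimal experiment script might look like the following. This is a sketch, not the library's documented API: the `train_loop`/`evaluate_loop` signatures and the class name `MyExperiment` are assumptions, and the `ImportError` fallback exists only so the sketch is self-contained without `mlpipeline` installed.

```python
# Hypothetical sketch of an experiment script. The method signatures
# below are assumptions; check the mlpipeline.helper documentation.
try:
    from mlpipeline.helper import Experiment
except ImportError:
    # Minimal stand-in so this sketch runs without mlpipeline installed.
    class Experiment:
        pass


class MyExperiment(Experiment):
    def train_loop(self, input_fn, steps):
        # Train the model; anything written to the experiment
        # directory is preserved between runs (in execution mode).
        pass

    def evaluate_loop(self, input_fn, steps):
        # Evaluate the model and record the results.
        pass


# The pipeline looks for this module-level variable.
EXPERIMENT = MyExperiment()
```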
The recommended directory structure in this case is as follows:

```
/<project>
    /experiment
        <experiment scripts>
    mlp.config
    experiments.config
    experiments_test.config
```

The mlpipeline will be executed from this directory.
For example: a sample experiment can be seen in `examples/sample-project/experiments/sample_experiment.py`. The default `mlp.config` file points to the experiments folder. The `examples/sample-project/` directory is a sample directory structure for a project.
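A minimal `mlp.config` along those lines might contain something like the fragment below. The section and key names here are purely hypothetical; check the default `mlp.config` shipped with the package for the actual ones.

```
[MLP]
experiments_dir = experiments
```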
Versions (I should choose a better term for this)
- `mlpipeline.utils.version_parameters.NAME`: A string used to keep track of the training history; this name will be appended to the logs and outputs. This parameter must be set for each version.
- `mlpipeline.utils.version_parameters.DATALOADER`: An `mlpipeline.helper.DataLoader` object. Simply put, it is a wrapper for a dataset. You'll have to extend the `mlpipeline.helper.DataLoader` class to fit your needs. This object will be used by the pipeline to infer details about a training process, such as the number of steps (refer to the documentation in `mlpipeline.helper` for more details). As of the current version of the pipeline, this parameter is mandatory.
- `mlpipeline.utils.version_parameters.EXPERIMENT_DIR_SUFFIX`: Each version of the experiment that has completed the training loop will be allocated a directory which can be used to save outputs (e.g. checkpoint files). When an experiment is being trained with a different set of versions, if `allow_delete_experiment_dir` is set to `True` in the `EXPERIMENT`, the directory will be cleared as defined in `mlpipeline.helper.Experiment.clean_experiment_dir` (note that the behaviour of this function is not implemented by default, to avoid a disaster). Sometimes you may want a different directory for each version of the experiment; in such a case, pass a string to this parameter, which will be appended to the directory name.
- `mlpipeline.utils.version_parameters.BATCH_SIZE`: The batch size used in the experiment's training loop. As of the current version of the pipeline, this parameter is mandatory.
- `mlpipeline.utils.version_parameters.EPOC_COUNT`: The number of epochs that will be used. As of the current version of the pipeline, this parameter is mandatory.
- `mlpipeline.utils.version_parameters.ORDER`: This is set to ensure the versions are loaded in the order they are defined. This value can be passed to a version to override this behaviour.
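To illustrate the parameters above, defining a version might look roughly like this. Only the parameter names are taken from the list above; `add_version` and its keyword style are assumptions about the `mlpipeline.utils.Versions` interface, and the `ImportError` fallback classes exist only to keep the sketch self-contained.

```python
# Hypothetical sketch; add_version and its signature are assumptions
# about the mlpipeline.utils.Versions interface.
try:
    from mlpipeline.utils import Versions, version_parameters
except ImportError:
    # Stand-ins so the sketch runs without mlpipeline installed.
    class version_parameters:
        NAME = "name"
        DATALOADER = "dataloader"
        BATCH_SIZE = "batch_size"
        EPOC_COUNT = "epoc_count"

    class Versions:
        def __init__(self):
            self._versions = {}

        def add_version(self, name, **parameters):
            self._versions[name] = parameters


VERSIONS = Versions()
# DATALOADER, BATCH_SIZE and EPOC_COUNT are mandatory; NAME is the
# first argument here.
VERSIONS.add_version(
    "v1-baseline",
    **{
        version_parameters.DATALOADER: None,  # an mlpipeline.helper.DataLoader in practice
        version_parameters.BATCH_SIZE: 32,
        version_parameters.EPOC_COUNT: 10,
    },
)
```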
Executing experiments
You can have any number of experiments in the experiments folder. Add the names of the scripts to the `experiments.config` file. If `use_blacklist` is false, only the scripts whose names are listed under `[WHITELISTED_EXPERIMENTS]` will be executed. If it is set to true, all scripts except the ones under `[BLACKLISTED_EXPERIMENTS]` will be executed. Note that experiments can be added to or removed from the execution queue while the pipeline is running (assuming they have not already been executed). That is, after each experiment is executed, the pipeline will re-load the config file.
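For example, an `experiments.config` might look like the following. The script names here are placeholders, and whether entries include the `.py` extension is an assumption; only the two section names come from the description above.

```
[WHITELISTED_EXPERIMENTS]
sample_experiment.py

[BLACKLISTED_EXPERIMENTS]
old_experiment.py
```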
You can execute the pipeline by running the python script:

```shell
python pipeline.py
```

Note: this will run the pipeline in test mode (read "The two modes" below for more information).
Outputs
The outputs and logs will be saved in files in a folder named `outputs` inside the experiments folder. There are two files the user would want to keep track of (note that `<hostname>` is the host name of the system on which the pipeline is being executed):

- `log-<hostname>`: This file contains the logs.
- `output-<hostname>`: This file contains the output results of each "incarnation" of an experiment.
Note that the other files are used by the pipeline to keep track of training sessions previously launched.
The two modes
The pipeline can be executed in two modes: test mode and execution mode. When you are developing an experiment, you'd want to use the test mode. When executed without any additional arguments, the pipeline runs in test mode. Note that the test mode uses its own config file, `experiments_test.config`, which functions similarly to the `experiments.config` file. To execute in execution mode, pass `-r` to the above command:

```shell
python pipeline.py -r
```
Differences between test mode and execution mode (default behaviour):
| Test mode | Execution mode |
|---|---|
| Uses `experiments_test.config` | Uses `experiments.config` |
| The experiment directory is a temporary directory which will be cleared each time the experiment is executed | The experiment directory is a directory defined by the name of the experiment and the version's `EXPERIMENT_DIR_SUFFIX` |
| If an exception is raised, the pipeline will halt its execution by raising the exception to the top level | Any exception raised will not stop the pipeline; the error will be logged and the pipeline will continue processing the other versions and experiments |
| No results or logs will be recorded in the output files | All logs and outputs will be recorded in the output files |
Extra
I use an experiment log to maintain the experiments, which kinda ties into how I use the pipeline. For more info on that, see: Experiment log - How I keep track of my ML experiments
The practices I have grown to follow are described in this post: Practices I follow with the machine learning pipeline
Other projects that address similar problems (I'll be trying to combine them in future iterations of the pipeline):