A framework of tools to structure, configure and drive deep learning projects
Project description
serotiny
In the course of building deep learning projects, several problems kept emerging:
- How do we reuse as much work from previous projects as possible, and focus on building the part of the project that makes it distinct?
- How can we automate the generation of new models that are based on existing models, but vary in a crucial yet non-trivial way?
- When generating a multiplicity of related models, how can we keep all of the results, predictions, and analyses straight?
- How can the results from any number of trainings and predictions be compared and integrated in an insightful yet generally applicable way?
Serotiny arose from the need to address these issues and convert the complexity of deep learning projects into something simple, reproducible, configurable, and automatable at scale.
Serotiny is still a work in progress, but as we go along, the solutions to these problems become clearer. Maybe you've run into similar situations? We'd love to hear from you.
Overview
serotiny is a framework and set of tools to structure, configure and drive deep learning projects, developed with the intention of streamlining the lifecycle of deep learning projects at the Allen Institute for Cell Science.
It achieves this goal by:
- Standardizing the structure of DL projects
- Relying on the modularity afforded by this standard structure to make DL projects highly configurable, using hydra as the configuration framework
- Making it easy to adopt best practices and the latest developments in DL infrastructure, by tightly integrating with:
  - PyTorch Lightning for neural net training/testing/prediction
  - MLflow for experiment tracking and artifact management
In doing so, DL projects become reproducible, easy to collaborate on, and able to benefit from general and powerful tooling.
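The hydra-based configuration mentioned above centers on configs in which a `_target_` key names the class to build, and the remaining keys become constructor arguments; hydra's `hydra.utils.instantiate` resolves such a config into a live object. The following is a simplified, stdlib-only sketch of that pattern (the `instantiate` helper here is our own illustration, not serotiny's or hydra's actual implementation):

```python
import importlib


def instantiate(cfg):
    """Build an object from a hydra-style config.

    A dict with a ``_target_`` key names the class (or callable) to
    construct; the remaining keys are passed as keyword arguments,
    themselves instantiated recursively. Anything else is returned
    unchanged.
    """
    if isinstance(cfg, dict) and "_target_" in cfg:
        module_path, _, attr = cfg["_target_"].rpartition(".")
        target = getattr(importlib.import_module(module_path), attr)
        kwargs = {k: instantiate(v) for k, v in cfg.items() if k != "_target_"}
        return target(**kwargs)
    return cfg


# A config like `model: {_target_: my_project.MyModel, hidden_dim: 64}`
# would be built the same way; here we use a stdlib class so the
# example is self-contained.
cfg = {"_target_": "datetime.timedelta", "hours": 1, "minutes": 30}
delta = instantiate(cfg)
print(delta.total_seconds())  # 5400.0
```

In a real hydra project, these dicts are composed from YAML config groups rather than written inline, which is what makes swapping a dataset or model a matter of changing a single command-line argument.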
Getting started
For more information, check out our documentation, or jump straight to our getting started page to learn how training a DL model can be as simple as:
$ serotiny train data=my_dataset model=my_model
Authors
- Guilherme Pires @colobas
- Ryan Spangler @prismofeverything
- Ritvik Vasan @ritvikvasan
- Caleb Chan @calebium
- Theo Knijnenburg @tknijnen
- Nick Gomez @gomeznick86
Citing
If you find serotiny useful, please cite this repository as:
Serotiny Authors (2022). Serotiny: a framework of tools to structure, configure and drive deep learning projects [Computer software]. GitHub. https://github.com/AllenCellModeling/serotiny
Free software: BSD-3-Clause
File details
Details for the file serotiny-0.0.9.dev202212122054.tar.gz.
File metadata
- Download URL: serotiny-0.0.9.dev202212122054.tar.gz
- Size: 52.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.16
File hashes
Algorithm | Hash digest
---|---
SHA256 | ea39a501cd3175f60c65adfca9d2578fef09e792fa3883092aafbc712cb6ebf9
MD5 | 323a48c7bf62bfca95a07518da0a35a6
BLAKE2b-256 | 04979430ebe759fda956c7161c8fad127d9014b52832644c919c42488b4afc39
File details
Details for the file serotiny-0.0.9.dev202212122054-py3-none-any.whl.
File metadata
- Download URL: serotiny-0.0.9.dev202212122054-py3-none-any.whl
- Size: 79.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.16
File hashes
Algorithm | Hash digest
---|---
SHA256 | 5bec40d22abb6a3d83e7895f4eee6d45f0c287c0a8e959800909d3e5e620df99
MD5 | ad4f689340cb23fffa68dfb004d63eb4
BLAKE2b-256 | 2f7de2c44bc57bf0112f33985d2035798484da1b4ca7d09a8acbd336eba15f38