Schedule parameterized notebooks programmatically using a CLI or a REST API
LabFunctions
:books: Description
LabFunctions empowers different data roles to put notebooks into production, reducing the time required to do so. It enables people to go from a data exploration instance to an entire project deployed in production, using the same notebook files made by a data scientist, analyst, or any other role that works with data iteratively.
LabFunctions is a library and a service that allows you to run parametrized notebooks in a distributed way.
A notebook can be launched remotely on demand, or scheduled to run at intervals or with cron syntax.
Internally it uses Sanic as the web server, papermill as the notebook executor, and RQ for task distribution and coordination.
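To give a feel for the task-distribution piece, here is a rough sketch of enqueuing a notebook run with RQ, assuming a Redis instance is available. The function, queue name, notebook path and parameters are illustrative, not LabFunctions' actual internals.

```python
# Rough sketch of distributing a notebook run with RQ (not LabFunctions' code).
from redis import Redis
from rq import Queue


def execute_notebook_task(notebook: str, params: dict) -> str:
    # In the real service this step would call papermill; here it is a stub.
    return f"executed {notebook} with params {params}"


queue = Queue("default", connection=Redis())
# An RQ worker process (e.g. `rq worker default`) picks the job up and runs it.
job = queue.enqueue(execute_notebook_task, "crawler.ipynb", {"max_pages": 10})
print(job.id)
```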
:tada: Demo :tada:
:floppy_disk: Example project
:telescope: Philosophy
LabFunctions isn't a complete MLOps solution.
We try hard to simplify and expose the right APIs for scheduling notebooks with reproducibility in mind.
We also try to give the user the same freedom that Lego bricks give, but we are opinionated in the sense that all code, artifacts, dependencies and environments should be declared.
From this point of view:
- Git is necessary :wink:
- Docker is necessary for environment reproducibility.
- Although you can push unversioned code, versioning is almost enforced, and it is always a good practice in software development
The idea comes from a Netflix post which suggests using notebooks as an interface, or a kind of DSL, to orchestrate different workloads such as Spark. But notebooks can also be used to run entire processes: training a model, crawling sites, performing ETLs, and so on.
The benefit of this approach is that executed notebooks can be stored and inspected, for better or worse. If something fails, it is easy to rerun them in the classical way: cell by cell.
The last point to clarify, and it may challenge the way we are used to working with Jupyter notebooks, is that each notebook is treated more like a function definition with inputs and outputs, so a single notebook can potentially be used for different purposes; hence the name workflow, an idea that is indeed common in the data space. A workflow is therefore a notebook with defined parameters that can be run whenever a user wants, altering the parameters sent or not.
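As a minimal illustration of the notebook-as-function idea, the snippet below runs a parametrized notebook directly with papermill; the notebook paths and parameters are hypothetical. Papermill injects the supplied values after the cell tagged "parameters", executes every cell, and saves the result as a new notebook that can be stored and inspected later.

```python
# Minimal sketch of "notebook as a function" using papermill directly.
import papermill as pm

pm.execute_notebook(
    "crawler.ipynb",                  # the "function" definition, with a cell tagged "parameters"
    "outputs/crawler-2022-06.ipynb",  # the stored, inspectable result of this "call"
    parameters={"start_url": "https://example.com", "max_pages": 100},
)
```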
:nut_and_bolt: Features
- Define a notebook like a function, and execute it on demand or on a schedule
- Automatic Dockerfile generation. A project should share a single environment, but can use different versions of that environment
- Execution History, Notifications to Slack or Discord.
- Cluster creation applying scaling policies based on idle time and/or enqueued items
Cluster options
It is possible to run different cluster configurations with custom autoscaling policies.
Instances inside a cluster can be created manually or automatically.
See a simple demo of GPU cluster creation:
https://www.youtube.com/watch?v=-R7lJ4dGI9s
Installation
Server
Docker-compose
The project provides a docker-compose.yaml file as an example.
:construction: Note :construction:
Because NB Workflows spawns a Docker instance for each workload, installation inside Docker containers can be tricky. The most difficult part is configuring the worker, which needs access to the Docker socket.
A Dockerfile is provided for customization of the uid and gid, which should match the local environment. A second alternative is to expose the Docker daemon through HTTP; in that case a DOCKER_HOST environment variable can be used, see the Docker client SDK.
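For instance, a minimal sketch with the Python Docker SDK, assuming a daemon exposed over HTTP (the host address below is only an example):

```python
# Sketch: pointing the Python docker SDK at an HTTP-exposed daemon instead of the local socket.
import docker

# from_env() honours the DOCKER_HOST environment variable,
# e.g. DOCKER_HOST=tcp://192.168.1.10:2375
client = docker.from_env()

# Equivalent explicit form; the address here is hypothetical.
client = docker.DockerClient(base_url="tcp://192.168.1.10:2375")
print(client.ping())  # True if the daemon is reachable
```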
git clone https://github.com/nuxion/labfunctions
cd labfunctions
The next step is initializing the database and creating a user (please review the script first):
docker-compose up -d postgres
./scripts/initdb_docker.sh
Now you can start everything else:
docker-compose up -d
Without docker
pip install nb-workflows[server]==0.6.0
first terminal:
export NB_SERVER=True
nb manager db upgrade
nb manager users create
nb web --apps workflows,history,projects,events,runtimes
second terminal:
nb rqworker -w 1 -q control,mch.default
Before all that, Redis, PostgreSQL and nginx in WebDAV mode should be configured.
Client
pip install nb-workflows==0.6.0
nb startproject .
:earth_americas: Roadmap
See Roadmap draft
:post_office: Architecture
:bookmark_tabs: References & inspirations
- Notebook Innovation - Netflix
- Tensorflow metastore
- Maintainable and collaborative pipelines
- The magic of Merlin
- Scale aware approach
Contributing
Bug reports and pull requests are welcome on GitHub at the issues page. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.
License
This project is licensed under Apache 2.0. Refer to LICENSE.txt.