Argo-Workflow backend extension for Jupyter-Scheduler.
argo-jupyter-scheduler
Submit long-running notebooks without needing to keep your JupyterLab server running, and submit notebooks to run on a specified schedule.
Installation
pip install argo-jupyter-scheduler
What is it?
Argo-Jupyter-Scheduler is a plugin to the Jupyter-Scheduler JupyterLab extension.
What does that mean?
This means Argo-Jupyter-Scheduler is an application that gets installed in the JupyterLab base image and runs as an extension in JupyterLab. Specifically, you will see its icon at the bottom of the JupyterLab Launcher tab and on the toolbar of your Jupyter Notebook.
This also means, as a lab extension, this application is running within each user's separate JupyterLab server. The record of the notebooks you've submitted is specific to you and you only. There is no central Jupyter-Scheduler.
However, instead of using the base Jupyter-Scheduler, we are using Argo-Jupyter-Scheduler.
Why?
If you want to run your Jupyter Notebook on a schedule, you need to be assured that the notebook will be executed at the times you specified. The fundamental limitation with Jupyter-Scheduler is that when your JupyterLab server is not running, Jupyter-Scheduler is not running. Then the notebooks you had scheduled won't run. What about notebooks that you want to run right now? If the JupyterLab server is down, then how will the status of the notebook run be recorded?
The solution is Argo-Jupyter-Scheduler: Jupyter-Scheduler front-end with an Argo-Workflows back-end.
A deeper dive
In the Jupyter-Scheduler lab extension, you can create two things: a Job and a Job Definition.
Job
A Job, or notebook job, is created when you submit your notebook to run.
In Argo-Jupyter-Scheduler, this Job translates into a Workflow in Argo-Workflows. So when you create a Job, your notebook job creates a Workflow that will run regardless of whether or not your JupyterLab server is running.
At the moment, permission to submit Jobs is required and is managed by the Keycloak roles for the argo-server-sso client. If your user has either the argo-admin or the argo-developer role, they will be permitted to create and submit Jobs (and Job Definitions).
We are also relying on the Nebari Workflow Controller to ensure the user's home directory and conda-store environments are mounted to the Workflow. This allows us to ensure:
- the files in the user's home directory can be used by the notebook job
- the output of the notebook can be saved locally
- when the conda environment that is used gets updated, it is also updated for the notebook job (helpful for scheduled jobs)
- the node-selector and image you submit your notebook job from are the same ones used by the workflow
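As a rough illustration of what this mounting amounts to, here is a minimal sketch of the kinds of fields that end up in the workflow's pod spec. All names below (claim names, image tag, node-selector label) are hypothetical; the actual values come from the Nebari Workflow Controller, not from this snippet.

```python
# Illustrative only: the kinds of pod-spec fields the Nebari Workflow
# Controller injects so the workflow matches the user's JupyterLab server.
# Every concrete value here is a made-up placeholder.
pod_spec_patch = {
    "volumes": [
        # the user's home directory, so notebook inputs/outputs work
        {"name": "home", "persistentVolumeClaim": {"claimName": "user-home"}},
        # conda-store environments, so env updates reach scheduled jobs too
        {"name": "conda-store", "persistentVolumeClaim": {"claimName": "conda-store"}},
    ],
    # same node-selector as the server the job was submitted from
    "nodeSelector": {"kubernetes.io/os": "linux"},
    # same image as the server the job was submitted from
    "container": {"image": "user-jupyterlab-image:latest"},
}
```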
Job Definition
A Job Definition is simply a way to create Jobs that run on a specified schedule. In Argo-Jupyter-Scheduler, a Job Definition translates into a Cron-Workflow in Argo-Workflows. So when you create a Job Definition, you create a Cron-Workflow which in turn creates a Workflow to run when scheduled.
A Job is to a Workflow as a Job Definition is to a Cron-Workflow.
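The mapping can be sketched with minimal stand-ins for the two Argo objects. These dicts are not complete specs; the schedule and entrypoint values are placeholders, but the nesting (a CronWorkflow wrapping a workflow spec under its schedule) mirrors how Argo-Workflows structures the two kinds.

```python
# Illustrative shapes only -- minimal stand-ins for the Argo objects,
# not complete manifests. Values are placeholders.
workflow = {
    "kind": "Workflow",
    "spec": {"entrypoint": "run-notebook"},  # a Job: run once, now
}
cron_workflow = {
    "kind": "CronWorkflow",
    "spec": {
        "schedule": "0 9 * * *",           # a Job Definition: run on a schedule
        "workflowSpec": workflow["spec"],  # each trigger creates a Workflow
    },
}
```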
Internals
Jupyter-Scheduler creates and uses a scheduler.sqlite database to manage and keep track of the Jobs and Job Definitions. If you can ensure this database is accessible and can be updated whenever the status of a job or a job definition changes, then you can ensure the view the user sees from JupyterLab is accurate.
By default this database is located at ~/.local/share/jupyter/scheduler.sqlite, but this is a traitlet that can be modified. And since we have access to this database, we can update it directly from the workflow itself.
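As an illustration of overriding that traitlet, a Jupyter config file could point the scheduler at a different database. The exact traitlet name (assumed here to be SchedulerApp.db_url) and URL scheme may differ between Jupyter-Scheduler versions, so treat this as a sketch rather than a recipe:

```python
# jupyter_server_config.py -- illustrative; the traitlet name
# (assumed here to be SchedulerApp.db_url) may vary by version.
c.SchedulerApp.db_url = "sqlite:///path/to/scheduler.sqlite"
```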
To accomplish this, the workflow runs in two steps. First, the workflow runs the notebook, using papermill and the conda environment specified. Second, depending on the success of this notebook run, it updates the database with the resulting status.
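The second step reduces to an UPDATE against the scheduler database. The sketch below uses a simplified one-table schema as a stand-in for Jupyter-Scheduler's real schema (which is defined by its ORM models), and only simulates the papermill outcome rather than running a notebook:

```python
import os
import sqlite3
import tempfile

# Simplified stand-in for Jupyter-Scheduler's scheduler.sqlite; the real
# schema is defined by jupyter_scheduler's ORM models.
db_path = os.path.join(tempfile.mkdtemp(), "scheduler.sqlite")
with sqlite3.connect(db_path) as conn:
    conn.execute("CREATE TABLE jobs (job_id TEXT PRIMARY KEY, status TEXT)")
    conn.execute("INSERT INTO jobs VALUES ('job-1', 'IN PROGRESS')")

def update_job_status(db_path: str, job_id: str, status: str) -> None:
    """Step 2: record the notebook run's outcome directly in the database,
    so the view the user sees in JupyterLab stays accurate."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("UPDATE jobs SET status = ? WHERE job_id = ?",
                     (status, job_id))

# Step 1 would be the papermill run inside the specified conda environment
# (e.g. `papermill input.ipynb output.ipynb`); here we assume it succeeded.
notebook_succeeded = True
update_job_status(db_path, "job-1",
                  "COMPLETED" if notebook_succeeded else "FAILED")

with sqlite3.connect(db_path) as conn:
    final_status = conn.execute(
        "SELECT status FROM jobs WHERE job_id = 'job-1'").fetchone()[0]
```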
And when a job definition is created, a corresponding cron-workflow is created. To ensure the database is properly updated, the workflow that the cron-workflow creates has three steps. First, it creates a job record in the database with a status of IN PROGRESS. Second, it runs the notebook, again using papermill and the conda environment specified. And third, it updates the newly created job record with the status of the notebook run.
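The three steps above can be sketched against the same simplified schema; the difference from the one-off Job is that the record is inserted by the workflow itself before the run, since no JupyterLab server created it. The notebook run is again only simulated:

```python
import sqlite3

# Simplified stand-in schema; in-memory database for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (job_id TEXT PRIMARY KEY, status TEXT)")

# Step 1: the scheduled workflow creates the job record itself.
conn.execute("INSERT INTO jobs VALUES ('sched-run-1', 'IN PROGRESS')")

# Step 2: run the notebook with papermill in the specified conda
# environment; here we only assume the run succeeded.
run_ok = True

# Step 3: update the newly created record with the run's outcome.
conn.execute("UPDATE jobs SET status = ? WHERE job_id = 'sched-run-1'",
             ("COMPLETED" if run_ok else "FAILED",))

cron_job_status = conn.execute(
    "SELECT status FROM jobs WHERE job_id = 'sched-run-1'").fetchone()[0]
```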
Additional Thoughts
At the moment, Argo-Jupyter-Scheduler is closely coupled with Nebari (via the Nebari-Workflow-Controller), which doesn't make it very usable for other projects. There's no need for this to necessarily be the case. By leveraging traitlets, we can include other ways of modifying the pod spec for the running workflow and enable it to be used by other projects. If you're interested in this project and would like to see it extended, feel free to open an issue to discuss your ideas. Thank you :)
Known Issues
All of the core features of Jupyter-Scheduler have been mapped over to Argo-Jupyter-Scheduler. Unfortunately, there is currently a limitation with Update Job Definition and with Pause/Resume for Job Definitions. Although Pause works, Resume fails for the same reason Update Job Definition does: the upstream Nebari-Workflow-Controller (see its Known Limitations) can't resubmit workflows/cron-workflows; there are more details in this issue.
License
argo-jupyter-scheduler is distributed under the terms of the MIT license.