A Python library for easily connecting Omero (jobs) and a Slurm cluster
Omero Slurm Client library
The `omero_slurm_client` Python package is a library that facilitates working with a Slurm cluster in the context of the Omero platform.

The package includes the `SlurmClient` class, which extends the Fabric library's `Connection` class to provide SSH-based connectivity and interaction with a Slurm cluster. The package enables users to submit jobs, monitor job status, retrieve job output, and perform other Slurm-related tasks.

Additionally, the package offers functionality for configuring and managing paths to Slurm data and Singularity images, as well as specific image models and their associated repositories.

Overall, the `omero_slurm_client` package simplifies the integration of Slurm functionality within the Omero platform and provides an efficient workflow for working with Slurm clusters.
Quickstart
For a quick overview of what this library can do for you, we can install an example setup locally with Docker:

- Setup a local Omero w/ this library:
  - Follow the Quickstart of https://github.com/TorecLuik/docker-example-omero-grid-amc
- Setup a local Slurm w/ SSH access:
  - Follow the Quickstart of https://github.com/TorecLuik/slurm-docker-cluster
- Upload some data with OMERO.insight to the `localhost` server
- Try out some scripts from https://github.com/NL-BioImaging/omero-slurm-scripts (already installed!):
  - Run script `slurm/init/SLURM Init environment...`
    - Get a coffee or something. This will take at least 10 min to download all the workflow images. Maybe write a nice review on image.sc of this software, or here on the Discussions tab of Github.
  - Select your image / dataset and run script `slurm/workflows/SLURM Run Workflow...`
    - Select at least one of the `Select how to import your results` options, e.g. change the `Import into NEW Dataset` text to `hello world`
    - Select a fun workflow, e.g. `cellpose`.
      - Change the `nuc channel` to the channel to segment
      - Uncheck the `use gpu` box, unless you have set up a nice Slurm w/ GPU
    - Refresh your Omero `Explore` tab to see your `hello world` dataset with a mask image when the workflow is done.
Prerequisites & Getting Started
Slurm Requirements
Note: This library has only been tested on Slurm versions 21.08.6 and 22.05.09!
Your Slurm cluster/login node needs to have:
- SSH access w/ public key (headless)
- SCP access (generally comes with SSH)
- 7zip installed
- Singularity/Apptainer installed
- (Optional) Git installed
Omero Requirements
Your Omero processing node needs to have:

- SSH client and access to the Slurm cluster (w/ private key / headless)
- SCP access to the Slurm cluster
- Python 3.6+
- This library installed (`python3 -m pip install 'git+https://github.com/NL-BioImaging/omero-slurm-client'`)
- Configuration setup at `/etc/slurm-config.ini`
- Requirements for some scripts: `python3 -m pip install ezomero==1.1.1 tifffile==2020.9.3`
Your Omero server node needs to have:

- Some Omero example scripts installed to interact with this library:
  - My examples on github: https://github.com/NL-BioImaging/omero-slurm-scripts
  - Install those at `/opt/omero/server/OMERO.server/lib/scripts/slurm/`, e.g. `git clone https://github.com/NL-BioImaging/omero-slurm-scripts.git <path>/slurm`
Getting Started
To connect an Omero processor to a Slurm cluster using the `omero_slurm_client` library, follow these steps:

1. Setup passwordless public key authentication between your Omero `processor` server and your HPC server. E.g. follow a SSH tutorial or this one.
   - You could use 1 Slurm account for all `processor` servers, and share the same private key with all of them.
   - Or you could use unique accounts, but give them all the same alias in step 2.
2. Create a SSH config file named `config` in the `.ssh` directory of (all) the Omero `processor` servers, within the `omero` user's home directory (`~/.ssh/config`). This file should specify the hostname, username, port, and private key path for the Slurm cluster, under some alias. This alias we will provide to the library. We provide an example in the resources directory; a minimal sketch is also shown below.
   - This will allow a uniform SSH naming, and makes the connection headless; making it easy for the library.
   - Test the SSH connection manually! `ssh slurm` (as the omero user) should connect you to the Slurm server (given that you named it `slurm` in the `config`).
   - Congratulations! Now the servers are connected. Next, we make sure to setup the connection between Omero and Slurm.
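   For illustration, a minimal `~/.ssh/config` entry might look like this sketch (the hostname, user, and key path below are placeholders, not the actual resources example):

   ```
   Host slurm
       HostName hpc.example.org
       User omero
       Port 22
       IdentityFile ~/.ssh/id_rsa
   ```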
3. At this point, ensure that the `slurm-config.ini` file is correctly configured with the necessary SSH and Slurm settings, including the host, data path, images path, and model details. Customize the configuration according to your specific Slurm cluster setup. We provide an example in the resources section. To read it automatically, place this `ini` file in one of the following locations (on the Omero `processor` server):
   - `/etc/slurm-config.ini`
   - `~/slurm-config.ini`
4. Install Omero scripts from Omero Slurm Scripts, e.g.

   ```
   cd OMERO_DIST/lib/scripts
   git clone https://github.com/NL-BioImaging/omero-slurm-scripts.git slurm
   ```
5. To finish setting up your `SlurmClient` and Slurm server, run it once with `init_slurm=True`. This is provided in Omero script form at init/Slurm Init environment, which you just installed in the previous step.
   - Provide the configfile location explicitly if it is not one of the defaults defined earlier; otherwise you can omit that field.
   - Please note the requirements for your Slurm cluster. We do not install Singularity / 7zip on your cluster for you (at the time of writing).
   - This operation will create the directories you provided in the `slurm-config.ini`, pull any described Singularity images to the server (note: might take a while), and generate (or clone from Git) any job scripts for these workflows:
   ```python
   with SlurmClient.from_config(configfile=configfile,
                                init_slurm=True) as slurmClient:
       slurmClient.validate(validate_slurm_setup=True)
   ```
With the configuration files in place, you can utilize the `SlurmClient` class from the Omero-Slurm-client library to connect to the Slurm cluster over SSH, enabling the submission and management of Slurm jobs from an Omero processor.
OMERO.scripts
The easiest interaction from Omero with this library currently is through OMERO.scripts.

We have provided example Omero scripts showing how to use this in https://github.com/NL-BioImaging/omero-slurm-scripts (hopefully installed in a previous step).

For example, workflows/Slurm Run Workflow should provide an easy way to send data to Slurm, run the configured and chosen workflow, poll Slurm until jobs are done (or error out) and retrieve the results when the job is done. This workflow script uses some of the other scripts, like:

- `data/Slurm Image Transfer`: to export your selected images / dataset / screen as TIFF files to a Slurm dir.
- `data/Slurm Get Results`: to import your Slurm job results back into Omero as a zip, dataset or attachment.
Other example Omero scripts are:

- `data/Slurm Get Update`: to run while you are waiting on a job to finish on Slurm; it will try to get a `%` progress from your job's logfile. Depends on your job/workflow logging a `%` of course.
- `workflows/Slurm Run Workflow Batched`: This will allow you to run several `workflows/Slurm Run Workflow` in parallel, by batching your input images into smaller chunks (e.g. turn 64 images into 2 batches of 32 images each). It will then poll all these jobs.
- `workflows/Slurm CellPose Segmentation`: This is a more primitive script that only runs the actual workflow `CellPose` (if correctly configured). You will need to manually transfer data first (with `Slurm Image Transfer`) and manually retrieve data afterward (with `Slurm Get Results`).
You can always create your own custom scripts.
See the tutorials
I have also provided tutorials on connecting to a Local or Cloud Slurm, and tutorials on how to add your FAIR workflows to this setup. Those can give some more insights as well.
SSH
Note: this library is built for SSH-based connections. If you could, it would be a lot easier to just have the Omero `processor` server and the `slurm` client server be (on) the same machine: then you can just directly call `sbatch` and other `slurm` commands from Omero scripts, and Slurm would have better access to your data.

This library is mainly for those cases where you already have an external HPC cluster and want to connect your Omero instance to it.

Theoretically, you could extend the `SlurmClient` class and change the `run` commands to not use SSH, but just a `subprocess`; a rough sketch of that idea is shown below. But then you could also look at other Python libraries, like submitit.
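For illustration only, such a subclass might look like this sketch (the import path is an assumption based on the package name, and Fabric's real `run` has a richer signature and return type; this is not a drop-in replacement):

```python
import subprocess

# Assumption: the package exposes SlurmClient at its top level.
from omero_slurm_client import SlurmClient


class LocalSlurmClient(SlurmClient):
    """Hypothetical sketch: run Slurm commands locally instead of over SSH."""

    def run(self, command, **kwargs):
        # Call e.g. `sbatch` / `squeue` directly via a local subprocess
        return subprocess.run(command, shell=True,
                              capture_output=True, text=True)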
SlurmClient class
The `SlurmClient` class is the main entrypoint for using this library. It is a Python class that extends the `Connection` class from the Fabric library. It allows connecting to and interacting with a Slurm cluster over SSH.

It includes attributes for specifying paths to directories for Slurm data and Singularity images, as well as specific paths, repositories, and Dockerhub information for different Singularity image models.

The class provides methods for running commands on the remote Slurm host, submitting jobs, checking job status, retrieving job output, and tailing log files.

It also offers a `from_config` class method to create a `SlurmClient` object by reading configuration parameters from a file. Overall, the class provides a convenient way to work with Slurm clusters and manage job execution and monitoring. A minimal usage sketch is shown below.
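For illustration, connecting and running a command could look like this sketch (the import path is an assumption based on the package name; `from_config` is described above, and `run` is inherited from Fabric's `Connection`):

```python
# Assumption: the package exposes SlurmClient at its top level.
from omero_slurm_client import SlurmClient

# Read SSH/Slurm settings from a config file (see slurm-config.ini below)
with SlurmClient.from_config(configfile="/etc/slurm-config.ini") as slurmClient:
    # `run` comes from Fabric's Connection: execute a command on the Slurm host
    result = slurmClient.run("sinfo --version")
    print(result.stdout)
```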
slurm-config.ini
The `slurm-config.ini` file is a configuration file used by the `omero_slurm_client` Python package to specify various settings related to SSH and Slurm. Here is a brief description of its contents:

[SSH]: This section contains SSH settings, including the alias for the SLURM SSH connection (host). Additional SSH configuration can be specified in the user's SSH config file or in `/etc/fabric.yml`.

[SLURM]: This section includes settings specific to Slurm. It defines the paths on the SLURM entrypoint for storing data files (slurm_data_path), container image files (slurm_images_path), and Slurm job scripts (slurm_script_path). It also specifies the repository (slurm_script_repo) from which to pull the Slurm scripts.

[MODELS]: This section is used to define different model settings. Each model has a unique key and requires corresponding values for `<key>_repo` (repository containing the descriptor.json file, which will describe parameters and where to find the image) and `<key>_job` (jobscript name and location in the `slurm_script_repo`). The example shows settings for several segmentation models, including Cellpose, Stardist, CellProfiler, DeepCell, and ImageJ.

The `slurm-config.ini` file allows users to configure paths, repositories, and other settings specific to their Slurm cluster and the `omero_slurm_client` package, providing flexibility and customization options.
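Putting these sections together, a minimal `slurm-config.ini` could look like the sketch below; the keys come from the descriptions above, but the paths are illustrative placeholders:

```ini
[SSH]
# Alias of the Slurm connection in the user's SSH config file
host=slurm

[SLURM]
# Paths on the Slurm entrypoint (placeholders)
slurm_data_path=/home/user/my-scratch/data
slurm_images_path=/home/user/my-scratch/singularity_images/workflows
slurm_script_path=/home/user/my-scratch/slurm-scripts
# Repository to pull the Slurm job scripts from
slurm_script_repo=https://github.com/TorecLuik/slurm-scripts

[MODELS]
# One workflow, following the <key>, <key>_repo, <key>_job pattern
cellpose=cellpose
cellpose_repo=https://github.com/TorecLuik/W_NucleiSegmentation-Cellpose/tree/v1.2.7
cellpose_job=jobs/cellpose.sh
```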
How to add an existing workflow
To add an existing (containerized) workflow, add it to the `slurm-config.ini` file like in our example:

```ini
# -------------------------------------
# CELLPOSE SEGMENTATION
# -------------------------------------
# The path to store the container on the slurm_images_path
cellpose=cellpose
# The (e.g. github) repository with the descriptor.json file
cellpose_repo=https://github.com/TorecLuik/W_NucleiSegmentation-Cellpose/tree/v1.2.7
# The jobscript in the 'slurm_script_repo'
cellpose_job=jobs/cellpose.sh
```
Here,

- the name referenced for this workflow is `cellpose`
- the location of the container on Slurm will be `<slurm_images_path>/cellpose`
- the code repository is https://github.com/TorecLuik/W_NucleiSegmentation-Cellpose
- the specific version we want is `v1.2.7`
- the container can be found on bitbucket
  - under the path given in the metadata file: descriptor.json
- the location of the jobscript on Slurm will be `<slurm_script_repo>/jobs/cellpose.sh`.
  - This either references a git repo, where it matches this path,
  - or it will be the location where the library will generate a jobscript (if no repo is given)
Workflow metadata via descriptor.json
A lot of the automation in this library is based on metadata of the workflow, provided in the source code of the workflow, specifically the descriptor.json.

For example, the Omero script UI can be generated automatically, based on this descriptor. And also, the Slurm job script can be generated automatically, based on this descriptor.

This metadata scheme is (based on) Cytomine / BIAFLOWS, and you can find details of it and how to create one yourself on their website, e.g. this Cytomine dev-guide or this BIAFLOWS dev-guide.

NOTE! We do not require the `cytomine_<...>` authentication parameters. They are not mandatory; in fact, we ignore them. But it might be beneficial to make your workflow compatible with Cytomine as well.
Schema
At this point, we are using the `cytomine-0.1` schema; in the future we will also want to support other schemas, like Boutiques, commonwl or MLFlow. We will try to stay compatible with all such schemas (perhaps with less functionality because of missing metadata).

At this point, we do not strictly validate the schema; we just read expected fields from the `descriptor.json`. A sketch of such a descriptor is shown below.
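For illustration, a heavily trimmed `descriptor.json` might look like this sketch; the field names follow the cytomine-0.1 / BIAFLOWS convention as best we know it, and all values here are placeholders (check the linked dev-guides and example repositories for the authoritative format):

```json
{
  "name": "MyWorkflow",
  "description": "Example workflow descriptor (placeholder values)",
  "schema-version": "cytomine-0.1",
  "container-image": {
    "image": "myuser/w_myworkflow",
    "type": "singularity"
  },
  "command-line": "python wrapper.py MY_PARAMETER",
  "inputs": [
    {
      "id": "my_parameter",
      "value-key": "@ID",
      "command-line-flag": "--@id",
      "name": "My parameter",
      "description": "An example numeric parameter (placeholder)",
      "type": "Number",
      "optional": true,
      "default-value": 0
    }
  ]
}
```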
Multiple versions
Note that while it is possible to have multiple versions of the same workflow on Slurm (and select the desired one in Omero), it is not possible to configure this yet. We assume for now you only want one version to start with. You can always update this config to download a new version to Slurm.
I/O
Unless you change the `Slurm` job, the input is expected to be:

- The `infolder` parameter
  - pointing to a folder with multiple input files/images
- The `gtfolder` parameter (Optional)
  - pointing to a folder of `ground-truth` input files, generally not needed for prediction / processing purposes
- The `outfolder` parameter
  - where you write all your output files (to get copied back to Omero)
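For illustration, a workflow script honoring this contract might parse these folders like the sketch below (the argument names match the contract above; everything else is a placeholder):

```python
import argparse
from pathlib import Path

# Hypothetical entrypoint honoring the infolder/gtfolder/outfolder contract
parser = argparse.ArgumentParser(description="Example workflow I/O (sketch)")
parser.add_argument("--infolder", type=Path, required=True,
                    help="Folder with multiple input files/images")
parser.add_argument("--gtfolder", type=Path, default=None,
                    help="Optional folder with ground-truth files")
parser.add_argument("--outfolder", type=Path, required=True,
                    help="Folder to write all output files to")
args = parser.parse_args()

for image in sorted(args.infolder.iterdir()):
    # ... process `image` and write the result into args.outfolder ...
    print(f"Would process {image} -> {args.outfolder}")
```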
Wrapper.py
Note that you can also use the wrapper.py setup from BIAFLOWS to handle the I/O for you:

```python
with BiaflowsJob.from_cli(argv) as bj:
    # Change following to the actual problem class of the workflow
    ...

    # 1. Prepare data for workflow
    in_imgs, gt_imgs, in_path, gt_path, out_path, tmp_path = prepare_data(problem_cls, bj, is_2d=True, **bj.flags)

    # 2. Run image analysis workflow
    bj.job.update(progress=25, statusComment="Launching workflow...")

    # Add here the code for running the analysis script

    # 3. Upload data to BIAFLOWS
    ...
    # 4. Compute and upload metrics
    ...
    # 5. Pipeline finished
    ...
```
This wrapper handles the input parameters for you, providing the input images as `in_imgs`, et cetera. Then you add your commandline call between points 2 and 3, and possibly some preprocessing between points 1 and 2:

```python
# Add here the code for running the analysis script
```
For example, from the Cellpose container workflow:

```python
# Imports used by this excerpt (added for completeness)
import os
import shutil
import subprocess
import sys

import imageio

...
# 2. Run image analysis workflow
bj.job.update(progress=25, statusComment="Launching workflow...")

# Add here the code for running the analysis script
prob_thresh = bj.parameters.prob_threshold
diameter = bj.parameters.diameter
cp_model = bj.parameters.cp_model
use_gpu = bj.parameters.use_gpu
# Note: nuc_channel and resized were set during preprocessing in step 1 (see below)
print(f"Chosen model: {cp_model} | Channel {nuc_channel} | Diameter {diameter} | Cell prob threshold {prob_thresh} | GPU {use_gpu}")
cmd = ["python", "-m", "cellpose", "--dir", tmp_path,
       "--pretrained_model", f"{cp_model}", "--save_tif", "--no_npy",
       "--chan", "{:d}".format(nuc_channel),
       "--diameter", "{:f}".format(diameter),
       "--cellprob_threshold", "{:f}".format(prob_thresh)]
if use_gpu:
    print("Using GPU!")
    cmd.append("--use_gpu")
status = subprocess.run(cmd)

if status.returncode != 0:
    print("Running Cellpose failed, terminate")
    sys.exit(1)

# Crop to original shape
for bimg in in_imgs:
    shape = resized.get(bimg.filename, None)
    if shape:
        img = imageio.imread(os.path.join(tmp_path, bimg.filename_no_extension + "_cp_masks.tif"))
        img = img[0:shape[0], 0:shape[1]]
        imageio.imwrite(os.path.join(out_path, bimg.filename), img)
    else:
        shutil.copy(os.path.join(tmp_path, bimg.filename_no_extension + "_cp_masks.tif"),
                    os.path.join(out_path, bimg.filename))

# 3. Upload data to BIAFLOWS
```
We get the commandline parameters from `bj.parameters` (the biaflows job) and build the `cmd` commandline list from them. Then we run it with `subprocess.run(cmd)` and check the returned `status`.

We use a `tmp_path` to store both input and output, then move the output to the `out_path` after the processing is done.
Also note that some preprocessing is done in step 1:

```python
# Make sure all images have at least 224x224 dimensions
# and that minshape / maxshape * minshape >= 224
# 0 = Grayscale (if input RGB, convert to grayscale)
# 1,2,3 = rgb channel
nuc_channel = bj.parameters.nuc_channel
resized = {}
for bfimg in in_imgs:
    ...
    imageio.imwrite(os.path.join(tmp_path, bfimg.filename), img)
```
Another example is this `imageJ` wrapper:

```python
...
# 3. Call the image analysis workflow using the run script
nj.job.update(progress=25, statusComment="Launching workflow...")

command = "/usr/bin/xvfb-run java -Xmx6000m -cp /fiji/jars/ij.jar ij.ImageJ --headless --console " \
          "-macro macro.ijm \"input={}, output={}, radius={}, min_threshold={}\"".format(
              in_path, out_path, nj.parameters.ij_radius, nj.parameters.ij_min_threshold)
return_code = call(command, shell=True, cwd="/fiji")  # waits for the subprocess to return

if return_code != 0:
    err_desc = "Failed to execute the ImageJ macro (return code: {})".format(return_code)
    nj.job.update(progress=50, statusComment=err_desc)
    raise ValueError(err_desc)
```

Once again, this is just a commandline `--headless` call to `ImageJ`, wrapped in this Python script and this container.
How to add your new custom workflow
Building workflows like this will make them more FAIR (also for software) and uses best practices like code versioning and containerization!

Also take a look at our in-depth tutorial on adding a Cellprofiler pipeline as a workflow to Omero Slurm Client.

Here is a shorter version: say you have a script in Python and you want to make it available on Omero and Slurm. These are the steps required:

- Rewrite your script to be headless / executable on the commandline. This requires handling of commandline parameters as input.
  - Make sure the I/O matches the Slurm job, see the previous chapter.
- Describe these commandline parameters in a `descriptor.json` (see previous chapter). E.g. like this.
- Describe the requirements / environment of your script in a `requirements.txt`, like this. Make sure to pin your versions for future reproducibility!
- Package your script in a Docker container. E.g. like this.
  - Note: Please watch out for the pitfalls of reproducibility with Dockerfiles: always version your packages!
- Publish your source code, Dockerfile and descriptor.json to a new Github repository (free for public repositories). You can generate a new repository from a template, using this template provided by Neubias (BIAFLOWS). Then replace the contents of the files with yours.
- (Recommended) Publish a new version of your code (e.g. v1.0.0). E.g. like this.
- Publish your container on Dockerhub (free for public repositories), using the same versioning as your source code. Like this from Windows Docker or like this from a commandline.
  - (Recommended) Please use a tag that equals your repository version, instead of `latest`. This improves reproducibility!
  - (Optional) This library grabs `latest` if the code repository is given no version, but the `master` branch.
- Follow the steps from the previous chapter:
  - Add details to `slurm-config.ini`
  - Run `SlurmClient.from_config(init_slurm=True)`
Slurm jobs
Generating jobs
By default, `omero_slurm_client` will generate basic Slurm jobs for each workflow, based on the metadata provided in `descriptor.json` and a job template. It will replace `$PARAMS` with the (non-`cytomine_`) parameters given in `descriptor.json`. See also the Parameters section below.
How to add your own Slurm job
You could change the job template and generate new jobs, by running `SlurmClient.from_config(init_slurm=True)` (or `slurmClient.update_slurm_scripts(generate_jobs=True)`).

Or you could add your jobs to a Github repository and reference it in `slurm-config.ini`, both in the field `slurm_script_repo` and in every `<workflow>_job`:
```ini
# -------------------------------------
# REPOSITORIES
# -------------------------------------
# A (github) repository to pull the slurm scripts from.
#
# Note:
# If you provide no repository, we will generate scripts instead!
# Based on the job_template and the descriptor.json
#
slurm_script_repo=https://github.com/TorecLuik/slurm-scripts

[MODELS]
# -------------------------------------
# Model settings
# -------------------------------------
# ...
# -------------------------------------
# CELLPOSE SEGMENTATION
# -------------------------------------
# The path to store the container on the slurm_images_path
cellpose=cellpose
# The (e.g. github) repository with the descriptor.json file
cellpose_repo=https://github.com/TorecLuik/W_NucleiSegmentation-Cellpose/tree/v1.2.7
# The jobscript in the 'slurm_script_repo'
cellpose_job=jobs/cellpose.sh
```
You can update the jobs by calling `slurmClient.update_slurm_scripts()`, which will pull the repository (its default branch).

This might be useful, for example, if your workflow(s) have other hardware requirements than the default job asks for, or if you want to run more than just 1 singularity container.
Parameters
The library will provide the parameters from your `descriptor.json` as environment variables to the call. E.g. `set DIAMETER=0; sbatch ...`.

Other environment variables provided are:

- `DATA_PATH`
  - Made of `<slurm_data_path>/<input_folder>`. The base dir for data folders for this execution. We expect it to contain `/data/in`, `/data/gt` and `/data/out` folders in our template and data transfer setup.
- `IMAGE_PATH`
  - Made of `<slurm_images_path>/<model_path>`, as described in `slurm-config.ini`
- `IMAGE_VERSION`
- `SINGULARITY_IMAGE`
  - Already uses the `IMAGE_VERSION` above, as `<container_name>_<IMAGE_VERSION>.sif`
We (potentially) override the following Slurm job settings programmatically:

- `--mail-user={email}` (optional)
- `--time={time}` (optional)
- `--output=omero-%4j.log` (mandatory)

We could add more overrides in the future, and perhaps make them available as global configuration variables in `slurm-config.ini`.
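Putting parameters and overrides together, a submission might conceptually resemble this illustrative commandline (all values are placeholders; the actual invocation is composed by the library):

```
export DIAMETER=0
export DATA_PATH=/path/to/slurm_data/my-input-folder
sbatch --time=00:45:00 --output=omero-%4j.log jobs/cellpose.sh
```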
Batching
We can simply use `Slurm` for running your workflow 1:1, so 1 job to 1 workflow. This could speed up your workflow already, as `Slurm` servers are likely equipped with strong CPUs and GPUs.

However, `Slurm` is also built for parallel processing on multiple (or the same) servers. We can accomplish this by running multiple jobs for 1 workflow. This is simple for embarrassingly parallel tasks, like segmenting multiple images: just provide each job with a different set of input images. If you have 100 images, you could run 10 jobs on 10 images each and (given enough resources available for you on Slurm) that could be 10x faster. In theory, you could run 1 job per image, but at some point you run into the overhead cost of Slurm (and Omero) and it might actually slow down again (as you incur this cost 100 times instead of 10 times).
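For illustration, the batching idea boils down to chunking the input list, as in this minimal sketch (the `workflows/Slurm Run Workflow Batched` script does this for you; the submission call here is just a stand-in print):

```python
def chunk(items, size):
    """Yield successive `size`-sized chunks from `items`."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

image_ids = list(range(100))  # e.g. 100 Omero image IDs

# 10 batches of 10 images: one Slurm job per batch
for batch in chunk(image_ids, 10):
    print(f"Would submit 1 Slurm job for {len(batch)} images: {batch[0]}..{batch[-1]}")
```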
Using the GPU on Slurm
Note: the default Slurm job script will not request any GPU resources, because GPU resources are expensive and some programs do not work with a GPU.

We can enable the use of a GPU by either providing our own Slurm job scripts, or by setting an override value in `slurm-config.ini`:
```ini
# -------------------------------------
# CELLPOSE SEGMENTATION
# -------------------------------------
# The path to store the container on the slurm_images_path
cellpose=cellpose
# The (e.g. github) repository with the descriptor.json file
cellpose_repo=https://github.com/TorecLuik/W_NucleiSegmentation-Cellpose/tree/v1.2.7
# The jobscript in the 'slurm_script_repo'
cellpose_job=jobs/cellpose.sh
# Override the default job values for this workflow
# Or add a job value to this workflow
# If you don't want to override, comment out / delete the line.
# Run CellPose Slurm with 10 GB GPU
cellpose_job_gres=gpu:1g.10gb:1
```
In fact, any `..._job_...=...` configuration value will be forwarded to the Slurm commandline. Slurm commandline parameters override those in the script, so the line above requests 1 GPU of 10 GB for Cellpose.

E.g. you could also set a higher time limit:
```ini
# -------------------------------------
# CELLPOSE SEGMENTATION
# -------------------------------------
# The path to store the container on the slurm_images_path
cellpose=cellpose
# The (e.g. github) repository with the descriptor.json file
cellpose_repo=https://github.com/TorecLuik/W_NucleiSegmentation-Cellpose/tree/v1.2.7
# The jobscript in the 'slurm_script_repo'
cellpose_job=jobs/cellpose.sh
# Override the default job values for this workflow
# Or add a job value to this workflow
# If you don't want to override, comment out / delete the line.
# Run with longer time limit
cellpose_job_time=00:30:00
```
Now the CellPose job should run for a maximum of 30 minutes, instead of the default.
Transferring data

We have added methods to this library to help with transferring data to the `Slurm` cluster, using the same SSH connection (via SCP or SFTP):

- `slurmClient.transfer_data(...)`: transfer data to the Slurm cluster
- `slurmClient.unpack_data(...)`: unpack a zip file on the Slurm cluster
- `slurmClient.zip_data_on_slurm_server(...)`: zip data on the Slurm cluster
- `slurmClient.copy_zip_locally(...)`: transfer (zip) data from the Slurm cluster
- `slurmClient.get_logfile_from_slurm(...)`: transfer a logfile from the Slurm cluster

And more; see the docstrings of `SlurmClient` and the example Omero scripts. A rough sketch of a round-trip follows below.
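For illustration only: the method names below come from this library, but the exact arguments are assumptions here, so check the `SlurmClient` docstrings for the real signatures:

```python
# Assumption: the package exposes SlurmClient at its top level.
from omero_slurm_client import SlurmClient

with SlurmClient.from_config() as slurmClient:
    # 1. Send a zip of input images to the Slurm data path (argument assumed)
    slurmClient.transfer_data("my_data.zip")
    # 2. Unpack it on the Slurm side (argument assumed)
    slurmClient.unpack_data("my_data")

    # ... submit and poll your workflow job here ...

    # 3. Zip the results on Slurm and copy them back (arguments assumed)
    slurmClient.zip_data_on_slurm_server("my_results")
    slurmClient.copy_zip_locally("my_results.zip")
```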