A Snakemake executor plugin for submitting jobs to an LSF cluster.
Snakemake executor plugin: LSF
LSF is a common high-performance computing batch system.
Specifying Project and Queue
LSF clusters can have mandatory resource indicators for accounting and scheduling, Project and Queue, respectively. These resources are usually omitted from Snakemake workflows in order to keep the workflow definition independent from the platform. However, it is also possible to specify them inside of the workflow as resources in the rule definition (see the resources section of the Snakemake documentation).
To specify them at the command line, define them as default resources:
$ snakemake --executor lsf --default-resources lsf_project=<your LSF project> lsf_queue=<your LSF queue>
If individual rules require e.g. a different queue, you can override the default per rule:
$ snakemake --executor lsf --default-resources lsf_project=<your LSF project> lsf_queue=<your LSF queue> --set-resources <somerule>:lsf_queue=<some other queue>
Usually, it is advisable to persist such settings via a configuration profile, which can be provided system-wide, per user, and in addition per workflow.
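A minimal profile sketch for persisting these settings could look like the following config.yaml (the profile location and the placeholder values are illustrative, not prescribed by this plugin):

# config.yaml of a Snakemake profile (values are placeholders)
executor: lsf
default-resources:
  lsf_project: "<your LSF project>"
  lsf_queue: "<your LSF queue>"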
Ordinary SMP jobs
Most jobs will be carried out by programs which are either single-core scripts or threaded programs, hence SMP (shared memory programs) in nature. Any given threads and mem_mb requirements will be passed to LSF:
rule a:
    input: ...
    output: ...
    threads: 8
    resources:
        mem_mb=14000
This will give jobs from this rule 14 GB of memory and 8 CPU cores. It is advisable to use reasonable default resources, such that you don't need to specify them for every rule. Snakemake already has reasonable defaults built in, which are automatically activated when using any non-local executor (hence also with lsf). Use mem_mb_per_cpu to request the standard LSF type of memory per CPU.
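For example, a rule that requests memory per reserved CPU instead of per job might look like this (the rule name and the numbers are illustrative):

rule b:
    input: ...
    output: ...
    threads: 8
    resources:
        mem_mb_per_cpu=1800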
MPI jobs
Snakemake's LSF backend also supports MPI jobs; see the MPI section of the Snakemake documentation for details.
rule calc_pi:
    output:
        "pi.calc",
    log:
        "logs/calc_pi.log",
    threads: 40
    resources:
        tasks=10,
        mpi="mpirun",
    shell:
        "{resources.mpi} -np {resources.tasks} calc-pi-mpi > {output} 2> {log}"
The mpi resource can be overridden at invocation time, for example to use mpiexec instead of mpirun:
$ snakemake --set-resources calc_pi:mpi="mpiexec" ...
Advanced Resource Specifications
A workflow rule may support a number of resource specifications. For an LSF cluster, a mapping between Snakemake and LSF needs to be performed.
You can use the following specifications:
LSF | Snakemake | Description |
---|---|---|
-q | lsf_queue | the queue a rule/job is to use |
-W | walltime | the walltime per job in minutes |
--constraint | constraint | may hold features on some clusters |
-R "rusage[mem=<memory_amount>]" | mem, mem_mb | memory a cluster node must provide (mem: string with unit, mem_mb: in megabytes) |
-R "rusage[mem=<memory_amount>]" | mem_mb_per_cpu | memory per reserved CPU |
Each of these can be part of a rule, e.g.:
rule:
    input: ...
    output: ...
    resources:
        lsf_queue="<queue name>",
        walltime=<some number>
walltime and runtime are synonyms.
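For example, the runtime of a particular rule can be overridden at invocation time without touching the workflow (the rule name here is a placeholder):

$ snakemake --executor lsf --set-resources somerule:runtime=120 ...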
Please note: the resources mem/mem_mb and mem_mb_per_cpu are mutually exclusive. You can either reserve the memory a compute node has to provide, or the memory required per CPU (LSF does not make any distinction between real CPU cores and those provided by hyperthreads). The executor will convert the provided options based on the cluster configuration.
Additional custom job configuration
There are various bsub options not directly supported via the resource definitions shown above. You may use the lsf_extra resource to specify additional flags to bsub:
rule myrule:
    input: ...
    output: ...
    resources:
        lsf_extra="-R a100 -gpu num=2"
Again, it is advisable to use a profile to specify such resources.
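A sketch of how this could look in a profile's config.yaml (the rule name is taken from the example above; the extra inner quotes around the value are an assumption to keep the string with spaces intact when the resource is parsed):

# excerpt of a profile config.yaml (illustrative)
set-resources:
  myrule:
    lsf_extra: "'-R a100 -gpu num=2'"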