Analysis runner
This tool helps make analysis results reproducible by automating the following aspects:
- Allow quick iteration using an environment that resembles production.
- Only allow access to production datasets through code that has been reviewed.
- Link the output data to the exact program invocation that generated it.
One of our main workflow pipeline systems at the CPG is Hail Batch. By default, its pipelines are defined by running a Python program locally. This tool instead lets you run the "driver" on Hail Batch itself.
Furthermore, all invocations are logged together with the output data, as well as to Airtable and the sample-metadata server.
When using the analysis-runner, the batch jobs are not run under your standard Hail Batch service account user (<USERNAME>-trial). Instead, a separate Hail Batch account is used to run the batch jobs on your behalf. There's a dedicated Batch service account for each dataset (e.g. "tob-wgs", "fewgenomes") and access level ("test", "standard", or "full", as documented in the team docs storage policies), which helps with bucket permission management and billing budgets.
Note that you can use the analysis-runner to start arbitrary jobs, e.g. R scripts. They're just launched in the Hail Batch environment, but you can use any Docker image you like.
The analysis-runner is also integrated with our Cromwell server to run WDL-based workflows.
CLI
The analysis-runner CLI can be used to start pipelines based on a GitHub repository, commit, and command to run.
First, make sure that your environment provides Python 3.10 or newer:
> python3 --version
Python 3.10.7
If the installed version is too old, on a Mac you can use brew to update, e.g.:
brew install python@3.10
Then install the analysis-runner Python package using pip:
python3 -m pip install analysis-runner
Run analysis-runner --help to see usage information.
Make sure that you're logged into GCP:
gcloud auth application-default login
If you're in the directory of the project you want to run, you can omit the --commit and --repository parameters, which will use your current git remote and commit HEAD, as sketched below.
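For reference, the inferred values correspond roughly to what these plain git commands report (an illustrative sketch; the CLI's actual detection logic may differ):
git remote get-url origin   # the repository name is derived from this remote URL
git rev-parse HEAD          # the commit SHA that will be used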
For example:
analysis-runner \
--dataset <dataset> \
--description <description> \
--access-level <level> \
--output-dir <directory-within-bucket> \
script_to_run.py with arguments
<level> corresponds to an access level as defined in the storage policies.
<directory-within-bucket> does not contain a prefix like gs://cpg-fewgenomes-main/. For example, if you want your results to be stored in gs://cpg-fewgenomes-main/1kg_pca/v2, specify --output-dir 1kg_pca/v2.
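To make the path composition concrete, here is a minimal Python sketch based on the example above (the bucket name shown is an assumption; the authoritative dataset-to-bucket mapping is defined by the storage policies):
# Hypothetical illustration of how the final output location is composed.
dataset = 'fewgenomes'
output_dir = '1kg_pca/v2'             # passed as --output-dir
bucket = f'gs://cpg-{dataset}-main'   # assumed bucket for this dataset and access level
full_path = f'{bucket}/{output_dir}'  # gs://cpg-fewgenomes-main/1kg_pca/v2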
If you provide a --repository, you MUST supply a --commit <SHA>, e.g.:
analysis-runner \
--repository my-approved-repo \
--commit <commit-sha> \
--dataset <dataset> \
--description <description> \
--access-level <level> \
--output-dir <directory-within-bucket> \
script_to_run.py with arguments
For more examples (including for running an R script and dataproc), see the examples directory.
Custom Docker images
The default driver image that's used to run scripts comes with Hail and some statistics libraries preinstalled (see the corresponding Hail Dockerfile). It's possible to use any custom Docker image instead, using the --image parameter. Note that any such image needs to contain the critical dependencies as specified in the base image.
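For example (a sketch; <your-docker-image> is a placeholder for any image that includes those dependencies):
analysis-runner \
--dataset <dataset> \
--description <description> \
--access-level <level> \
--output-dir <directory-within-bucket> \
--image <your-docker-image> \
script_to_run.py with arguments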
For R scripts, we add the R-tidyverse set of packages to the base image; see the corresponding R Dockerfile and the R example for more details.
Helper for Hail Batch
The analysis-runner package has a number of functions that make it easier to run reproducible analysis through Hail Batch.
This is installed in the analysis-runner driver image, i.e. you can access the analysis_runner module when running scripts through the analysis-runner.
Checking out a git repository at the current commit
import hailtop.batch as hb
from cpg_utils.git import (
prepare_git_job,
get_repo_name_from_current_directory,
get_git_commit_ref_of_current_repository,
)
b = hb.Batch('do-some-analysis')
j = b.new_job('checkout_repo')
prepare_git_job(
job=j,
organisation='populationgenomics',
# you could specify a name here, like 'analysis-runner'
repo_name=get_repo_name_from_current_directory(),
# you could specify the specific commit here, eg: '1be7bb44de6182d834d9bbac6036b841f459a11a'
commit=get_git_commit_ref_of_current_repository(),
)
# Now, the working directory of j is the checked-out repository
j.command('examples/bash/hello.sh')
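To execute the batch after defining its jobs, submit it as usual (a minimal sketch; backend configuration is omitted here):
b.run()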
Running a dataproc script
import hailtop.batch as hb
from analysis_runner.dataproc import setup_dataproc
b = hb.Batch('do-some-analysis')
# starts up a cluster, and submits a script to the cluster,
# see the definition for more information about how you can configure the cluster
# https://github.com/populationgenomics/analysis-runner/blob/main/analysis_runner/dataproc.py#L80
cluster = setup_dataproc(
b,
max_age='1h',
packages=['click', 'selenium'],
init=['gs://cpg-reference/hail_dataproc/install_common.sh'],
cluster_name='My Cluster with max-age=1h',
)
cluster.add_job('examples/dataproc/query.py', job_name='example')
Development
You can ignore this section if you just want to run the tool.
To set up a development environment for the analysis runner using pip, run the following:
pip install -r requirements-dev.txt
pre-commit install --install-hooks
pip install --editable .
Deployment
- Add a Hail Batch service account for all supported datasets.
- Copy the Hail tokens to the Secret Manager.
- Deploy the server by invoking the deploy_server workflow manually.
- Deploy the Airtable publisher.
- Publish the CLI tool and library to PyPI.
The CLI tool is shipped as a pip package. To build a new version, we use bump2version. For example, to increment the patch section of the version tag 1.0.0 and make it 1.0.1, run:
git checkout -b add-new-version
bump2version patch
git push --set-upstream origin add-new-version
# Open pull request
open "https://github.com/populationgenomics/analysis-runner/pull/new/add-new-version"
It's important that the pull request name starts with "Bump version:" (which should happen by default). Once this is merged into main, a GitHub Actions workflow will build a new package that will be uploaded to PyPI and become available to install with pip install.