
SSEC-JHU dplutils


Distributed Data Pipeline Utilities

Usage:

Setup

Get (or build, see below) the docker image

docker pull {TBD}

Start cluster

To start a cluster, start one ray head node and any number of worker nodes on network-connected hosts. To start the head node as a container under the docker engine, the following can be used:

docker run -d -n rayhead -v /path/to/data:/data --net host \
  dplutils /opt/startray.sh --head --block

which will start the head node (blocking so that the container stays up). The --net host option is given to expose all open ports on the host, as ray requires several bi-directional connections to workers. The -v ... option is an example of mounting a local path into the container so it can access files (for example, a directory containing source data, or an output directory). The head node also serves a dashboard on port 8265 that can be viewed in a web browser.

On each worker, similarly start a container using the command:

docker run -d -v /path/to/data:/data --net host \
  dplutils /opt/startray.sh --block --address={head-node}:6379

For hosts with custom resources (that is, resources other than the auto-detected ones such as CPUs and GPUs), you can pass them to the start command:

docker run -d --net host -v /path/to/data:/data \
  dplutils /opt/startray.sh --block --address={head-node}:6379 \
  --resources '{"mycustomresource": 1}'
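The --resources value must be a JSON object mapping resource names to numeric quantities (this mirrors ray's own --resources flag). A quick way to sanity-check the string before passing it on the command line, using only the standard library:

```python
import json

# The string passed to --resources must parse as a JSON object
# whose values are numbers (the resource quantities).
resources = json.loads('{"mycustomresource": 1}')
assert all(isinstance(v, (int, float)) for v in resources.values())
print(resources)  # {'mycustomresource': 1}
```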

In the dashboard you should see the workers listed in the clusters tab.

Start pipeline

Pipelines can be run via interactive python sessions or asynchronously. In an interactive session, one would import or define a pipeline and then call its run or writeto method to kick off execution. For longer-running or production jobs, it is generally advisable to submit a job to ray. dplutils contains helpers that make it easy to run configurable pipelines via the command line. For example, assuming a script like:

from dplutils.pipeline import PipelineTask
from dplutils.pipeline.ray import RayDataPipelineExecutor
from dplutils.cli import cli_run

if __name__ == '__main__':
    pl = RayDataPipelineExecutor([PipelineTask('task1', lambda x: x.assign(newcol=1))])
    cli_run(pl)
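The task in the script above operates on batches of data as pandas DataFrames: x.assign(newcol=1) adds a constant column to each batch. A minimal standalone sketch of that transformation, using pandas only (the batch contents here are made up for illustration, and no ray cluster is needed):

```python
import pandas as pd

# A hypothetical input batch; pipeline tasks receive DataFrame
# batches like this and return a transformed DataFrame.
batch = pd.DataFrame({"value": [10, 20, 30]})

# The same function used for 'task1' in the script above
task_fn = lambda x: x.assign(newcol=1)
out = task_fn(batch)

print(out.columns.tolist())    # ['value', 'newcol']
print(out["newcol"].tolist())  # [1, 1, 1]
```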

We can submit the job in the following way, assuming a container named rayhead is already running with the ray head node started (see above):

docker exec -it rayhead ray job submit -- python /path/to/script.py -o outdir

Note that since this runs within the container environment, paths refer to locations inside the container, which are not necessarily the same as in the host environment.

The progress and log files can be viewed on the ray dashboard, and generated data will be available as a parquet table written to /outdir (one file per batch, as each completes).

Installation, Build, & Run instructions

An "official" docker image based on the latest release is provided, but for development, custom builds, or running outside of a containerized environment, below are instructions for installing the code from the source repository.

Setup

Install dependencies:

  • pip install -r requirements/dev.txt

Tests

Run tox:

  • tox -e test, to run just the tests
  • tox, to run linting, tests, and build. This should complete without errors prior to commit

Docker

From the repo directory, run

docker build -f docker/Dockerfile --tag dplutils .
