Abstraction layer used by the DKIST Science Data Processing pipelines to process DKIST data using Apache Airflow.

Overview

The dkist-processing-core package provides an abstraction layer between the DKIST data processing code, the workflow engine that supports it (Airflow), and the logging infrastructure. Providing this abstraction layer over Airflow specifically is what allows a versioning system to be implemented.

Core, Common, and Instrument Brick Diagram

There are four main entities that implement the abstraction; they are described below.

Task: The Task defines the interface used by a processing pipeline for a step in a workflow. By conforming to this interface (i.e. subclassing it), processing pipelines can remain agnostic of how the tasks will ultimately be run. The Task additionally implements some methods that should be global for all DKIST processing tasks based on the infrastructure they will run on (e.g. application performance monitoring infrastructure).

Node: The job of the Node is to translate a Task into code that can instantiate that task. Instantiations of a Task can vary depending on the target environment, e.g. a virtual environment with a BashOperator for Airflow vs. plain Python for a notebook.

Workflow: The Workflow defines the interface used by the processing pipeline to chain tasks together in a directed graph. The Workflow transforms this graph into the workflow engine's format by providing the wrapping boilerplate, ordering the tasks, and selecting the appropriate Node instantiation.

Build Utils: The Build Utils are the capstone layer, easing the transformation of multiple workflows at a time during a processing pipeline's build process.

Usage

The Workflow and Task are the primary objects used by client libraries. The Task is used as a base class and the subclass must at a minimum implement run. A Workflow is used to give the tasks an order of execution and a name for the flow.

from dkist_processing_core import TaskBase
from dkist_processing_core import Workflow

# Task definitions
class MyTask1(TaskBase):
    def run(self):
        print("Running MyTask1")


class MyTask2(TaskBase):
    def run(self):
        print("Running MyTask2")

# Workflow definition
# MyTask1 -> MyTask2
w = Workflow(
    process_category="My",
    process_name="Workflow",
    workflow_package=__package__,
    workflow_version="dev",
)
w.add_node(MyTask1, upstreams=None)
w.add_node(MyTask2, upstreams=MyTask1)

Using dkist-processing-core for data processing with Airflow involves a project structure and build process that result in code artifacts deployed to PyPI and a zip of workflow artifacts deployed to Artifactory.

Build Artifacts Diagram

Client DKIST data processing libraries should implement a structure and build pipeline using dkist-processing-test as an example. The build pipelines for a client repo can leverage the build_utils for test and export, as sketched below.
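
As a rough illustration of what the test and export steps of such a build pipeline might look like, the sketch below assumes helper functions named validate_workflows and export_dags in dkist_processing_core.build_utils, as well as a hypothetical client package named my_pipeline; consult dkist-processing-test for the authoritative names and signatures.

from dkist_processing_core import build_utils

# Hypothetical client package containing the Workflow definitions; the name
# is an assumption used only for illustration.
import my_pipeline.workflows as workflows

# Test step: check that every Workflow in the package can be transformed into
# a valid engine workflow (helper name is an assumption).
build_utils.validate_workflows(workflows)

# Export step: write the workflow artifacts that will be zipped and deployed
# (helper name and path parameter are assumptions).
build_utils.export_dags(workflows, path="dist/workflows/")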

Specifically for Airflow, the resulting deployment makes the versioned workflow artifacts available to the scheduler and the versioned code artifacts available to the workers for task execution.

Airflow Deployment Diagram

Build

dkist-processing-core is built using Bitbucket Pipelines.

Deployment

dkist-processing-core is deployed to PyPI.

Environment Variables

  • BUILD_VERSION (STR, default: dev): Build/Export pipelines only. This value is appended to all artifacts and represents their unique version; when built, it makes its way into the workflow (DAG) name.

  • MESH_CONFIG (JSON, default: {}): Provides the dkistdc cloud mesh configuration, specifically the location of the message broker.

  • ISB_USERNAME (STR, default: guest): Message broker user name.

  • ISB_PASSWORD (STR, default: guest): Message broker password.

  • ISB_EXCHANGE (STR, default: master.direct.x): Message broker exchange name for publishing messages.

  • ISB_QUEUE_TYPE (STR, default: classic): Message broker queue type for transporting messages.

  • ELASTIC_APM_SERVICE_NAME (STR, no default): Service name used by Elastic Application Performance Monitoring.

  • ELASTIC_APM_OTHER_OPTIONS (STR, default: {}): Dictionary of configuration for the Elastic Application Performance Monitoring client.

  • ELASTIC_APM_ENABLED (BOOL, default: FALSE): Flag to enable/disable Elastic Application Performance Monitoring client calls, which are chatty if not connected to an APM server.
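
For reference, unset variables fall back to the defaults listed above. The snippet below is a minimal, illustrative sketch of how those defaults behave when read from the environment; it is not the package's actual configuration code.

import json
import os

# Illustrative only: unset variables fall back to the documented defaults.
build_version = os.environ.get("BUILD_VERSION", "dev")
mesh_config = json.loads(os.environ.get("MESH_CONFIG", "{}"))
isb_username = os.environ.get("ISB_USERNAME", "guest")
isb_exchange = os.environ.get("ISB_EXCHANGE", "master.direct.x")
apm_enabled = os.environ.get("ELASTIC_APM_ENABLED", "FALSE").upper() == "TRUE"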

Development

A prerequisite for test execution is a running instance of RabbitMQ and a running Docker daemon on the local machine. For RabbitMQ the tests will use the default guest/guest credentials, a host IP of 127.0.0.1, and port 5672 to connect to the broker. Getting Docker set up varies by system, but the tests will use the default Unix socket for the Docker daemon.
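
As an optional sanity check that a local broker is reachable with those defaults before running the tests, something like the snippet below can be used. It uses pika purely as an example client; pika is an assumption here, not a test dependency of this package.

import pika

# Optional pre-flight check: connect to RabbitMQ on 127.0.0.1:5672 with the
# default guest/guest credentials used by the tests.
params = pika.ConnectionParameters(
    host="127.0.0.1",
    port=5672,
    credentials=pika.PlainCredentials("guest", "guest"),
)
connection = pika.BlockingConnection(params)
print("RabbitMQ reachable:", connection.is_open)
connection.close()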

To run the tests locally, clone the repository and install the package in editable mode with the test extras.

git clone git@bitbucket.org:dkistdc/dkist-processing-core.git
cd dkist-processing-core
pre-commit install
pip install -e .[test]
# RabbitMQ and Docker need to be running
pytest -v --cov dkist_processing_core

Changelog

When you make any change to this repository, it MUST be accompanied by a changelog file. The changelog for this repository uses the towncrier package. Entries in the changelog for the next release are added as individual files (one per change) to the changelog/ directory.

Writing a Changelog Entry

A changelog entry accompanying a change should be added to the changelog/ directory. The name of a file in this directory follows a specific template:

<PULL REQUEST NUMBER>.<TYPE>[.<COUNTER>].rst

The fields have the following meanings:

  • <PULL REQUEST NUMBER>: This is the number of the pull request, so people can jump from the changelog entry to the diff on Bitbucket.

  • <TYPE>: This is the type of the change and must be one of the values described below.

  • <COUNTER>: This is an optional field; if you make more than one change of the same type you can append a counter to the subsequent changes, i.e. 100.bugfix.rst and 100.bugfix.1.rst for two bugfix changes in the same PR.

The list of possible types is defined in the towncrier section of pyproject.toml; the types are:

  • feature: This change is a new code feature.

  • bugfix: This is a change which fixes a bug.

  • doc: A documentation change.

  • removal: A deprecation or removal of public API.

  • misc: Any small change which doesn’t fit anywhere else, such as a change to the package infrastructure.

Rendering the Changelog at Release Time

When you are about to tag a release, you must first run towncrier to render the changelog. The steps for this are as follows:

  • Run towncrier build --version vx.y.z, using the version number you want to tag.

  • Agree to have towncrier remove the fragments.

  • Add and commit your changes.

  • Tag the release.

NOTE: If you forget to add a changelog entry to a tagged release (either manually or automatically with towncrier) then the Bitbucket pipeline will fail. To be able to use the same tag you must delete it locally and on the remote:

# First, actually update the CHANGELOG and commit the update
git commit

# Delete tags
git tag -d vWHATEVER.THE.VERSION
git push --delete origin vWHATEVER.THE.VERSION

# Re-tag with the same version
git tag vWHATEVER.THE.VERSION
git push --tags origin main

