Cadasta Worker Toolbox

A collection of helpers to assist in quickly building asynchronous workers for the Cadasta system.

Architecture

Async System Architecture Diagram

The Cadasta asynchronous system is designed so that both the scheduled tasks and the task results can be tracked by the central Cadasta Platform. To ensure that this takes place, all Celery workers must be correctly configured to support these features.

Tracking Scheduled Tasks

To keep our system aware of all tasks being scheduled, the Cadasta Platform runs a process that consumes task messages from a task-monitor queue and inserts those messages into our database. To support this design, all task producers (including worker nodes) must publish their task messages to both the normal destination queues and the task-monitor queue. This is achieved by registering all queues with a Topic Exchange, setting the task-monitor queue to subscribe to all messages sent to the exchange, and setting standard work queues to subscribe only to messages with a matching routing_key. Because the Cadasta Platform is designed to work with Amazon SQS, and the SQS backend only keeps exchange/queue declarations in memory, each message producer must have this set up within its own configuration.
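
A minimal kombu sketch of the topology described above; the names ('task_exchange', 'platform.fifo', 'export') are illustrative stand-ins, not necessarily the values this library configures:

from kombu import Exchange, Queue

# Shared topic exchange that every producer declares.
exchange = Exchange('task_exchange', type='topic')

queues = [
    # The task-monitor queue receives a copy of every message ('#' wildcard).
    Queue('platform.fifo', exchange, routing_key='#'),
    # A standard work queue only receives messages with its own routing key.
    Queue('export', exchange, routing_key='export'),
]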

Tracking Task Results

Task results are inserted by each worker into the Platform DB. For this reason, it is important that each worker have network access to the Platform DB (via AWS Security Groups). Additionally, each worker should be provided with a username and password that grant it authorization to write to the Platform DB’s Result Table. For security, it is advised that these credentials be permitted to access only this single table. The Result Table has a one-to-one relation via the task_id column to the Task Table. This relation should not be enforced via a constraint, as it is possible for a task’s result to be entered into the DB before the sync-tasks service enters the task into the Task Table.
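
A hypothetical SQLAlchemy sketch of the relationship described above; the table and column names are illustrative, not the Platform’s actual schema:

from sqlalchemy import Column, Integer, String, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class TaskResult(Base):
    """One row per task result, written directly by workers."""
    __tablename__ = 'result_table'  # illustrative name

    id = Column(Integer, primary_key=True)
    # One-to-one with the Task Table by convention only: no ForeignKey is
    # declared, because the result may arrive before the task row exists.
    task_id = Column(String(155), unique=True, nullable=False)
    status = Column(String(50))
    result = Column(Text)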

Library

cadasta.workertoolbox.conf.Config

The Config class was built to simplify configuring Celery settings, helping to ensure that all workers adhere to the architecture requirements of the Cadasta asynchronous system. It essentially offers a diff between Celery’s default configuration and the configuration required by our system. The class aims to require little customization on the part of the developer; however, some customization may be needed when altering configuration between environments (e.g. if dev settings vary greatly from prod settings).

Any Celery setting may be submitted. It is internal convention that we use Celery’s newer lowercase settings rather than their older uppercase counterparts. This will ensure that they are displayed when calling repr on the Config instance.

Once applied, all settings (and internal variables) are available on the Celery app instance’s app.conf object.
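
A minimal sketch of wiring the class into a worker, assuming Config accepts the documented settings and internal variables as keyword arguments:

from celery import Celery
from cadasta.workertoolbox.conf import Config

# Pass any Celery setting to override the provided defaults (value is illustrative).
conf = Config(result_expires=3600)

app = Celery()
app.config_from_object(conf)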

Provided Configuration

result_backend

Defaults to 'db+postgresql://{0.RESULT_DB_USER}:{0.RESULT_DB_PASS}@{0.RESULT_DB_HOST}/{0.RESULT_DB_NAME}' rendered with self.

broker_transport

Defaults to 'sqs'.

broker_transport_options

Defaults to:

{
    'region': 'us-west-2',
    'queue_name_prefix': '{}-'.format(QUEUE_NAME_PREFIX)
}

task_queues

Defaults to the following set of kombu.Queue objects, where queues is the configuration’s internal QUEUES variable and exchange is a kombu.Exchange object constructed from the task_default_exchange and task_default_exchange_type settings:

set([
    Queue('celery', exchange, routing_key='celery'),
    Queue(platform_queue, exchange, routing_key='#'),
] + [
    Queue(q_name, exchange, routing_key=q_name)
    for q_name in queues
])

Note: It is recommended that developers not alter this setting.

task_routes

Defaults to a function that generates a dict whose routing_key is the first segment of the task name (split on .) and whose exchange is a kombu.Exchange object constructed from the task_default_exchange and task_default_exchange_type settings.
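
An illustrative router sketch of that default behaviour (not the library’s actual implementation):

from kombu import Exchange

def route_task(name, args, kwargs, options, task=None, **kw):
    """Route tasks to the shared topic exchange by the first segment of their name."""
    return {
        'routing_key': name.split('.')[0],
        'exchange': Exchange('task_exchange', type='topic'),
    }

# route_task('export.generate_csv', (), {}, {})
# -> routing_key 'export' on the 'task_exchange' topic exchange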

Note: It is recommended that developers not alter this setting.

task_default_exchange

Defaults to 'task_exchange'

task_default_exchange_type

Defaults to 'topic'

task_track_started

Defaults to True.

Internal Variables

By convention, all variables used to construct Celery configuration should be written entirely in uppercase.

QUEUES

This should contain an array of names for all service-related queues used by the Cadasta Platform. These values are used to construct the task_queues configuration. For the purposes of routing follow-up tasks, it’s important that every task consumer is aware of all queues available. For this reason, if a queue is used by any service worker, it should be specified within this array. It is not necessary to include the 'celery' or 'platform.fifo' queues. Defaults to the contents of the DEFAULT_QUEUES variable in the module’s __init__.py file (cadasta/workertoolbox/__init__.py).
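
A hedged sketch of extending that default; the import location of DEFAULT_QUEUES and the 'imports' queue name are assumptions:

from cadasta.workertoolbox import DEFAULT_QUEUES  # assumed to be importable from the package root
from cadasta.workertoolbox.conf import Config

# Keep the library defaults and register one additional service queue.
conf = Config(QUEUES=list(DEFAULT_QUEUES) + ['imports'])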

PLATFORM_QUEUE_NAME

Defaults to 'platform.fifo'.

Note: It is recommended that developers not alter this setting.

QUEUE_NAME_PREFIX

Used to populate the queue_name_prefix value of the connection’s broker_transport_options. Defaults to the value of the QUEUE_PREFIX environment variable if populated, 'dev' if not.
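
A small illustration of the effect of the prefix; the queue name is an example:

# The SQS transport prepends queue_name_prefix to every queue it declares,
# so the prefix separates environments.
QUEUE_NAME_PREFIX = 'dev'                     # e.g. from the QUEUE_PREFIX env var
prefix = '{}-'.format(QUEUE_NAME_PREFIX)      # becomes broker_transport_options['queue_name_prefix']
assert prefix + 'export' == 'dev-export'      # queue 'export' is created as 'dev-export' on SQS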

RESULT_DB_USER

Used to populate the default result_backend template. Defaults to RESULT_DB_USER environment variable if populated, 'cadasta' if not.

RESULT_DB_PASS

Used to populate the default result_backend template. Defaults to RESULT_DB_PASS environment variable if populated, 'cadasta' if not.

RESULT_DB_HOST

Used to populate the default result_backend template. Defaults to RESULT_DB_HOST environment variable if populated, 'localhost' if not.

RESULT_DB_PORT

Used to populate the default result_backend template. Defaults to RESULT_DB_PORT environment variable if populated, '5432' if not.

RESULT_DB_NAME

Used to populate the default result_backend template. Defaults to RESULT_DB_NAME environment variable if populated, 'cadasta' if not.
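
An illustrative sketch of how these variables feed the result_backend template shown earlier, assuming Config reads the environment at instantiation; all values are placeholders:

import os

os.environ['RESULT_DB_USER'] = 'cadasta'
os.environ['RESULT_DB_PASS'] = 'secret'        # placeholder
os.environ['RESULT_DB_HOST'] = 'db.internal'   # placeholder
os.environ['RESULT_DB_NAME'] = 'cadasta'

from cadasta.workertoolbox.conf import Config

conf = Config()
# Expected rendering of the default template:
# 'db+postgresql://cadasta:secret@db.internal/cadasta'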

cadasta.workertoolbox.tests.build_functional_tests

When provided with a Celery app instance, this function generates a suite of functional tests to ensure that the provided application’s configuration and functionality conform to the architecture of the Cadasta asynchronous system.

An example, where an instantiated and configured Celery() app instance exists in a parallel celery module:

from cadasta.workertoolbox.tests import build_functional_tests

from .celery import app

FunctionalTests = build_functional_tests(app)

To run these tests, use your standard test runner (e.g. pytest) or call manually from the command-line:

python -m unittest path/to/tests.py

Development

Testing

pip install -r requirements-test.txt
./runtests

Deploying

pip install -r requirements-deploy.txt
python setup.py test clean build publish tag
