
Cadasta Worker Toolbox

Project description


A collection of helpers to assist in quickly building asynchronous workers for the Cadasta system.

Library

cadasta.workertoolbox.conf.Config

The Config class was built to simplify configuring Celery settings, helping to ensure that all workers adhere to the architecture requirements of the Cadasta asynchronous system. It essentially offers a diff between Celery’s default configuration and the configuration required by our system. The class aims to require little customization on the part of the developer; however, some customization may be needed when configuration differs between environments (e.g. if dev settings vary greatly from prod settings).

Any Celery setting may be submitted via keyword argument or via environment variable. Arguments submitted via keyword argument are expected to comply with Celery’s newer lowercase settings rather than their older uppercase counterparts. Arguments provided by environment variable should be uppercase and prepended with the prefix CELERY_ (e.g. to set the task_track_started value, an environment variable of CELERY_TASK_TRACK_STARTED should be set). The prefix can be customized with a provided ENV_PREFIX keyword argument or CELERY_ENV_PREFIX environment variable. If both a keyword argument and an environment variable are provided for a setting, the keyword argument takes precedence. Settings with non-string defaults will have their environment variable values run through `ast.literal_eval <https://docs.python.org/3/library/ast.html#ast.literal_eval>`__, supporting Python native types such as bool or tuple. Only lowercase settings are shown when calling repr on the Config instance.

Once applied, all settings (and internal variables) are available on the Celery app instance’s app.conf object.
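For illustration, a minimal sketch of wiring the Config into a worker (the queue names here are hypothetical, and task_track_started is shown only to demonstrate overriding a setting by keyword argument):

from celery import Celery

from cadasta.workertoolbox.conf import Config

# 'export' and 'search' are hypothetical queue names
conf = Config(QUEUES=['export', 'search'], task_track_started=False)

app = Celery()
app.config_from_object(conf)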

Provided Configuration

Below is the configuration that the Config class will provide to a Celery instance.

result_backend

Defaults to 'db+postgresql://{0.RESULT_DB_USER}:{0.RESULT_DB_PASS}@{0.RESULT_DB_HOST}/{0.RESULT_DB_NAME}' rendered with self.
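As a sketch, a Config instance created with no overrides renders this template against the defaults listed under Internal Variables below:

from cadasta.workertoolbox.conf import Config

conf = Config()

# Renders to 'db+postgresql://cadasta:cadasta@localhost/cadasta'
backend = (
    'db+postgresql://{0.RESULT_DB_USER}:{0.RESULT_DB_PASS}'
    '@{0.RESULT_DB_HOST}/{0.RESULT_DB_NAME}'
).format(conf)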

broker_transport

Defaults to 'sqs'.

broker_transport_options

Defaults to:

{
    'region': 'us-west-2',
    'queue_name_prefix': '{}-'.format(QUEUE_NAME_PREFIX)
}
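For example, with the prefix variable left at its default of 'dev' (see QUEUE_PREFIX under Internal Variables below), queue_name_prefix becomes 'dev-' and a queue declared as 'export' appears in SQS as 'dev-export'; the 'export' name here is purely illustrative.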
task_queues

Defaults to the following set of kombu.Queue objects, where queues is the configuration’s internal QUEUES variable, platform_queue is the PLATFORM_QUEUE_NAME value, and exchange is a kombu.Exchange object constructed from the task_default_exchange and task_default_exchange_type settings:

set([
    Queue('celery', exchange, routing_key='celery'),
    Queue(platform_queue, exchange, routing_key='#'),
] + [
    Queue(q_name, exchange, routing_key=q_name)
    for q_name in queues
])

Note: It is recommended that developers not alter this setting.

task_routes

Defaults to a function that generates a dict whose routing_key is the first segment of the task name split on '.' and whose exchange is a kombu.Exchange object constructed from the task_default_exchange and task_default_exchange_type settings.

Note: It is recommended that developers not alter this setting.
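For illustration only, a rough sketch of an equivalent router function (the function name and the example task name are hypothetical; the actual implementation ships with the toolbox):

from kombu import Exchange

exchange = Exchange('task_exchange', type='topic')

def route_task(name, args, kwargs, options, task=None, **kw):
    # A task named 'export.create_shapefile' would be routed with
    # routing_key 'export' on the topic exchange above.
    return {'routing_key': name.split('.')[0], 'exchange': exchange}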

task_default_exchange

Defaults to 'task_exchange'.

task_default_exchange_type

Defaults to 'topic'.

task_track_started

Defaults to True.

Internal Variables

Below are arguments and environment variables that can be used to customize the configuration provided above. By convention, all variables used to construct Celery configuration should be written entirely uppercase. Unless otherwise stated, all variables may be specified via argument or environment variable (with preference given to the argument).

QUEUES

This should contain an array of names for all service-related queues used by the Cadasta Platform. These values are used to construct the task_queues configuration. For the purposes of routing followup tasks, it’s important that every task consumer is aware of all available queues. For this reason, if a queue is used by any service worker then it should be specified within this array. It is not necessary to include the 'celery' or 'platform.fifo' queues. Defaults to the contents of the DEFAULT_QUEUES variable in the module’s `__init__.py file </cadasta/workertoolbox/__init__.py>`__.
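For example, if one service’s worker consumes an 'export' queue and another consumes a 'search' queue (both names hypothetical), every worker should list both names in QUEUES so that followup tasks can be routed to either service.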

PLATFORM_QUEUE_NAME

Defaults to 'platform.fifo'.

Note: It is recommended that developers not alter this setting.

CHORD_UNLOCK_MAX_RETRIES

Used to set the maximum number of times a celery.chord_unlock task may retry before giving up. See celery/celery#2725. Defaults to 43200, meaning the task gives up after roughly 12 hours (43200 retries × 1 second, assuming the task’s default_retry_delay is left at 1 second).

SETUP_FILE_LOGGING

Controls whether a default logging configuration should be applied to the application. At a bare minimum, this includes:

  • creating a console log handler for INFO level logs

  • a file log handler for INFO level logs, saved to app.info.log

  • a file log handler for ERROR level logs, saved to app.error.log

Note: This may be useful for debugging; in production, however, it is recommended to simply log to stdout (as is Celery’s default setup).
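As a rough sketch, the handlers described above are approximately equivalent to the following standard-library logging setup (exact formatters and logger names used by the toolbox may differ):

import logging

root = logging.getLogger()
root.setLevel(logging.INFO)

console = logging.StreamHandler()
console.setLevel(logging.INFO)

info_file = logging.FileHandler('app.info.log')
info_file.setLevel(logging.INFO)

error_file = logging.FileHandler('app.error.log')
error_file.setLevel(logging.ERROR)

for handler in (console, info_file, error_file):
    root.addHandler(handler)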

SETUP_OPBEAT_LOGGING

Defaults to True if all required environment variables are set, otherwise False. Controls whether Opbeat logging handlers should be set up for the application. The following environment variables are required for Opbeat logging to be set up automatically: OPBEAT_ORGANIZATION_ID, OPBEAT_APP_ID, OPBEAT_SECRET_TOKEN. If all of these are set, the Opbeat handlers are attached automatically.
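For illustration, the variables might be provided to the worker process like so (the placeholder values are hypothetical and would normally be set in the worker’s environment rather than in code):

import os

os.environ['OPBEAT_ORGANIZATION_ID'] = '<organization-id>'
os.environ['OPBEAT_APP_ID'] = '<app-id>'
os.environ['OPBEAT_SECRET_TOKEN'] = '<secret-token>'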

QUEUE_PREFIX

Used to populate the queue_name_prefix value of the connection’s broker_transport_options. Defaults to 'dev'.

RESULT_DB_USER

Used to populate the default result_backend template. Defaults to 'cadasta'.

RESULT_DB_PASS

Used to populate the default result_backend template. Defaults to 'cadasta'.

RESULT_DB_HOST

Used to populate the default result_backend template. Defaults to 'localhost'.

RESULT_DB_PORT

Used to populate the default result_backend template. Defaults to '5432'.

RESULT_DB_NAME

Used to populate the default result_backend template. Defaults to 'cadasta'.

cadasta.workertoolbox.setup.setup_app

After the Celery application is provided a configuration object, there are other setup steps that must follow to properly configure the application. For example, the exchanges and queues described in the configuration must be declared. This function calls those required followup procedures. Typically, it is called automatically by the `worker_init <http://docs.celeryproject.org/en/latest/userguide/signals.html#worker-init>`__ signal; however, it must be called manually by codebases that run only as task producers or from within a Python shell.

It takes two arguments:

  • app - A Celery() app instance. Required

  • throw - Boolean stipulating whether errors should be raised on failed setup. Otherwise, errors will simply be logged to the module logger at the exception level. Optional, default: True
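For a producer-only codebase or an interactive Python shell, calling it manually might look like the following (assuming, as in the testing example below, that a configured Celery() app instance lives in a parallel celery module):

from cadasta.workertoolbox.setup import setup_app

from .celery import app

setup_app(app)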

cadasta.workertoolbox.tests.build_functional_tests

When provided with a Celery app instance, this function generates a suite of functional tests to ensure that the provided application’s configuration and functionality conforms with the architecture of the Cadasta asynchronous system.

An example, where an instantiated and configured Celery() app instance exists in a parallel celery module:

from cadasta.workertoolbox.tests import build_functional_tests

from .celery import app

FunctionalTests = build_functional_tests(app)

To run these tests, use your standard test runner (e.g. pytest) or call manually from the command-line:

python -m unittest path/to/tests.py

Contributing

Testing

pip install -e .
pip install -r requirements-test.txt
./runtests

Deploying

pip install -r requirements-deploy.txt
python setup.py test clean build tag publish
