a pipeline framework for stream processing

a flexible stream processing framework supporting RabbitMQ, Pulsar, Kafka and Redis.

Features

  • at-least-once delivery, guaranteed by acknowledging every message

  • horizontally scalable through consumer groups

  • flow is controlled at deployment time: develop once, use it everywhere

  • testability provided through FILE and MEMORY input/output

Parameters

  • kind - specifies the underlying technology for the pipeline, for example KAFKA or RabbitMQ

  • MEM - memory-based queue (good for unit tests)

  • FILE - file-based queue (good for development and integration tests)

Generator

Generator is used when developing a data source in a pipeline. A source produces output without consuming any input; a crawler, for example, can be implemented as a generator.

>>> from pipeline import Generator, Message
>>>
>>> class MyGenerator(Generator):
...     def generate(self):
...         for i in range(10):
...             yield {'id': i}
>>>
>>> generator = MyGenerator('generator', '0.1.0', description='simple generator')
>>> generator.parse_args("--kind MEM --out-topic test".split())
>>> generator.start()
>>> [r.get('id') for r in generator.destination.results]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

Processor

Processor is used to process input. Modifications are made in place. A processor can produce one output for each input, or no output.

>>> from pipeline import Processor, Message
>>>
>>> class MyProcessor(Processor):
...     def process(self, msg):
...         msg.update({'processed': True})
...         return None
>>>
>>> processor = MyProcessor('processor', '0.1.0', description='simple processor')
>>> config = {'data': [{'id': 1}]}
>>> processor.parse_args("--kind MEM --in-topic test --out-topic test".split(), config=config)
>>> processor.start()
>>> [r.get('id') for r in processor.destination.results]
[1]

Splitter

Splitter is used when writing to multiple outputs. It takes a function that derives the output topic from the message being processed, and uses it when writing output.

>>> from pipeline import Splitter, Message
>>>
>>> class MySplitter(Splitter):
...     def get_topic(self, msg):
...         return '{}-{}'.format(self.destination.topic, msg.get('id'))
...
...     def process(self, msg):
...         msg.update({
...             'processed': True,
...         })
...         return None
>>>
>>> splitter = MySplitter('splitter', '0.1.0', description='simple splitter')
>>> config = {'data': [{'id': 1}]}
>>> splitter.parse_args("--kind MEM --in-topic test --out-topic test".split(), config=config)
>>> splitter.start()
>>> [r.get('id') for r in splitter.destinations['test-1'].results]
[1]

Usage

Writing a Worker

Choose Generator, Processor or Splitter to subclass from.
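For instance, a stand-alone worker module might look like the sketch below. The WordCount class, the word_count field, and the bare parse_args() call (assumed to fall back to the actual command line, as argparse does) are illustrative; only the Processor base class, the constructor signature, and the parse_args/start calls come from the examples above.

from pipeline import Processor

class WordCount(Processor):
    """Hypothetical worker that annotates each message with a word count."""

    def process(self, msg):
        # modify the message in place and return None, as in the Processor example
        msg['word_count'] = len(msg.get('text', '').split())
        return None

if __name__ == '__main__':
    worker = WordCount('word-count', '0.1.0', description='adds a word count')
    worker.parse_args()  # assumption: reads sys.argv / environment when no list is given
    worker.start()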

Environment Variables

The application accepts the following environment variables:

environment variable    command line argument    options
PIPELINE                --kind                   KAFKA, PULSAR, FILE
PULSAR                  --pulsar                 pulsar url
TENANT                  --tenant                 pulsar tenant
NAMESPACE               --namespace              pulsar namespace
SUBSCRIPTION            --subscription           pulsar subscription
KAFKA                   --kafka                  kafka url
GROUPID                 --group-id               kafka group id
INTOPIC                 --in-topic               topic to read
OUTTOPIC                --out-topic              topic to write to
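As an illustration, the same configuration can be supplied either on the command line or through the environment. The sketch below assumes the variables in the table are honoured whenever the corresponding flags are omitted; the Echo class, the topic names, and the localhost Kafka address are placeholders.

import os

from pipeline import Processor

class Echo(Processor):
    """Illustrative pass-through worker."""

    def process(self, msg):
        return None

# assumption: variables from the table above are read when the matching flags are omitted
os.environ.update({
    'PIPELINE': 'KAFKA',        # same as --kind KAFKA
    'KAFKA': 'localhost:9092',  # same as --kafka localhost:9092
    'GROUPID': 'echo',          # same as --group-id echo
    'INTOPIC': 'raw',           # same as --in-topic raw
    'OUTTOPIC': 'processed',    # same as --out-topic processed
})

worker = Echo('echo', '0.1.0', description='pass-through worker')
worker.parse_args([])  # no flags given: fall back to the environment
worker.start()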

Custom Code

Define add_arguments to add new command line arguments to the worker.

Define setup to run initialization code before the worker starts processing messages. setup is called after command line arguments have been parsed, so logic based on the parsed options belongs here.
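A sketch of how this might look, assuming add_arguments receives an argparse-style parser and that the parsed arguments are exposed as self.options (both assumptions, as is the --batch-size option itself):

from pipeline import Processor

class BatchingProcessor(Processor):
    def add_arguments(self, parser):
        # assumption: parser is an argparse parser; --batch-size is purely illustrative
        parser.add_argument('--batch-size', type=int, default=10,
                            help='number of messages to buffer before flushing')

    def setup(self):
        # runs after command line arguments have been parsed;
        # assumption: parsed arguments are available as self.options
        self.batch_size = self.options.batch_size
        self.buffer = []

    def process(self, msg):
        msg.update({'processed': True})
        return None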

Options

Errors

The None value shown above is the error you should return if dct or dcts is empty. The error will be sent to the errors topic together with worker information.

Contribute

Use pre-commit to run black and flake8.

Credits

Yifan Zhang (yzhang at hbku.edu.qa)
