
Simple data processing tool.

Project description

Datapiper provides a flexible, easy-to-use library for constructing and running simple batch data processing pipelines.

Give Datapiper your list of data processing callables and it will construct a runnable data pipeline for you.

If you instantiate the pipe with an (iterable) data source, you get a generator that reads from the source and yields processed data for you:

>>> from datapiper import Piper
>>> operations = [lambda context, data: data + 1]
>>> datasource = [1, 2, 3]
>>> p = Piper(operations, source=datasource)
>>> print(p)
pipe: source > <lambda>
>>> [r for r in p]
[2, 3, 4]
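
Several operations can be chained in a single pipeline. A minimal sketch (assuming the callables are applied in list order, which the examples here suggest but do not state explicitly):

>>> operations = [lambda context, data: data + 1,
...               lambda context, data: data * 2]
>>> p = Piper(operations, source=[1, 2, 3])
>>> [r for r in p]
[4, 6, 8]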

If you instead instantiate it with a (callable) data sink, you get a coroutine that accepts data from a producer and delivers processed data to the sink:

>>> operations = [lambda context, data: data + 1]
>>> results = []
>>> def datasink(data):
...     results.append(data)
...
>>> p = Piper(operations, sink=datasink)
>>> print(p)
pipe: <lambda> > sink
>>> for v in (1, 2, 3):
...     p.send(v)
...
>>> results
[2, 3, 4]

The context parameter passed to the data operation callables is meant for sharing state between them. It can be initialized to a desired value by passing it to the Piper class as an (optional) keyword argument. The context can be any object; a dictionary is recommended.
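
For example, a shared dictionary lets operations read configuration or accumulate state. A minimal sketch, assuming the keyword argument is named context (an assumed name; see the tests for the actual signature):

>>> ctx = {"offset": 10}
>>> operations = [lambda context, data: data + context["offset"]]
>>> p = Piper(operations, source=[1, 2, 3], context=ctx)
>>> [r for r in p]
[11, 12, 13]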

Please see the tests for more examples.

History

0.1.0 (2017-10-31)

  • First release on PyPI.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Filename             Size     File type  Python version  Upload date
datapiper-0.1.0.zip  23.7 kB  Source     None            Oct 31, 2017
