
=======
Databot
=======

* Data-driven programming framework
* Parallelized with coroutines and a ThreadPool
* Type- and content-based route functions


Installing
----------

Install and update using ``pip``:

``pip install -U databot``

Documentation
-------------

http://databot.readthedocs.io

Discuss
-------
https://groups.google.com/forum/#!forum/databotpy


What's data-driven programming?
===============================

All functions are connected by pipes (queues) and communicate by data.

When data arrives, the function is called and returns its result.

Think of the pipeline operation in Unix: ``ls | grep | sed``.

Benefits:

#. Decouple data and functionality
#. Easy to reuse

Databot provides Pipe and Route, which make data-driven programming and powerful data-flow processing easier.
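The Unix-pipeline analogy can be sketched in plain Python, with no Databot involved: each stage is an independent function, and data flowing through a queue drives each call. The stage names below are illustrative, not part of Databot's API.

.. code-block:: python

    from queue import Queue

    # Three independent stages, analogous to ``ls | grep | sed``.
    def ls():
        return ["a.txt", "b.py", "c.txt"]

    def grep(name):
        return name if name.endswith(".txt") else None

    def sed(name):
        return name.replace(".txt", ".bak")

    # Wire the stages together with a queue: incoming data drives each call.
    q = Queue()
    for item in ls():
        q.put(item)

    results = []
    while not q.empty():
        item = grep(q.get())
        if item is not None:
            results.append(sed(item))

    print(results)  # ['a.bak', 'c.bak']

Because the stages never call each other directly, each one can be reused in a different pipeline unchanged.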


Databot is...
=============

- **Simple**

Databot is easy to use and maintain, *does not need configuration files*, and knows about ``asyncio`` and how to parallelize computation.

Here's one of the simple applications you can make:

*Load the price of Bitcoin every 2 seconds.* A more advanced price-aggregator sample can be found `here <https://github.com/kkyon/databot/tree/master/examples>`_.


.. code-block:: python

    from databot import Pipe, Timer, BotFrame, HttpLoader, Bot

    def main():
        Pipe(
            Timer(delay=2),  # send timer data into the pipe every 2 seconds
            "http://api.coindesk.com/v1/bpi/currentprice.json",  # send the URL into the pipe when the timer triggers
            HttpLoader(),  # fetch the URL and load the HTTP response
            lambda r: r.json['bpi']['USD']['rate_float'],  # parse the response as JSON and extract the price
            print,  # print the result
        )

        Bot.render('simple_bitcoin_price')
        Bot.run()

    main()


- **Flow graph**

With the render function ``BotFrame.render('bitcoin_arbitrage')``, Databot renders the data-flow network as a Graphviz image.
Below is a flow graph generated by Databot: it aggregates the bitcoin price from six exchanges for trading.


.. image:: docs/bitcoin_arbitrage.png
:width: 400




- **Fast**

Nodes run in parallel, and they perform well when processing streaming data.
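The benefit of parallelizing slow, I/O-bound steps can be illustrated with the standard library alone; this sketch uses ``concurrent.futures`` rather than Databot's own scheduler:

.. code-block:: python

    import time
    from concurrent.futures import ThreadPoolExecutor

    def slow_square(x):
        time.sleep(0.1)  # simulate a slow, I/O-bound step
        return x * x

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=4) as pool:
        # 8 tasks on 4 workers take about two rounds of 0.1 s each,
        # instead of the 0.8 s a serial loop would need.
        results = list(pool.map(slow_square, range(8)))
    elapsed = time.monotonic() - start

    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]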



- **Replay-able**

With replay mode enabled (``config.replay_mode = True``), when an exception is raised at step N, you do not need to rerun from step 1 to step N.
Databot replays the data from the nearest completed node, usually step N-1.
This saves a lot of time during development.
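The idea behind replay can be sketched without Databot: cache each completed step's output, so after a failure at step N you resume from the cached result instead of recomputing earlier steps. The names below (``run_step``, ``expensive_fetch``) are illustrative, not Databot's API.

.. code-block:: python

    cache = {}

    def run_step(name, func, data):
        """Run a pipeline step, reusing a cached result if it already completed."""
        if name in cache:
            return cache[name]  # replay: skip recomputation
        result = func(data)
        cache[name] = result
        return result

    calls = []

    def expensive_fetch(url):
        calls.append(url)  # count how often the slow step really runs
        return {"rate_float": 6500.0}

    def fixed_parse(response):
        return response["rate_float"]

    data = run_step("fetch", expensive_fetch, "http://example.com/price")
    try:
        run_step("parse", lambda r: r["missing_key"], data)  # step N raises
    except KeyError:
        pass

    # After fixing the bug, rerunning reuses the cached fetch (step N-1).
    data = run_step("fetch", expensive_fetch, "http://example.com/price")
    price = run_step("parse", fixed_parse, data)

    print(len(calls))  # 1 -- the slow fetch ran only once
    print(price)       # 6500.0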

Release
=======

:**0.1.8**: http://docs.botflow.org/en/latest/change/0.1.8.html

#. Support parallelism in a ThreadPool for slow functions.

#. The Loop node is deprecated; raw values and iterable values can be used directly.

#. Improved performance of BlockedJoin.

:**0.1.7**:




More about Databot
==================

Data-driven programming is typically applied to streams of structured data for filtering, transforming, aggregating (such as computing statistics), or calling other programs.

Databot uses a few basic concepts to implement data-driven programming.

- **Pipe**

  The main stream process of the program. All units work inside a Pipe.

- **Node**

  A callable unit. Any callable function or object can work as a Node; it is driven by data. Custom functions work as Nodes.
  There are some built-in Nodes:

  * **Loop**: works as a **for** loop
  * **Timer**: sends a message into the pipe on a timer, controlled by the **delay**, **max_time**, and **until** parameters
  * **HttpLoader**: fetches a URL and returns the HTTP response
  * **MySQL query or insert**: for MySQL querying and inserting
  * **File read/write**: for file I/O

- **Route**

  Used to create a complex data-flow network, not just one main process. Databot can nest Routes inside Routes.
  It is a powerful concept.
  There are some built-in Routes:

  * **Branch**: duplicates data from the parent pipe to a branch
  * **Return**: duplicates data from the parent pipe and returns the final result to the parent pipe
  * **Filter**: drops data from the pipe if it does not match some condition
  * **Fork**: duplicates data to many branches
  * **Join**: duplicates data to many branches and returns the results to the pipe
  * **BlockedJoin**: waits for all branches to finish and merges the results into a tuple

All units (Pipe, Node, Route) communicate via queues and perform parallel computation in coroutines.
This is abstracted so that Databot can be used with only limited knowledge of ``asyncio``.
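That abstraction can be sketched with ``asyncio`` alone (a conceptual model, not Databot's actual internals): two units run concurrently, communicating only through a queue, with ``None`` as an illustrative end-of-stream marker.

.. code-block:: python

    import asyncio

    async def producer(queue):
        for i in range(3):
            await queue.put(i)   # push data downstream
        await queue.put(None)    # end-of-stream marker

    async def consumer(queue, out):
        while True:
            item = await queue.get()
            if item is None:
                break
            out.append(item * 10)  # the "node" transforms each item

    async def main():
        queue = asyncio.Queue()
        out = []
        # Both units run concurrently, coupled only by the queue.
        await asyncio.gather(producer(queue), consumer(queue, out))
        return out

    print(asyncio.run(main()))  # [0, 10, 20]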

The graphs below illustrate the basic concept of each Route:

* Branch: https://github.com/kkyon/databot/blob/master/docs/route/databot_branch.jpg
* Fork: https://github.com/kkyon/databot/blob/master/docs/route/databot_fork.jpg
* Join: https://github.com/kkyon/databot/blob/master/docs/route/databot_join.jpg
* Return: https://github.com/kkyon/databot/blob/master/docs/route/databot_return.jpg


Contributing
------------


Donate
------


Links
-----


