ArchiveTeam seesaw kit
An asynchronous toolkit for distributed web processing. Written in Python and named after its behavior, it supports concurrent downloads, uploads, etc.
Requires Python 2 or 3.
Needs the Tornado library for event-driven I/O. The complete list of required Python modules is in requirements.txt.
How to try it out
To run the example pipeline:

    sudo pip install -r requirements.txt
    ./run-pipeline --help
    ./run-pipeline examples/example-pipeline.py someone
Point your browser to
You can also use run-pipeline3 to be explicit about the Python version.
General idea: a set of Tasks that can be combined into a Pipeline that processes Items.
An Item is a thing that needs to be downloaded (a user, for example). It has properties that are filled by the Tasks.
A Task is a step in the download process: it takes an item, does something with it and passes it on. Example Tasks: getting an item name from the tracker, running a download script, rsyncing the result, notifying the tracker that it's done.
A Pipeline represents a sequence of Tasks. To make a seesaw script for a new project you'd specify a new Pipeline.
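As a rough illustration of the Item/Task/Pipeline pattern, here is a toy sketch in plain Python. The class and method names are made up for the sketch and are not the real seesaw API:

```python
# Toy sketch of the Item/Task/Pipeline pattern (not the real seesaw API).
class Item(dict):
    """A thing to be downloaded; Tasks fill in its properties."""


class Task:
    """A step in the download process: takes an item, passes it on."""
    def process(self, item):
        raise NotImplementedError


class GetItemName(Task):
    def process(self, item):
        item["item_name"] = "someone"  # would normally come from the tracker


class Download(Task):
    def process(self, item):
        item["result"] = "downloaded:" + item["item_name"]


class Pipeline:
    def __init__(self, *tasks):
        self.tasks = tasks

    def run(self, item):
        # Each task takes the item, does something, and passes it on.
        for task in self.tasks:
            task.process(item)
        return item


item = Pipeline(GetItemName(), Download()).run(Item())
print(item["result"])  # downloaded:someone
```

The real toolkit runs tasks asynchronously on an event loop; the synchronous loop here only shows how item properties accumulate as tasks hand the item along.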
A Task can work on multiple Items at a time (e.g., multiple Wget downloads). The concurrency can be limited by wrapping the task in a LimitConcurrent task: this will queue the items and run them one-by-one (e.g., a single Rsync upload).
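The effect of such a concurrency limit can be sketched with a plain semaphore. This is illustrative only (the real toolkit does this asynchronously on the Tornado event loop), and the names here are invented for the sketch:

```python
import threading

# Sketch: only one "upload" may run at a time, like a single Rsync slot.
upload_slot = threading.Semaphore(1)
active = 0
max_active = 0
lock = threading.Lock()


def upload(item):
    global active, max_active
    with upload_slot:  # later items queue here until the slot is free
        with lock:
            active += 1
            max_active = max(max_active, active)
        # ... the actual transfer would happen here ...
        with lock:
            active -= 1


threads = [threading.Thread(target=upload, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_active)  # 1: the semaphore serialized the uploads
```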
The Pipeline needs to be fed empty Item objects; by controlling the number of active Items you can limit how many items are in flight at once. (For example, add a new item each time an item leaves the pipeline.)
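One way to keep a fixed number of items in flight, sketched synchronously with a queue (hypothetical numbers and names; the real pipeline does this via callbacks as items finish):

```python
from collections import deque

TOTAL = 10     # items to process overall
IN_FLIGHT = 3  # how many items the pipeline holds at once

queue = deque(range(IN_FLIGHT))  # seed the pipeline with empty items
started = IN_FLIGHT
finished = 0

while queue:
    item = queue.popleft()
    # ... the tasks would process the item here ...
    finished += 1
    if started < TOTAL:  # an item left the pipeline: feed a new one in
        queue.append(started)
        started += 1

print(finished)  # 10
```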
With the ConfigValue classes it is possible to pass item-specific arguments to the Task objects. The value of these objects will be re-evaluated for each item. Examples: a path name that depends on the item name, a configurable bandwidth limit, the number of concurrent downloads.
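The idea of a per-item value can be sketched with a callable that is re-evaluated for each item. The helper and variable names below are invented for the sketch, not the real ConfigValue API:

```python
# Sketch: a "config value" is anything re-evaluated per item.
def realize(value, item):
    """Return value itself, or call it with the item if it is callable."""
    return value(item) if callable(value) else value


# A path that depends on the item name, and a fixed bandwidth limit.
target_path = lambda item: "/data/%s.warc.gz" % item["item_name"]
bandwidth_limit = 250  # kB/s, same for every item

item = {"item_name": "someone"}
print(realize(target_path, item))      # /data/someone.warc.gz
print(realize(bandwidth_limit, item))  # 250
```

A task evaluates its arguments through such a helper just before processing each item, so the same pipeline definition yields item-specific paths or limits.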
Consult the wiki for more information.