Waldorf, a distributed computing package based on Celery
It can speed up algorithms such as Monte Carlo Tree Search (MCTS) by spreading concurrent sub-tasks, written as Python functions, across multiple machines and automatically collecting their outputs. Waldorf can also be used to implement MapReduce-style workflows.
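The MapReduce pattern mentioned above can be illustrated with a plain-Python analogue. Note that this sketch uses the standard library's concurrent.futures rather than Waldorf's API, and the text chunks and word-count task are made up for the example; with Waldorf, the map calls would be spread across slave machines instead of local threads.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def map_task(chunk):
    # "Map" phase: count words in one chunk of text.
    return len(chunk.split())

def reduce_task(a, b):
    # "Reduce" phase: combine two partial counts.
    return a + b

chunks = ["one two three", "four five", "six"]
with ThreadPoolExecutor() as pool:
    # Waldorf would spread these calls across slave machines;
    # here a local thread pool stands in for the cluster.
    partials = list(pool.map(map_task, chunks))

total = reduce(reduce_task, partials)
print(total)  # 6
```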
Although Waldorf can be deployed on cloud servers, our emphasis at the moment is on utilizing the spare CPU capacity of a commodity PC cluster (e.g. normal office workstations). Support for GPUs may be included in a future release.
Waldorf uses a master node to pass messages between a client and slave nodes.
A client can create a task as a Python function on their local machine. Waldorf sends tasks to a network of slave machines for execution using the Celery task queue. With Celery alone, tasks typically must be defined in advance; Waldorf allows tasks to be defined dynamically, with no slave restarts required.
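As a rough analogue of this submit-with-callback flow, here is a sketch using the standard library's concurrent.futures rather than Waldorf's actual API; the squaring task is made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def task(x):
    # A task defined at runtime on the client side; Waldorf ships
    # such functions to slaves dynamically, no restart needed.
    return x * x

def on_done(future):
    # Callback fired when a task's result comes back.
    results.append(future.result())

with ThreadPoolExecutor(max_workers=2) as pool:
    for x in range(4):
        pool.submit(task, x).add_done_callback(on_done)

# All tasks (and their callbacks) complete before the executor exits.
print(sorted(results))  # [0, 1, 4, 9]
```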
Multiple clients can run their tasks simultaneously without conflict.
Clients can adjust how many CPU cores are used on slave machines to perform calculations. This can be done dynamically from the Waldorf administration webpage.
You can use Waldorf on any task that requires parallel computing.
One of its many uses is to compute rollouts in an MCTS simulation (for example, in game-playing AIs).
Here is a simple illustration:
def rollout(args):
    # Do one rollout
    ...

def backup(result):
    # Backup and handle result
    ...

def mcts_search():
    for _ in range(num_iters):
        # Select action
        action = select()
        ...
        # Submit rollout job to the Waldorf client
        client.submit(rollout, args, callback=backup)
        ...
    # More code ...
For a more complex example, check out the gym demo.
Waldorf is still research code, so it may be somewhat lacking in documentation and support. Any feedback is welcome.