A watcher for multiprocessing queues and processes, with a bunch of handy methods to deal with parallel execution.
The multiwatch.run(processes, queues, exit_event, sleep, report_interval, output) function manages the lifecycle of the processes. At regular intervals given by report_interval (in seconds), it writes to a file information about CPU and memory usage, the running processes, and the number of elements in each queue.
The individual processes, defined through the multiwatch.RunnerProcess class, can also specify a retry strategy: when they fail, the runner restarts them automatically.
The multiwatch.setup_sigterm(sigterm_event, exit_event) function ensures the processes are terminated gracefully on SIGINT and SIGTERM signals.
Any process can also set the exit_event, which propagates to every other process an instruction to stop what it is doing.
A common pattern I use in my applications consists of reading lines from stdin, transforming them, and adding them to a queue so that other processes can consume them.
This can be done in one line of code with multiwatch.read_file_into_queue(file, exit_event, queue, transform). It works in a non-blocking way: the application can still be terminated gracefully through SIGINT and SIGTERM signals (or by setting the exit_event). In a dedicated article, I explain how this is done, and why the common alternatives don't work.
If you want to have SVN access to the official repository in order to contribute to the project, contact me at firstname.lastname@example.org. If you find it more convenient to clone the source on GitHub, you can do that too.