Python multiprocessing fork with improvements and bugfixes
billiard is a fork of the Python 2.7 multiprocessing
package. The multiprocessing package itself is a renamed and updated version of
R Oudkerk’s pyprocessing package.
This standalone variant is intended to be compatible with Python 2.4 and 2.5,
and will draw its fixes/improvements from python-trunk.
- This package would not be possible without the contributions of not only the current maintainers, but also all of the contributors to the original pyprocessing package.
- It is also a fork of the multiprocessing backport package by Christian Heimes.
- It includes the no-execv patch contributed by R. Oudkerk.
- And the Pool improvements previously located in Celery.
2.7.3.17 - 2012-09-26
- Fixes typo
2.7.3.16 - 2012-09-26
- Windows: Fixes for SemLock._rebuild (Issue #24).
- Pool: Job terminated with terminate_job now raises billiard.exceptions.Terminated.
2.7.3.15 - 2012-09-21
- Windows: Fixes unpickling of SemLock when using fallback.
- Windows: Fixes installation when no C compiler.
2.7.3.14 - 2012-09-20
- Installation now works again for Python 3.
2.7.3.13 - 2012-09-14
- Merged with Python trunk (many authors, many fixes: see the Python changelog in trunk).
- Using execv now also works with older Django projects using setup_environ (Issue #10).
- Billiard now installs with a warning if the C extension could not be built, e.g. when no compiler is installed or the build fails in some other way.
  It is still recommended to have the C extension installed when running with force execv, but this change makes it easier to install.
- Pool: Hard timeouts now send KILL shortly after TERM, so that C extensions cannot block the termination.
  Python signal handlers are called in the interpreter, so they cannot run while a C extension is blocking the interpreter.
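The TERM-then-KILL behaviour described above can be sketched with the standard library. This is a hypothetical illustration of the pattern, not billiard's actual implementation, and it assumes a POSIX platform:

```python
import time

def hard_terminate(proc, grace=1.0):
    """Ask `proc` (a subprocess.Popen) to exit with SIGTERM, then send
    SIGKILL if it is still alive after `grace` seconds.  SIGKILL cannot
    be blocked, so a worker stuck in native C code still goes away."""
    proc.terminate()                      # polite: SIGTERM
    deadline = time.monotonic() + grace
    while time.monotonic() < deadline:
        if proc.poll() is not None:
            return "terminated"           # exited on TERM in time
        time.sleep(0.05)
    proc.kill()                           # forceful: SIGKILL
    proc.wait()
    return "killed"
```

A worker that installs a handler for (or blocks) TERM takes the second branch; a cooperative worker exits on the first signal.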
- Now uses a timeout value for Thread.join that does not exceed the maximum allowed on some platforms.
- Fixed a bug in the SemLock fallback used when the C extension is not installed.
  Fix contributed by Mher Movsisyan.
- Pool: Now sets a Process.index attribute for every process in the pool.
  This number will always be between 0 and concurrency - 1, and can be used to e.g. create a logfile for each process in the pool without creating a new logfile whenever a process is replaced.
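For example, the stable slot number makes per-worker logfile naming straightforward. The helper below is a hypothetical sketch, not part of billiard's API:

```python
import os

def logfile_for(index, logdir="logs"):
    """Map a pool slot index (0..concurrency-1) to a logfile path.

    The path stays stable across worker restarts, because a
    replacement worker is assigned the same slot index."""
    return os.path.join(logdir, "worker-%d.log" % index)
```

With billiard installed, a pool initializer could then open ``logfile_for(billiard.current_process().index)``, assuming the index attribute is visible from inside the worker as this changelog entry implies.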
2.7.3.12 - 2012-08-05
- Fixed Python 2.5 compatibility issue.
- New Pool.terminate_job(pid) to terminate a job without raising WorkerLostError
2.7.3.11 - 2012-08-01
- Adds support for FreeBSD 7+.
  Fix contributed by koobs.
- Pool: The new allow_restart argument must now be set to enable the pool process sentinel, which is needed to restart the pool.
  It is disabled by default, which reduces the number of file descriptors/semaphores required to run the pool.
- Pool: Now emits a warning if a worker process exited with an error code.
  But not if the exit code is 155, which is now returned when the worker process was recycled (maxtasksperchild).
- Python 3 compatibility fixes.
- Python 2.5 compatibility fixes.
2.7.3.10 - 2012-06-26
- The TimeLimitExceeded exception string representation only included the seconds as a number; it now gives a more human-friendly description.
- Fixed typo in LaxBoundedSemaphore.shrink.
- Pool: ResultHandler.handle_event no longer requires any arguments.
- ``setup.py bdist`` now works.
2.7.3.9 - 2012-06-03
- The MP_MAIN_FILE environment variable is now set to the path of the __main__ module when execv is enabled.
- Pool: Errors occurring in the TaskHandler are now reported.
2.7.3.8 - 2012-06-01
- Can now be installed on Py 3.2.
- Issue #12091: simplify ApplyResult and MapResult with threading.Event.
  Patch by Charles-François Natali.
- Pool: Support running without the TimeoutHandler thread.

  The with_*_thread arguments have also been replaced with a single ``threads`` boolean keyword argument.

  Two new pool callbacks:

  - ``on_timeout_set(job, soft, hard)``

    Applied when a task is executed with a timeout.

  - ``on_timeout_cancel(job)``

    Applied when a timeout is cancelled (the job completed).
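A minimal sketch of what such callbacks might look like, assuming the set callback receives ``(job, soft, hard)`` and the cancel callback receives only the job; the bodies here are invented for illustration and are not billiard code:

```python
# Track which soft/hard limits are currently armed, per job.
active_timeouts = {}

def on_timeout_set(job, soft, hard):
    # Called when a task starts executing with a time limit.
    active_timeouts[job] = (soft, hard)

def on_timeout_cancel(job):
    # Called when the limit is cancelled because the task completed.
    active_timeouts.pop(job, None)
```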
2.7.3.7 - 2012-05-21
- Fixes Python 2.5 support.
2.7.3.6 - 2012-05-21
- Pool: Can now be used in an event loop, without starting the supporting threads (TimeoutHandler still not supported).

  To facilitate this the pool has gained the following keyword arguments:

  - ``on_process_up``

    Callback called with a Process instance as argument whenever a new worker process is added.

    Used to add new process fds to the event loop::

        def on_process_up(proc):
            hub.add_reader(proc.sentinel, pool.maintain_pool)

  - ``on_process_down``

    Callback called with a Process instance as argument whenever a worker process is found dead.

    Used to remove process fds from the event loop::

        def on_process_down(proc):
            hub.remove(proc.sentinel)

  - ``semaphore``

    Sets the semaphore used to protect from adding new items to the pool when no processes are available. The default is a threaded one, so this can be used to change to an async semaphore.

  And the following attributes:

  - ``readers``

    A map of ``fd`` -> ``callback``, to be registered in an event loop. Currently this is only the result outqueue, with a callback that processes all currently incoming results.

  And the following methods:

  - ``did_start_ok``

    To be called after starting the pool, and after setting up the event loop with the pool fds, to ensure that the worker processes did not immediately exit because of an error (internal/memory).

  - ``maintain_pool``

    Public version of ``_maintain_pool`` that handles max restarts.
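Wiring these hooks into an event loop might look like the sketch below. ``Hub`` is a toy stand-in built on the standard selectors module, written only for this example; the commented-out Pool construction shows where the billiard keyword arguments described above would plug in:

```python
import selectors

class Hub:
    """Minimal event-loop registry: maps fds to read callbacks."""

    def __init__(self):
        self.selector = selectors.DefaultSelector()

    def add_reader(self, fd, callback):
        # invoke `callback` whenever `fd` becomes readable
        self.selector.register(fd, selectors.EVENT_READ, callback)

    def remove(self, fd):
        try:
            self.selector.unregister(fd)
        except KeyError:
            pass  # fd was never registered (or already removed)

hub = Hub()

# With billiard installed, using the keyword arguments described above:
# pool = billiard.pool.Pool(
#     processes=4,
#     on_process_up=lambda proc: hub.add_reader(proc.sentinel,
#                                               pool.maintain_pool),
#     on_process_down=lambda proc: hub.remove(proc.sentinel),
# )
```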
- Pool: The too-frequent-restart protection now only counts restarts where the process had a non-successful exit code.

  This takes the maxtasksperchild option into account, allowing processes to exit cleanly on their own.

- Pool: New max_restart + max_restart_freq options.

  This means that the supervisor can't restart processes faster than max_restart times per max_restart_freq seconds (like the Erlang supervisor maxR & maxT settings).

  The pool is closed and joined if the max restart frequency is exceeded, where previously it would keep restarting at an unlimited rate, possibly crashing the system.

  The current default is to stop if the pool exceeds 100 * process_count restarts in 1 second. This may change later.

  Only processes with an unsuccessful exit code are counted, to take into account the maxtasksperchild setting and code that exits voluntarily.

- Pool: The WorkerLostError message now includes the exit code of the process that disappeared.
2.7.3.5 - 2012-05-09
- Now always cleans up after sys.exc_info() to avoid cyclic references.
- ExceptionInfo without arguments now defaults to sys.exc_info.
- Forking can now be disabled using the MULTIPROCESSING_FORKING_DISABLE environment variable.
  Also, this envvar is set so that the behavior is inherited after execv.
- The semaphore cleanup process started when execv is used now sets a useful process name if the setproctitle module is installed.
- Sets the FORKED_BY_MULTIPROCESSING environment variable if forking is disabled.
2.7.3.4 - 2012-04-27
- Raises NotImplementedError if the platform does not support multiprocessing (e.g. Jython).
2.7.3.3 - 2012-04-23
- PyPy now falls back to using its internal _multiprocessing module, so everything works except for forking_enable(False) (which silently degrades).
- Fixed Python 2.5 compat. issues.
- Uses more with statements
- Merged some of the changes from the Python 3 branch.
2.7.3.2 - 2012-04-20
- Now installs on PyPy/Jython (but does not work).
2.7.3.1 - 2012-04-20
- Python 2.5 support added.
2.7.3.0 - 2012-04-20
- Updated from Python 2.7.3
- Python 2.4 support removed, now only supports 2.5, 2.6 and 2.7. (may consider py3k support at some point).
- Pool improvements from Celery.
- no-execv patch added (http://bugs.python.org/issue8713)