More Threads! Simpler and faster threading.

More Threads!

Module threads

The main benefits over Python's threading library are:

  1. Multi-threaded queues do not use serialization - Serialization is great in the general case, where you may also be communicating between processes, but it is needless overhead for single-process multi-threading. It is left to the programmer to ensure the messages put on the queue are not changed, which is not an onerous demand.
  2. Shutdown order is deterministic and explicit - Python's threading library is missing strict conventions for controlled and orderly shutdown. This library's conventions eliminate the need for interrupt() and abort(), both of which are unstable idioms when resources are involved. Each thread can shut down on its own terms, but is expected to do so expediently.
  • All threads are required to accept a please_stop signal; they are expected to test it in a timely manner, and to exit when signalled.
  • All threads have a parent - The parent is responsible for ensuring its children get the please_stop signal, and are dead, before stopping itself. This responsibility is baked into the thread-spawning process, so you need not deal with it unless you want to (see the sketch after this list).
  3. Uses Signals to simplify logical dependencies among multiple threads, events, and timeouts.
  4. Logging and Profiling are Integrated - Logging and exception handling are seamlessly integrated: logs are centrally handled and thread safe, parent threads have access to uncaught child-thread exceptions, and the cProfiler properly aggregates results from the multiple threads.
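
A minimal sketch of these shutdown conventions, assuming the module imports as mo_threads and that Thread.run(name, target), stop(), and join() are the spawning and shutdown calls (treat these names and signatures as assumptions, not a definitive API):

    from mo_threads import Thread, Till

    def worker(please_stop):
        # every thread target accepts a please_stop Signal and tests it promptly
        while not please_stop:
            # DO WORK, then pause briefly, waking early if please_stop goes
            (please_stop | Till(seconds=1)).wait()

    child = Thread.run("worker", worker)   # the parent spawns the child
    # ... when the parent is shutting down, it stops its children first
    child.stop()    # raise the child's please_stop signal
    child.join()    # wait until the child is dead before the parent stops itself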

What's it used for

A good amount of time is spent waiting for underlying C libraries and OS services to respond to network and file-access requests. Multiple threads can make your code faster, despite the GIL, when dealing with those requests. For example, by moving logging off the main thread, we can get up to a 15% increase in overall speed because the main thread no longer waits on disk writes or remote logging posts. Please note: this level of speed improvement can only be realized if there is no serialization happening at the multi-threaded queue.

Asynch vs. Actors

My personal belief is that actors are easier to reason about than asynch tasks. Mixing regular methods and co-routines (with their yield from pollution) is dangerous because:

  1. calling styles between methods and co-routines can easily be confused
  2. actors can use blocking methods, co-routines cannot
  3. there is no way to manage resource priority with co-routines
  4. stack traces are lost with co-routines
  5. asynch scope easily escapes lexical scope, which promotes bugs

Python's asynch efforts are a still-immature re-invention of threading functionality by another name. Expect to experience a decade of problems that are already solved by threading; here is an example.

Synchronization Primitives

There are three major aspects of a synchronization primitive:

  • Resource - Monitors and locks can only be owned by one thread at a time
  • Binary - The primitive has only two states
  • Irreversible - The state of the primitive can only be set, or advanced, never reversed

The last, irreversibility, is very useful but ignored in many threading libraries. Irreversibility allows us to model progression: threads can poll for progress, or be notified of it.

These three aspects can be combined to give us 8 synchronization primitives:

  Resource  Binary  Irreversible  Primitive
  -         -       -             Semaphore
  -         B       -             Binary Semaphore
  R         -       -             Monitor
  R         B       -             Lock
  -         -       I             Iterator/generator
  -         B       I             Signal
  R         -       I             Private Iterator
  R         B       I             Private Signal (best implemented as is_done Boolean flag)

Lock Class

Locks are identical to threading monitors, except for two differences:

  1. The wait() method will always acquire the lock before returning. This is an important feature: it ensures every line inside a with block holds the lock, which is easier to reason about.
  2. Exiting a lock via __exit__() will always signal a waiting thread to resume. This ensures no signals are missed, and every thread gets an opportunity to react to possible change.
    lock = Lock()
    while not please_stop:
        with lock:
            while not todo:
                lock.wait(seconds=1)
            # DO SOME WORK

In this example, we look for todo items, and if there are none, we wait for a second. During that time other threads can acquire the lock and add todo items. When such a thread releases the lock, our example code immediately resumes to see what is available, waiting again if nothing is found.
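
As a companion sketch (not from the library's documentation): a hypothetical add_work() helper showing the other side of this exchange, appending to the shared todo list under the same lock.

    # Hypothetical producer counterpart to the loop above; `lock` and `todo`
    # are the same objects shared with the waiting thread.
    def add_work(item):
        with lock:
            todo.append(item)
        # leaving the `with` block calls __exit__(), which wakes a waiting thread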

Signal Class

The Signal class is a binary semaphore that can be signalled only once; subsequent signals have no effect. It can be signalled by any thread; any thread can wait on a Signal; and once signalled, all waiting threads are unblocked, including all subsequent waiting threads. A Signal's current state can be accessed by any thread without blocking. Signal is used to model thread-safe state advancement. It initializes to False, and when signalled (with go()) becomes True. It cannot be reversed.

A Signal is like a Promise, but focused on third-party manipulation.

  Signal        Promise
  s.go()        s.resolve()
  s.on_go(f)    s.then(m)
  s.wait()      await s
  s & t         Promise.all(s, t)
  s | t         Promise.race(s, t)

A common pattern is for a worker to hand out a Signal that it will trigger when its work is done:

    is_done = Signal()
    yield is_done  # give signal to another that wants to know when done
    # DO WORK
    is_done.go()

You can attach methods to a Signal; they will be run, just once, upon go(). If the Signal has already been signalled, the method is run immediately.

    is_done = Signal()
    is_done.on_go(lambda: print("done"))
    return is_done

You may also wait on a Signal, which will block the current thread until the Signal is a go:

    is_done = worker_thread.stopped
    is_done.wait()
    print("worker thread is done")

Signals are first class: they can be passed around and combined with other Signals. For example, using the __or__ operator (|): either = lhs | rhs; either will be triggered when lhs or rhs is triggered.

    def worker(please_stop):
        while not please_stop:
            ...  # DO WORK

    user_cancel = get_user_cancel_signal()
    worker(user_cancel | Till(seconds=360))

Signals can also be combined using logical and (&): both = lhs & rhs; both is triggered only when both lhs and rhs are triggered:

    (workerA.stopped & workerB.stopped).wait()
    print("both threads are done")

Till Class

The Till class is a special Signal used to represent timeouts.

    Till(seconds=20).wait()
    Till(till=Date("21 Jan 2016").unix).wait()

Use Till rather than sleep() because you can combine Till objects with other Signals.
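
For example, here is a minimal sketch of a timed pause that also respects shutdown, assuming a please_stop Signal is in scope:

    # pause for up to 20 seconds, but wake immediately if asked to stop
    (please_stop | Till(seconds=20)).wait()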

Beware that all Till objects will be triggered before expiry when the main thread is asked to shut down.


