Inter-process lock recipes that play nicely with Curator
Project description
A Python port of the Shared Reentrant Read Write Lock recipe from Curator. This package depends on kazoo to handle the ZooKeeper bits.
At this point, you’re probably wondering why I didn’t just use the existing locks recipe from kazoo. The original goal was to make Curator and kazoo respect each other’s locks such that:
Either could acquire a read lock, so long as neither held a write lock
Either could acquire a write lock, blocking the other from acquiring either read or write locks
I first attempted to patch the locks recipe in kazoo, but its internals are quite different. (Read: I wasn’t able to make it work correctly.)
The reason this is necessary (at least for me) is that some code was written in Scala using Curator, while other code was written in Python using kazoo.
Installation
pip install kazurator==0.1.1
Usage
There are two main use cases for this package. Both of them relate to creating an inter-process critical region.
Inter Process Mutex
To start, let’s take a look at how we could implement a simple shared (across process) mutex:
from kazoo.client import KazooClient
from kazurator import Mutex

def main():
    client = KazooClient(hosts="YOUR_ZK_CONNECT_STRING_HERE")
    client.start()

    mutex = Mutex(client, "/some/path")
    with mutex:
        pass  # do your thread-safe thing here

    client.stop()
This example assumes only a single thread should be in the critical region at a time. To support simultaneous multi-threaded access, set the max_leases kwarg to a higher number. For example:
mutex = Mutex(client, "/some/path", max_leases=2)  # 2 threads at a time
Also, if you’d rather not use the context manager protocol, you can call acquire and release directly.
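A quick sketch of that explicit style (`run_critical_section` is a hypothetical helper, not part of this package; `mutex` can be anything exposing `acquire`/`release`, including the `Mutex` above):

```python
def run_critical_section(mutex, work):
    """Run `work` inside the critical region guarded by `mutex`,
    releasing the lock even if `work` raises."""
    mutex.acquire()
    try:
        return work()
    finally:
        mutex.release()
```

This mirrors what the `with` statement does for you; prefer the context manager unless you need to hold the lock across non-lexical scopes.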
Inter Process Read Write Lock
In some cases, you’ll need to support an unlimited number of read locks, but only a single write lock. For example, suppose you were processing some HDFS paths by altering the format and replacing the data (totally hypothetical of course :smile:).
You’d want any consumers of the data to acquire a read lock. This would prevent the altering process from acquiring a write lock until the consumer(s) are finished. Similarly, the consumers wouldn’t be able to acquire read locks until the altering process removes the write lock.
Consumers will block until the lock is available, or time out after the specified timeout (default is 1s), at which point a kazoo.exceptions.LockTimeout is raised.
from kazoo.client import KazooClient
from kazurator import ReadWriteLock

def main():
    client = KazooClient(hosts="YOUR_ZK_CONNECT_STRING_HERE")
    client.start()

    # can optionally supply the `timeout` kwarg as well
    lock = ReadWriteLock(client, "/some/path")

    with lock.write_lock:  # block until the write lock is available
        pass  # do your thread-safe thing here

    with lock.read_lock:  # block until the write lock is gone
        pass  # do your thread-safe thing here

    client.stop()
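Because acquisition can raise on timeout, callers often wrap the critical region in a small retry loop. A minimal sketch: `with_lock` is a hypothetical helper, not part of this package, and `LockTimeout` is defined locally as a stand-in so the snippet stays self-contained (with kazoo installed you would import `kazoo.exceptions.LockTimeout` instead):

```python
class LockTimeout(Exception):
    """Stand-in for kazoo.exceptions.LockTimeout."""

def with_lock(lock, work, attempts=3):
    """Try to enter the critical region up to `attempts` times,
    re-raising LockTimeout once every attempt has failed."""
    for attempt in range(attempts):
        try:
            with lock:  # blocks until acquired, or raises LockTimeout
                return work()
        except LockTimeout:
            if attempt == attempts - 1:
                raise
```

The same wrapper works for `lock.read_lock` and `lock.write_lock` alike, since both are used as context managers.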
Development
Clone this repo and pip install -r requirements.txt
Run the tests with script/test nosetests
Running the tests will spawn a Docker container to run ZooKeeper in. It will be shut down automatically at the end of the run.
Hashes for kazurator-0.1.1-py2.py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 1d78360e0d076bce38aa2c2fa70209ea636db1c48e21ca98915b936383cf6653 |
| MD5 | ca1e82b484444e2b62bc1009cdb43e12 |
| BLAKE2b-256 | 27fd101cd55b2b583da3fc79b4c674b83cd0b7766e9786090e0eb0b9bd7bbb5c |