Inter-process lock recipes that play nice with curator
Project description
A Python port of the Shared Reentrant Read Write Lock recipe from curator. This package depends on kazoo for handling the ZooKeeper bits.
At this point, you’re probably wondering why I didn’t just use the existing locks recipe from kazoo. The original goal was to make curator and kazoo respect each other’s locks such that:
Either could acquire a read lock, so long as neither held a write lock
Either could acquire a write lock, which would block the other from acquiring either a read or a write lock
I first attempted to patch the locks recipe in kazoo, but the internals are a bit different. (READ: I wasn’t able to make it work right).
The reason this is necessary (at least for me) is that some code was written in Scala using curator, and other code was written in Python using kazoo.
Installation
pip install kazurator==0.2.0
Usage
There are two main use cases for this package. Both of them relate to creating an inter-process critical region.
Example of Interop with Curator
See the example directory.
Inter Process Mutex
To start, let’s take a look at how we could implement a simple mutex that is shared across processes:
from kazoo.client import KazooClient
from kazurator import Mutex

def main():
    client = KazooClient(hosts="YOUR_ZK_CONNECT_STRING_HERE")
    client.start()

    mutex = Mutex(client, "/some/path")

    with mutex:
        # do your thread-safe thing here
        pass

    client.stop()
This example assumes you want a single thread to be in the critical region. In order to support simultaneous multi-threaded access, you can set the max_leases kwarg to a higher number. For example:
mutex = Mutex(client, "/some/path", max_leases=2)  # 2 threads at a time
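For instance, here is a minimal sketch (reusing the client from the example above, and assuming each worker finishes well within the default acquisition timeout) in which up to two threads sit inside the critical region at once:

import threading

def worker(n):
    with mutex:  # at most max_leases (here 2) threads hold the lock at a time
        print("worker %d is inside the critical region" % n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()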
Also, if you’d rather not use the context manager protocol, you can call acquire and release directly.
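A minimal sketch of that explicit form, assuming acquire and release can be called with no arguments (the try/finally pattern is the point here):

mutex = Mutex(client, "/some/path")

mutex.acquire()
try:
    # do your thread-safe thing here
    pass
finally:
    mutex.release()  # always release, even if the critical section raises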
Inter Process Read Write Lock
In some cases, you’ll need to support an unlimited number of read locks, but only a single write lock. For example, suppose you were processing some HDFS paths by altering the format and replacing the data (totally hypothetical of course :smile:).
You’d want any consumers of the data to acquire a read lock. This would prevent the altering process from acquiring a write lock until the consumer(s) are finished. Similarly, the consumers wouldn’t be able to acquire read locks until the altering process removes the write lock.
Consumers will block until the lock is available, or time out after the specified timeout (the default is 1 second), at which point a kazoo LockTimeout is raised.
from kazoo.client import KazooClient
from kazurator import ReadWriteLock

def main():
    client = KazooClient(hosts="YOUR_ZK_CONNECT_STRING_HERE")
    client.start()

    # can optionally supply `timeout` kwarg as well
    lock = ReadWriteLock(client, "/some/path")

    with lock.write_lock:  # block until the write lock is available
        # do your thread-safe thing here
        pass

    with lock.read_lock:  # block until the write lock is gone
        # do your thread-safe thing here
        pass

    client.stop()
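If the lock cannot be acquired within the timeout, the LockTimeout mentioned above is raised. A small sketch of handling that case, assuming the exception is kazoo's kazoo.exceptions.LockTimeout and that the optional timeout kwarg is accepted as shown:

from kazoo.exceptions import LockTimeout

lock = ReadWriteLock(client, "/some/path", timeout=5.0)  # assumed: wait up to 5 seconds

try:
    with lock.read_lock:
        # do your thread-safe thing here
        pass
except LockTimeout:
    # a writer still holds the lock; back off, retry, or give up
    print("could not acquire the read lock in time")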
Development
Clone this repo and run pip install -r requirements.txt
Run the tests: script/test nosetests
Running the tests will spawn a Docker container to run ZooKeeper in. It will be shut down automatically at the end of the run.
Download files
Source Distribution
Built Distribution
File details
Details for the file kazurator-0.2.0.tar.gz
File metadata
- Download URL: kazurator-0.2.0.tar.gz
- Upload date:
- Size: 9.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
File hashes
Algorithm | Hash digest
---|---
SHA256 | 21b860f067ae24bc41bf0f0026489daef1324db0b101b3e974c0cfa2d9aaf240
MD5 | fe72b316e8dc143b9f67a393739bc7ea
BLAKE2b-256 | 1eeaff05e74d950cf168606cef092e63d8777895ed1627a4b445258c65771a39
File details
Details for the file kazurator-0.2.0-py2.py3-none-any.whl
File metadata
- Download URL: kazurator-0.2.0-py2.py3-none-any.whl
- Upload date:
- Size: 14.5 kB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
File hashes
Algorithm | Hash digest
---|---
SHA256 | 2208a597e7c916c758e05f9fdc366ebf034802975547240311cae53a2d50b316
MD5 | 6427d07c2eb438854880f85ed10259bb
BLAKE2b-256 | 5f415e16c16214ba437eb865c2f86547942a505ede1492651a730b09f54d0398