
dflockd-client

A Python client library for dflockd — a lightweight distributed lock server with FIFO ordering, automatic lease expiry, and background renewal.

Read the docs here

Installation

pip install dflockd-client

Or with uv:

uv add dflockd-client

Quick start

Async client

import asyncio
from dflockd_client.client import DistributedLock

async def main():
    async with DistributedLock("my-key", acquire_timeout_s=10) as lock:
        print(lock.token, lock.lease)
        # critical section — lease auto-renews in background

asyncio.run(main())

Sync client

from dflockd_client.sync_client import DistributedLock

with DistributedLock("my-key", acquire_timeout_s=10) as lock:
    print(lock.token, lock.lease)
    # critical section — lease auto-renews in background thread

Manual acquire/release

Both clients support explicit acquire() / release() calls outside a context manager:

from dflockd_client.sync_client import DistributedLock

lock = DistributedLock("my-key")
if lock.acquire():
    try:
        pass  # critical section
    finally:
        lock.release()

Two-phase lock acquisition

The enqueue() / wait() methods split lock acquisition into two steps, allowing you to notify an external system after joining the queue but before blocking:

from dflockd_client.sync_client import DistributedLock

lock = DistributedLock("my-key")
status = lock.enqueue()       # join queue, returns "acquired" or "queued"
notify_external_system()      # your application logic here
if lock.wait(timeout_s=10):   # block until granted (no-op if already acquired)
    try:
        pass  # critical section
    finally:
        lock.release()

Async equivalent (run inside a coroutine):

from dflockd_client.client import DistributedLock

lock = DistributedLock("my-key")
status = await lock.enqueue()
await notify_external_system()
if await lock.wait(timeout_s=10):
    try:
        pass  # critical section
    finally:
        await lock.release()

Parameters

Parameter           Default                 Description
key                 (required)              Lock name
acquire_timeout_s   10                      Seconds to wait for lock acquisition
lease_ttl_s         None (server default)   Lease duration in seconds
servers             [("127.0.0.1", 6388)]   List of (host, port) tuples
sharding_strategy   stable_hash_shard       Callable[[str, int], int]; maps (key, num_servers) to a server index
renew_ratio         0.5                     Renew at lease * ratio seconds
ssl_context         None                    ssl.SSLContext for TLS connections; None uses plain TCP
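
The renew_ratio arithmetic is simple enough to sketch. The helper below is illustrative only (renewal_interval is not part of the client API); it just shows the documented "lease * ratio" rule:

```python
def renewal_interval(lease_ttl_s: float, renew_ratio: float = 0.5) -> float:
    """Seconds between background renewals: lease * ratio."""
    return lease_ttl_s * renew_ratio

# A 30 s lease with the default ratio of 0.5 renews every 15 s,
# leaving the other half of the lease as slack for a slow renewal.
print(renewal_interval(30))  # prints 15.0
```

A lower ratio renews more aggressively; a higher ratio leaves less headroom before the lease expires.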

TLS

To connect to a TLS-enabled dflockd server, pass an ssl.SSLContext:

import ssl
from dflockd_client.sync_client import DistributedLock

ctx = ssl.create_default_context()  # uses system CA bundle
# or: ctx = ssl.create_default_context(cafile="/path/to/ca.pem")

with DistributedLock("my-key", ssl_context=ctx) as lock:
    print(lock.token, lock.lease)

Async equivalent:

import ssl
from dflockd_client.client import DistributedLock

ctx = ssl.create_default_context()

async with DistributedLock("my-key", ssl_context=ctx) as lock:
    print(lock.token, lock.lease)

Both DistributedLock and DistributedSemaphore accept ssl_context in the async and sync clients.

Semaphores

DistributedSemaphore allows up to N concurrent holders per key, using the same API patterns as DistributedLock:

from dflockd_client.sync_client import DistributedSemaphore

# Allow up to 3 concurrent workers on this key
with DistributedSemaphore("my-key", limit=3, acquire_timeout_s=10) as sem:
    print(sem.token, sem.lease)
    # critical section — up to 3 holders at once

Async equivalent:

from dflockd_client.client import DistributedSemaphore

async with DistributedSemaphore("my-key", limit=3, acquire_timeout_s=10) as sem:
    print(sem.token, sem.lease)

Manual acquire/release and two-phase acquisition (enqueue() / wait()) work the same way as for locks.

Parameters

Parameter           Default                 Description
key                 (required)              Semaphore name
limit               (required)              Maximum concurrent holders
acquire_timeout_s   10                      Seconds to wait for acquisition
lease_ttl_s         None (server default)   Lease duration in seconds
servers             [("127.0.0.1", 6388)]   List of (host, port) tuples
sharding_strategy   stable_hash_shard       Callable[[str, int], int]; maps (key, num_servers) to a server index
renew_ratio         0.5                     Renew at lease * ratio seconds
ssl_context         None                    ssl.SSLContext for TLS connections; None uses plain TCP

Stats

Query server state (connections, held locks, active semaphores) using the low-level stats() function:

import asyncio
from dflockd_client.client import stats

async def main():
    reader, writer = await asyncio.open_connection("127.0.0.1", 6388)
    result = await stats(reader, writer)
    print(result)
    # {'connections': 1, 'locks': [], 'semaphores': [], 'idle_locks': [], 'idle_semaphores': []}
    writer.close()
    await writer.wait_closed()

asyncio.run(main())

Sync equivalent:

import socket
from dflockd_client.sync_client import stats

sock = socket.create_connection(("127.0.0.1", 6388))
rfile = sock.makefile("r", encoding="utf-8")
result = stats(sock, rfile)
print(result)
rfile.close()
sock.close()

Returns a dict with connections, locks, semaphores, idle_locks, and idle_semaphores.
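
The returned value is a plain dict, so it can be inspected directly. Here is a static example using the sample payload shown above (hard-coded data, no live server involved):

```python
# Sample stats payload with the documented keys.
result = {
    "connections": 1,
    "locks": [],
    "semaphores": [],
    "idle_locks": [],
    "idle_semaphores": [],
}

held_locks = len(result["locks"])
print(f"{result['connections']} connection(s), {held_locks} lock(s) held")
```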

Multi-server sharding

When running multiple dflockd instances, the client can distribute keys across servers by hashing each key to a server index. For a fixed server list, a given key always routes to the same server.

from dflockd_client.sync_client import DistributedLock

servers = [("server1", 6388), ("server2", 6388), ("server3", 6388)]

with DistributedLock("my-key", servers=servers) as lock:
    print(lock.token, lock.lease)

The default strategy uses zlib.crc32 for stable, deterministic hashing. You can provide a custom strategy:

from dflockd_client.sync_client import DistributedLock

servers = [("server1", 6388), ("server2", 6388), ("server3", 6388)]

def my_strategy(key: str, num_servers: int) -> int:
    """Route all keys to the first server."""
    return 0

with DistributedLock("my-key", servers=servers, sharding_strategy=my_strategy) as lock:
    pass  # critical section
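
For reference, a zlib.crc32-based strategy along the lines of the default can be sketched as follows. This approximates the idea; the library's actual stable_hash_shard implementation may differ in detail:

```python
import zlib

def crc32_shard(key: str, num_servers: int) -> int:
    """Map a key deterministically to a server index in [0, num_servers)."""
    return zlib.crc32(key.encode("utf-8")) % num_servers

servers = [("server1", 6388), ("server2", 6388), ("server3", 6388)]
index = crc32_shard("my-key", len(servers))
print(servers[index])
```

Because crc32 is deterministic across processes and platforms, every client with the same server list agrees on which server owns a key.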
