
dflockd-client

A Python client library for dflockd — a lightweight distributed lock server with FIFO ordering, automatic lease expiry, and background renewal.

Read the docs here

Installation

pip install dflockd-client

Or with uv:

uv add dflockd-client

Quick start

Async client

import asyncio
from dflockd_client.client import DistributedLock

async def main():
    async with DistributedLock("my-key", acquire_timeout_s=10) as lock:
        print(lock.token, lock.lease)
        # critical section — lease auto-renews in background

asyncio.run(main())

Sync client

from dflockd_client.sync_client import DistributedLock

with DistributedLock("my-key", acquire_timeout_s=10) as lock:
    print(lock.token, lock.lease)
    # critical section — lease auto-renews in background thread

Tip: You can also use the top-level import alias: from dflockd_client import SyncDistributedLock (or AsyncDistributedLock for async).

Manual acquire/release

Both clients support explicit acquire() / release() outside of a context manager:

from dflockd_client.sync_client import DistributedLock

lock = DistributedLock("my-key")
if lock.acquire():
    try:
        pass  # critical section
    finally:
        lock.release()

Two-phase lock acquisition

The enqueue() / wait() methods split lock acquisition into two steps, allowing you to notify an external system after joining the queue but before blocking:

from dflockd_client.sync_client import DistributedLock

lock = DistributedLock("my-key")
status = lock.enqueue()       # join queue, returns "acquired" or "queued"
notify_external_system()      # your application logic here
if lock.wait(timeout_s=10):   # block until granted (no-op if already acquired)
    try:
        pass  # critical section
    finally:
        lock.release()

Async equivalent:

from dflockd_client.client import DistributedLock

lock = DistributedLock("my-key")
status = await lock.enqueue()
await notify_external_system()
if await lock.wait(timeout_s=10):
    try:
        pass  # critical section
    finally:
        await lock.release()

Parameters

  • key (required): Lock name
  • acquire_timeout_s (default: 10): Seconds to wait for lock acquisition
  • lease_ttl_s (default: None, i.e. server default): Lease duration in seconds
  • servers (default: [("127.0.0.1", 6388)]): List of (host, port) tuples
  • sharding_strategy (default: stable_hash_shard): Callable[[str, int], int] mapping (key, num_servers) to a server index
  • renew_ratio (default: 0.5): Renew the lease every lease * ratio seconds
  • ssl_context (default: None): ssl.SSLContext for TLS connections; None uses plain TCP
  • auth_token (default: None): Auth token for servers started with --auth-token; None skips auth
  • connect_timeout_s (default: 10): Seconds to wait for the TCP connection to the server
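To make the renew_ratio timing concrete, here is a small illustrative calculation (renew_interval is a hypothetical helper for explanation only, not part of the client API):

```python
def renew_interval(lease_ttl_s: float, renew_ratio: float = 0.5) -> float:
    """Background renewal fires every lease_ttl_s * renew_ratio seconds."""
    return lease_ttl_s * renew_ratio

# With a 30 s lease and the default ratio of 0.5,
# the client renews every 15 s, well before the lease expires.
print(renew_interval(30))
```

A ratio well below 1.0 leaves headroom for a renewal round-trip to complete before expiry.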

Authentication

When the dflockd server is started with --auth-token, pass the token to authenticate:

from dflockd_client.sync_client import DistributedLock

with DistributedLock("my-key", auth_token="mysecret") as lock:
    print(lock.token, lock.lease)

Async equivalent:

from dflockd_client.client import DistributedLock

async with DistributedLock("my-key", auth_token="mysecret") as lock:
    print(lock.token, lock.lease)

Both DistributedLock and DistributedSemaphore accept auth_token in the async and sync clients. A PermissionError is raised if the token is invalid.

TLS

To connect to a TLS-enabled dflockd server, pass an ssl.SSLContext:

import ssl
from dflockd_client.sync_client import DistributedLock

ctx = ssl.create_default_context()  # uses system CA bundle
# or: ctx = ssl.create_default_context(cafile="/path/to/ca.pem")

with DistributedLock("my-key", ssl_context=ctx) as lock:
    print(lock.token, lock.lease)

Async equivalent:

import ssl
from dflockd_client.client import DistributedLock

ctx = ssl.create_default_context()

async with DistributedLock("my-key", ssl_context=ctx) as lock:
    print(lock.token, lock.lease)

Both DistributedLock and DistributedSemaphore accept ssl_context in the async and sync clients.

Semaphores

DistributedSemaphore allows up to N concurrent holders per key, using the same API patterns as DistributedLock:

from dflockd_client.sync_client import DistributedSemaphore

# Allow up to 3 concurrent workers on this key
with DistributedSemaphore("my-key", limit=3, acquire_timeout_s=10) as sem:
    print(sem.token, sem.lease)
    # critical section — up to 3 holders at once

Async equivalent:

from dflockd_client.client import DistributedSemaphore

async with DistributedSemaphore("my-key", limit=3, acquire_timeout_s=10) as sem:
    print(sem.token, sem.lease)

Manual acquire/release and two-phase (enqueue() / wait()) work the same as locks.
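The limit semantics mirror Python's local threading.Semaphore: with limit=3, three holders can enter at once and a fourth must wait. A purely local sketch of that counting behaviour (this is an analogy, not the distributed client itself):

```python
import threading

# Local stand-in for limit=3: three non-blocking acquires succeed,
# the fourth finds no permits left and fails.
sem = threading.Semaphore(3)
results = [sem.acquire(blocking=False) for _ in range(4)]
print(results)  # [True, True, True, False]
```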

Parameters

  • key (required): Semaphore name
  • limit (required): Maximum concurrent holders
  • acquire_timeout_s (default: 10): Seconds to wait for acquisition
  • lease_ttl_s (default: None, i.e. server default): Lease duration in seconds
  • servers (default: [("127.0.0.1", 6388)]): List of (host, port) tuples
  • sharding_strategy (default: stable_hash_shard): Callable[[str, int], int] mapping (key, num_servers) to a server index
  • renew_ratio (default: 0.5): Renew the lease every lease * ratio seconds
  • ssl_context (default: None): ssl.SSLContext for TLS connections; None uses plain TCP
  • auth_token (default: None): Auth token for servers started with --auth-token; None skips auth
  • connect_timeout_s (default: 10): Seconds to wait for the TCP connection to the server

Stats

Query server state (connections, held locks, active semaphores) using the low-level stats() function:

import asyncio
from dflockd_client.client import stats

async def main():
    reader, writer = await asyncio.open_connection("127.0.0.1", 6388)
    result = await stats(reader, writer)
    print(result)
    # {'connections': 1, 'locks': [], 'semaphores': [], 'idle_locks': [], 'idle_semaphores': []}
    writer.close()
    await writer.wait_closed()

asyncio.run(main())

Sync equivalent:

import socket
from dflockd_client.sync_client import stats

sock = socket.create_connection(("127.0.0.1", 6388))
rfile = sock.makefile("r", encoding="utf-8")
result = stats(sock, rfile)
print(result)
rfile.close()
sock.close()

Returns a dict with connections, locks, semaphores, idle_locks, and idle_semaphores.
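As a shape reference, a result like the one in the comment above can be summarized with plain dict handling (the values here are the illustrative ones from the example, not live server output):

```python
# Illustrative stats payload, mirroring the shape shown above.
result = {
    "connections": 1,
    "locks": [],
    "semaphores": [],
    "idle_locks": [],
    "idle_semaphores": [],
}

# Condense the payload into a few headline numbers.
summary = {
    "connections": result["connections"],
    "held_locks": len(result["locks"]),
    "active_semaphores": len(result["semaphores"]),
}
print(summary)
```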

Multi-server sharding

When running multiple dflockd instances, the client can distribute keys across servers using consistent hashing. Each key always routes to the same server.

from dflockd_client.sync_client import DistributedLock

servers = [("server1", 6388), ("server2", 6388), ("server3", 6388)]

with DistributedLock("my-key", servers=servers) as lock:
    print(lock.token, lock.lease)

The default strategy uses zlib.crc32 for stable, deterministic hashing. You can provide a custom strategy:

from dflockd_client.sync_client import DistributedLock

def my_strategy(key: str, num_servers: int) -> int:
    """Route all keys to the first server."""
    return 0

with DistributedLock("my-key", servers=servers, sharding_strategy=my_strategy) as lock:
    pass
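For reference, a crc32-based strategy in the same spirit as the default looks like the sketch below (an illustration of the technique, not necessarily the exact stable_hash_shard implementation):

```python
import zlib

def crc32_shard(key: str, num_servers: int) -> int:
    """Deterministically map a key to a server index via CRC-32."""
    return zlib.crc32(key.encode("utf-8")) % num_servers

# The same key always hashes to the same index,
# so every client routes "my-key" to the same server.
print(crc32_shard("my-key", 3))
```

Because zlib.crc32 is stable across processes and platforms, independent clients agree on the routing without any coordination.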

Download files

Download the file for your platform.

Source Distribution

dflockd_client-1.8.0.tar.gz (11.6 kB)


Built Distribution


dflockd_client-1.8.0-py3-none-any.whl (16.6 kB)


File details

Details for the file dflockd_client-1.8.0.tar.gz.

File metadata

  • Download URL: dflockd_client-1.8.0.tar.gz
  • Upload date:
  • Size: 11.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for dflockd_client-1.8.0.tar.gz:

  • SHA256: 77fe1401249e8763e61f09e786608631e9c36c3c5d35d076e05a655c1ee6a3ee
  • MD5: 616bfe6c3ef8622cb4276bbe327d4c5c
  • BLAKE2b-256: be8dbb6badb3fe77a011e272d75b4f7263616081f92dfa3748ca14bd0a99e215


Provenance

The following attestation bundles were made for dflockd_client-1.8.0.tar.gz:

Publisher: publish.yml on mtingers/dflockd-client-py


File details

Details for the file dflockd_client-1.8.0-py3-none-any.whl.

File metadata

  • Download URL: dflockd_client-1.8.0-py3-none-any.whl
  • Upload date:
  • Size: 16.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for dflockd_client-1.8.0-py3-none-any.whl:

  • SHA256: b7ab7a8bf3793b7d67e5b089f70ab385617a971c36897a43fb72d2e2da9595a0
  • MD5: 48b8a05baf7bbb3f1f04248e47256006
  • BLAKE2b-256: b084e3ee7bea24c50780f53c56d08bcb96e15528f4435b75f004483bd1676277


Provenance

The following attestation bundles were made for dflockd_client-1.8.0-py3-none-any.whl:

Publisher: publish.yml on mtingers/dflockd-client-py

