
dflockd-client

A Python client library for dflockd — a lightweight distributed lock server with FIFO ordering, automatic lease expiry, and background renewal.

Read the docs here

Installation

pip install dflockd-client

Or with uv:

uv add dflockd-client

Quick start

Async client

import asyncio
from dflockd_client.client import DistributedLock

async def main():
    async with DistributedLock("my-key", acquire_timeout_s=10) as lock:
        print(lock.token, lock.lease)
        # critical section — lease auto-renews in background

asyncio.run(main())

Sync client

from dflockd_client.sync_client import DistributedLock

with DistributedLock("my-key", acquire_timeout_s=10) as lock:
    print(lock.token, lock.lease)
    # critical section — lease auto-renews in background thread

Tip: You can also use the top-level import alias: from dflockd_client import SyncDistributedLock (or AsyncDistributedLock for async).

Manual acquire/release

Both clients support explicit acquire() / release() outside of a context manager:

from dflockd_client.sync_client import DistributedLock

lock = DistributedLock("my-key")
if lock.acquire():
    try:
        pass  # critical section
    finally:
        lock.release()

Two-phase lock acquisition

The enqueue() / wait() methods split lock acquisition into two steps, allowing you to notify an external system after joining the queue but before blocking:

from dflockd_client.sync_client import DistributedLock

lock = DistributedLock("my-key")
status = lock.enqueue()       # join queue, returns "acquired" or "queued"
notify_external_system()      # your application logic here
if lock.wait(timeout_s=10):   # block until granted (no-op if already acquired)
    try:
        pass  # critical section
    finally:
        lock.release()

Async equivalent:

from dflockd_client.client import DistributedLock

lock = DistributedLock("my-key")
status = await lock.enqueue()
await notify_external_system()
if await lock.wait(timeout_s=10):
    try:
        pass  # critical section
    finally:
        await lock.release()

Parameters

  • key (required): Lock name
  • acquire_timeout_s (default: 10): Seconds to wait for lock acquisition
  • lease_ttl_s (default: None, uses the server default): Lease duration in seconds
  • servers (default: [("127.0.0.1", 6388)]): List of (host, port) tuples
  • sharding_strategy (default: stable_hash_shard): Callable[[str, int], int] that maps (key, num_servers) to a server index
  • renew_ratio (default: 0.5): Renew the lease every lease * ratio seconds
  • ssl_context (default: None): ssl.SSLContext for TLS connections; None uses plain TCP
  • auth_token (default: None): Auth token for servers started with --auth-token; None skips auth
  • connect_timeout_s (default: 10): Seconds to wait for the TCP connection to the server
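As a concrete illustration of renew_ratio: with a 30-second lease and the default ratio of 0.5, the client renews roughly every 15 seconds. The arithmetic (a standalone sketch, not the library's internal code) is simply:

```python
def renew_interval_s(lease_ttl_s: float, renew_ratio: float = 0.5) -> float:
    """Seconds between background renewals for a given lease TTL."""
    return lease_ttl_s * renew_ratio

print(renew_interval_s(30))        # 15.0
print(renew_interval_s(60, 0.25))  # 15.0
```

A lower ratio renews more aggressively, trading extra traffic for more headroom before the lease expires.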

Authentication

When the dflockd server is started with --auth-token, pass the token to authenticate:

from dflockd_client.sync_client import DistributedLock

with DistributedLock("my-key", auth_token="mysecret") as lock:
    print(lock.token, lock.lease)

Async equivalent:

from dflockd_client.client import DistributedLock

async with DistributedLock("my-key", auth_token="mysecret") as lock:
    print(lock.token, lock.lease)

Both DistributedLock and DistributedSemaphore accept auth_token in the async and sync clients. A PermissionError is raised if the token is invalid.

TLS

To connect to a TLS-enabled dflockd server, pass an ssl.SSLContext:

import ssl
from dflockd_client.sync_client import DistributedLock

ctx = ssl.create_default_context()  # uses system CA bundle
# or: ctx = ssl.create_default_context(cafile="/path/to/ca.pem")

with DistributedLock("my-key", ssl_context=ctx) as lock:
    print(lock.token, lock.lease)

Async equivalent:

import ssl
from dflockd_client.client import DistributedLock

ctx = ssl.create_default_context()

async with DistributedLock("my-key", ssl_context=ctx) as lock:
    print(lock.token, lock.lease)

Both DistributedLock and DistributedSemaphore accept ssl_context in the async and sync clients.

Semaphores

DistributedSemaphore allows up to N concurrent holders per key, using the same API patterns as DistributedLock:

from dflockd_client.sync_client import DistributedSemaphore

# Allow up to 3 concurrent workers on this key
with DistributedSemaphore("my-key", limit=3, acquire_timeout_s=10) as sem:
    print(sem.token, sem.lease)
    # critical section — up to 3 holders at once

Async equivalent:

from dflockd_client.client import DistributedSemaphore

async with DistributedSemaphore("my-key", limit=3, acquire_timeout_s=10) as sem:
    print(sem.token, sem.lease)

Manual acquire/release and two-phase (enqueue() / wait()) work the same as locks.

Parameters

  • key (required): Semaphore name
  • limit (required): Maximum concurrent holders
  • acquire_timeout_s (default: 10): Seconds to wait for acquisition
  • lease_ttl_s (default: None, uses the server default): Lease duration in seconds
  • servers (default: [("127.0.0.1", 6388)]): List of (host, port) tuples
  • sharding_strategy (default: stable_hash_shard): Callable[[str, int], int] that maps (key, num_servers) to a server index
  • renew_ratio (default: 0.5): Renew the lease every lease * ratio seconds
  • ssl_context (default: None): ssl.SSLContext for TLS connections; None uses plain TCP
  • auth_token (default: None): Auth token for servers started with --auth-token; None skips auth
  • connect_timeout_s (default: 10): Seconds to wait for the TCP connection to the server

Stats

Query server state (connections, held locks, active semaphores) using the low-level stats() function:

import asyncio
from dflockd_client.client import stats

async def main():
    reader, writer = await asyncio.open_connection("127.0.0.1", 6388)
    result = await stats(reader, writer)
    print(result)
    # {'connections': 1, 'locks': [], 'semaphores': [], 'idle_locks': [], 'idle_semaphores': []}
    writer.close()
    await writer.wait_closed()

asyncio.run(main())

Sync equivalent:

import socket
from dflockd_client.sync_client import stats

sock = socket.create_connection(("127.0.0.1", 6388))
rfile = sock.makefile("r", encoding="utf-8")
result = stats(sock, rfile)
print(result)
rfile.close()
sock.close()

Returns a dict with connections, locks, semaphores, idle_locks, and idle_semaphores.

Signals (pub/sub)

SignalConn provides pub/sub messaging through named channels with NATS-style wildcard patterns.

Sync client

from dflockd_client.sync_client import SignalConn

# Listener
with SignalConn(server=("127.0.0.1", 6388)) as listener:
    listener.listen("events.>")  # wildcard: matches events.user.login, events.order.created, etc.

    # Emit from another connection
    with SignalConn(server=("127.0.0.1", 6388)) as emitter:
        emitter.emit("events.user.login", "alice")

    for sig in listener:
        print(sig.channel, sig.payload)
        break

Async client

import asyncio
from dflockd_client.client import SignalConn

async def main():
    async with SignalConn(server=("127.0.0.1", 6388)) as listener:
        await listener.listen("events.>")

        async with SignalConn(server=("127.0.0.1", 6388)) as emitter:
            await emitter.emit("events.user.login", "alice")

        async for sig in listener:
            print(sig.channel, sig.payload)
            break

asyncio.run(main())

Tip: You can also use the top-level import alias: from dflockd_client import SyncSignalConn (or AsyncSignalConn for async).

Wildcard patterns

  • * matches exactly one dot-separated token: events.*.login matches events.user.login
  • > matches one or more trailing tokens: events.> matches events.user.login, events.order.created
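The two rules above can be illustrated with a small standalone matcher. This is not dflockd's implementation, only a sketch of the matching semantics described here:

```python
def matches(pattern: str, channel: str) -> bool:
    """Illustrative NATS-style matcher: '*' matches exactly one
    dot-separated token; '>' matches one or more trailing tokens."""
    p_tokens = pattern.split(".")
    c_tokens = channel.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":
            # '>' must be the last token and consume at least one token
            return i == len(p_tokens) - 1 and len(c_tokens) > i
        if i >= len(c_tokens):
            return False
        if p != "*" and p != c_tokens[i]:
            return False
    return len(c_tokens) == len(p_tokens)

print(matches("events.*.login", "events.user.login"))  # True
print(matches("events.>", "events.order.created"))     # True
print(matches("events.>", "events"))                   # False ('>' needs a trailing token)
```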

Queue groups

Queue groups provide load-balanced delivery — within a group, each signal is delivered to exactly one member via round-robin:

listener.listen("jobs.>", group="workers")
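The server-side delivery within a group can be pictured as a simple rotation over the members. This is only an illustration of the round-robin behavior, not server code:

```python
from itertools import cycle

# Three listeners joined the "workers" group; each incoming signal
# goes to exactly one of them, in rotation.
members = ["worker-1", "worker-2", "worker-3"]
rr = cycle(members)
deliveries = [next(rr) for _ in range(5)]
print(deliveries)  # ['worker-1', 'worker-2', 'worker-3', 'worker-1', 'worker-2']
```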

Parameters

  • server (default: ("127.0.0.1", 6388)): (host, port) tuple
  • ssl_context (default: None): ssl.SSLContext for TLS connections; None uses plain TCP
  • auth_token (default: None): Auth token for servers started with --auth-token; None skips auth
  • connect_timeout_s (default: 10): Seconds to wait for the TCP connection to the server

Multi-server sharding

When running multiple dflockd instances, the client can distribute keys across servers using consistent hashing. Each key always routes to the same server.

from dflockd_client.sync_client import DistributedLock

servers = [("server1", 6388), ("server2", 6388), ("server3", 6388)]

with DistributedLock("my-key", servers=servers) as lock:
    print(lock.token, lock.lease)

The default strategy uses zlib.crc32 for stable, deterministic hashing. You can provide a custom strategy:

from dflockd_client.sync_client import DistributedLock

servers = [("server1", 6388), ("server2", 6388), ("server3", 6388)]

def my_strategy(key: str, num_servers: int) -> int:
    """Route all keys to the first server."""
    return 0

with DistributedLock("my-key", servers=servers, sharding_strategy=my_strategy) as lock:
    pass
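For intuition, a crc32-based strategy along the lines the default is described to use could look like the sketch below. This is an assumption about stable_hash_shard's shape, not its actual source; what matters is that it is deterministic, so a given key always lands on the same server for a fixed server list:

```python
import zlib

def crc32_shard(key: str, num_servers: int) -> int:
    """Sketch of a crc32-based sharding strategy: maps a key to a
    stable server index in [0, num_servers)."""
    return zlib.crc32(key.encode("utf-8")) % num_servers

# Deterministic: the same key always yields the same index
print(crc32_shard("my-key", 3) == crc32_shard("my-key", 3))  # True
```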
