
pyinfra-lxd-api-connector

A pyinfra connector that targets LXD containers via the LXD HTTPS API directly — no SSH hop, no paramiko, no lxc CLI subprocess, no websockets. One kept-alive HTTPS connection per host, exec via the container_exec_recording API extension, file transfers via the native files API.

Why?

The obvious way to drive pyinfra against LXD is to shell out to the lxc CLI for every command. That approach pays a per-command cost of ~6–7 fresh TCP+TLS connections (capabilities probe + events websocket + exec POST + 4× stdio websockets + operation poll) — measured at ~870 ms per command over Tailscale from a remote laptop.

Talking to the API directly with record-output: true mode collapses all of that to a single kept-alive HTTPS connection with zero websockets. Measured at ~150 ms per run_shell_command from the same vantage point — ~5–6× faster, and within the same order of magnitude as warm SSH-multiplex.
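The record-output exec path described above can be sketched as a single JSON POST. The request fields match the documented LXD REST API (`POST /1.0/instances/<name>/exec`); the `build_exec_request` helper and its parameters are illustrative, not the connector's actual internals.

```python
# Sketch of the no-websocket exec request body. With record-output the
# server writes stdout/stderr to log files instead of streaming them,
# so no stdio websockets are ever opened.

def build_exec_request(command, env=None):
    """Build the JSON body for a record-output exec against
    POST /1.0/instances/<name>/exec."""
    return {
        "command": ["sh", "-c", command],
        "environment": env or {},
        "record-output": True,        # requires the container_exec_recording extension
        "wait-for-websocket": False,  # no stdio websockets at all
        "interactive": False,
    }
```

The response is an async operation; waiting on `GET /1.0/operations/<id>/wait` and fetching the recorded output files both reuse the same kept-alive HTTPS connection.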

Tracks pyinfra issue #677. Per pyinfra's contributing guide, connectors live as separate packages rather than in the pyinfra core repo.

Performance

Per-call latency over Tailscale from a remote laptop (~27 ms RTT to the cluster), measured via smoke_test.py against a real container:

Operation                                                    Wall time
connect() (cold TLS + capability probe + container check)    ~335 ms
run_shell_command (warm, kept-alive)                         ~130–260 ms
put_file (small payload)                                     ~80 ms
get_file (small payload)                                     ~30 ms

For comparison, an lxc exec-based connector pays ~870 ms per run_shell_command from the same vantage point. From a node inside the cluster the difference doesn't matter; from a laptop driving deploys over a WAN it dominates wall time.

Install

uv tool install pyinfra --with pyinfra-lxd-api-connector

Usage

Prereq: an lxc remote configured locally:

lxc remote add mycluster https://your-cluster:8443 --token <token>
lxc list mycluster:        # verify

The connector reads the standard LXD client config at ~/.config/lxc/:

  • config.yml — remote URL
  • client.crt + client.key — mTLS client identity
  • servercerts/<remote>.crt — pinned server cert
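Those three files map onto a pinned mTLS setup. A minimal sketch, assuming the standard lxc config layout; `tls_paths` and `make_ssl_context` are illustrative names, not the connector's API:

```python
import ssl
from pathlib import Path


def tls_paths(remote: str, config_dir: Path = Path.home() / ".config" / "lxc"):
    """Resolve the three TLS artefacts that `lxc remote add` writes."""
    return {
        "client_cert": config_dir / "client.crt",
        "client_key": config_dir / "client.key",
        "server_cert": config_dir / "servercerts" / f"{remote}.crt",
    }


def make_ssl_context(remote: str, config_dir: Path) -> ssl.SSLContext:
    """mTLS context trusting only the remote's pinned server certificate."""
    p = tls_paths(remote, config_dir)
    ctx = ssl.create_default_context(cafile=str(p["server_cert"]))
    ctx.check_hostname = False  # assumption: LXD certs are pinned, not hostname-validated
    ctx.load_cert_chain(str(p["client_cert"]), str(p["client_key"]))
    return ctx
```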

Inventory:

hosts = [
    "@lxd_api/mycluster:php01",       # explicit remote
    "@lxd_api/web1",                   # uses default-remote from lxc config
    "@lxd_api/some-other-cluster:web1",
]

A bare @lxd_api/<container> resolves the remote via the default-remote field in ~/.config/lxc/config.yml — the same field lxc itself consults when called without a remote qualifier. Switch the default with lxc remote switch <name>. If no default is set, the connector raises an InventoryError pointing you at the qualified form.
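The resolution rule above can be sketched as a small pure function; `resolve_target` and its error message are illustrative, not the connector's actual code:

```python
def resolve_target(name: str, default_remote):
    """Split '@lxd_api/mycluster:php01' or bare '@lxd_api/web1'
    into (remote, container), falling back to default-remote."""
    target = name.removeprefix("@lxd_api/")
    if ":" in target:
        remote, container = target.split(":", 1)
        return remote, container
    if default_remote is None:
        # mirrors the connector's InventoryError for unqualified names
        raise ValueError(
            "no default-remote set in ~/.config/lxc/config.yml; "
            f"use the qualified form @lxd_api/<remote>:{target}"
        )
    return default_remote, target
```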

Requirements

  • LXD server with the container_exec_recording API extension (LXD 5.0+).
  • Local LXD client config at ~/.config/lxc/. lxc remote add sets all of this up.

Status

Alpha. In production use against a 32-container LXD cluster since 2026-04-28. Feedback / bug reports welcome.

Known limitations

  • No interactive / PTY support — the connector raises NotImplementedError if _get_pty=True. pyinfra never needs PTY for facts/operations, so this is fine in practice; if you need an interactive shell, use lxc shell directly.
  • Per-command stdout/stderr is buffered, not streamed: record-output mode means output arrives only when the command finishes. For pyinfra's typical workload (facts and one-shot operations) this is invisible; for long-running commands you won't see live progress.
  • Run-time HTTP calls (exec, file transfer) don't retry — only the two connect() GETs retry on transient errors. Mid-run network blips on run_shell_command / put_file / get_file will fail the operation. Per-call retry there is operation-dependent (e.g. POST /exec is unsafe to blindly retry once it's reached the server). Tracked in #2.
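The retry asymmetry in the last point can be illustrated with a small helper: idempotent GETs are safe to retry blindly, whereas POST /exec is not (the command may already be running server-side). The function name and retry policy here are assumptions, not the connector's implementation:

```python
import time


def get_with_retry(do_get, attempts=3, delay=0.5):
    """Retry a GET-style callable on transient network errors.
    Safe only because GETs against the LXD API are idempotent."""
    for i in range(attempts):
        try:
            return do_get()
        except OSError:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```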

AI assistance

Per the pyinfra AI usage policy, disclosing how this package was authored:

The initial draft of pyinfra_lxd_api_connector.py was generated by Claude (Anthropic) in collaboration with the maintainer (Christian Rishøj). Specifically:

  • Christian identified the original problem — a silent SFTP-truncation bug in an earlier lxc exec-based driver — and ran the empirical analysis showing that the LXD record-output: true API path was the right fast alternative.
  • Claude drafted the connector module against pyinfra's BaseConnector interface.
  • Christian reviewed every line, integrated and ran it against a 32-container production cluster, and iterated through several rounds of correctness, latency, and ergonomics fixes.
  • All subsequent maintenance is human-driven.

The code in this repository is fully understood and reviewed by the maintainer; AI assistance is a drafting tool, not a substitute for human judgment.

License

MIT — see LICENSE.
