MVP implementation of architectural translucency for Docker/Kubernetes replication layer analysis


presidio-hardened-arch-translucency


v0.5.0 — Architectural Translucency Analyzer for Docker & Kubernetes

Architectural translucency (Stantchev & Malek, 2006) is the ability to monitor and control non-functional properties — especially performance — architecture-wide in a cross-layered way. The core insight: the same measure (replication) has different implications for throughput ω(δ) and response time when applied at different layers.

This CLI tool (pat) helps you choose the replication layer that gives the highest performance gain with the lowest overhead for your workload.


Replication Layers (Docker/Kubernetes)

| Layer | Description | Fixed Overhead | Coordination Cost |
|---|---|---|---|
| container | New Docker container (process-level isolation) | 2% | Low |
| pod | Kubernetes Pod (shared network namespace) | 5% | Moderate |
| deployment | Kubernetes Deployment/ReplicaSet | 10% | High |
| node | Cluster node (full VM/bare-metal) | 18% | Highest |
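A sketch of how this table could be encoded for use in the model: the α values mirror the Fixed Overhead column above, while the β coordination coefficients are illustrative placeholders for Low/Moderate/High/Highest, not the package's calibrated values.

```python
import math

# alpha (fixed overhead) comes from the table above; the beta
# (coordination cost) values are illustrative placeholders.
LAYERS = {
    "container":  (0.02, 0.01),
    "pod":        (0.05, 0.02),
    "deployment": (0.10, 0.04),
    "node":       (0.18, 0.08),
}

def efficiency(layer: str, replicas: int) -> float:
    """Replication efficiency at a layer: 1 - alpha - beta * ln(replicas)."""
    alpha, beta = LAYERS[layer]
    return 1.0 - alpha - beta * math.log(replicas)

for layer in LAYERS:
    print(f"{layer:<11} efficiency at 4 replicas: {efficiency(layer, 4):.3f}")
```

Lower fixed overhead and coordination cost mean container-level replication retains the most useful capacity per added replica.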

Installation

pip install presidio-hardened-arch-translucency

Or with uv:

uv pip install presidio-hardened-arch-translucency

Quick Start

# Analyze a 500 req/s workload with 80ms avg latency, currently at container level
pat analyze --requests-per-second 500 --avg-latency-ms 80 --current-layer container

Output:

╭──────────── Presidio Architectural Translucency — Recommendation ────────────╮
│ Recommended layer:  container                                                │
│ Optimal replicas:   4                                                        │
│ Throughput gain:    +45.2%                                                   │
│ Response-time Δ:    -38.1%                                                   │
│ Est. throughput:    500 req/s                                                │
│ Est. response time: 49.4 ms                                                  │
│                                                                              │
│ New Docker container (process-level isolation, shared kernel)                │
╰──────────────────────────────────────────────────────────────────────────────╯

Baseline: 714 req/s @ 80.0 ms  (current layer: container)

Show all layers

pat analyze --requests-per-second 500 --avg-latency-ms 80 \
    --current-layer container --show-all
| Layer | Replicas | Throughput | Δ Throughput | Response Time | Δ RT | Recommended |
|---|---|---|---|---|---|---|
| container | 4 | 500 | +45.2% | 49.4 ms | -38.1% | ✓ |
| pod | 3 | 500 | +42.0% | 55.2 ms | -31.0% | |
| deployment | 2 | 500 | +38.1% | 68.3 ms | -14.6% | |
| node | 1 | 357 | 0.0% | 80.0 ms | 0.0% | |

Dynamic Scaling Analysis (v0.3.0)

pat what-if — HPA Lag Model

Projects the performance trough that occurs between a load spike and the moment new Kubernetes pods become Ready. Shows throughput, latency, p99, and missed requests during the HPA scale-out window.

pat what-if \
  --current-rps 50 --spike-rps 200 \
  --avg-latency-ms 80 --current-layer container \
  --output hpa-event.png

Three stacked panels are saved to hpa-event.png:

  • Throughput (req/s) — actual served vs demand, with trough annotation
  • Avg latency (ms) — how response time degrades during the trough
  • p99 latency (ms) — tail behaviour before and after pods are Ready

Optional overrides: --hpa-poll-s (default 15 s), --pod-startup-s (default 30 s), --cold-start-s (default 0 s), --replicas-before, --replicas-after.
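With the default timings, the trough window is simply the sum of the HPA poll interval, pod startup time, and cold start. A back-of-envelope sketch (the tool's internal model of throughput and latency inside the window is more detailed than this):

```python
# The scale-out window: time between the load spike and new pods being Ready.
# Defaults match the --hpa-poll-s, --pod-startup-s, and --cold-start-s flags.
def trough_window_s(hpa_poll_s: float = 15.0,
                    pod_startup_s: float = 30.0,
                    cold_start_s: float = 0.0) -> float:
    return hpa_poll_s + pod_startup_s + cold_start_s

print(trough_window_s())  # 45.0 with the defaults
```

The 45 s default window is why the demo output later in this page annotates its trough as "0 s – 45 s".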


pat slo — SLO Compliance Check

Checks whether a p99 latency SLO is met in steady-state and during an HPA trough across all four replication layers.

pat slo \
  --requests-per-second 50 \
  --avg-latency-ms 80 \
  --p99-target-ms 500 \
  --spike-multiplier 3.0

Output table shows steady p99, trough p99, and SLO verdict per layer. The recommendation panel advises the minimum HPA minReplicas needed to eliminate the trough breach.
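One way to picture the minReplicas advice: pre-provision the smallest replica count whose steady capacity already covers the spiked demand, so no scale-out (and therefore no trough) occurs. A simplified sketch that ignores the efficiency term; per_replica_capacity_rps is a hypothetical input for illustration, not a pat flag:

```python
import math

def min_replicas_for_spike(current_rps: float, spike_multiplier: float,
                           per_replica_capacity_rps: float) -> int:
    """Smallest replica count whose steady capacity covers the spiked demand."""
    demand = current_rps * spike_multiplier
    return math.ceil(demand / per_replica_capacity_rps)

# 50 req/s spiking 3x, assuming each replica sustains ~60 req/s
print(min_replicas_for_spike(50, 3.0, 60))  # -> 3
```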


Cost-Aware Analysis (v0.4.0 / v0.5.0)

pat cost — Performance-Per-Dollar Ranking

Cross-layer cost analysis showing hourly cost, cost-per-request, and ROI score for every replication layer.

pat cost \
  --requests-per-second 500 \
  --avg-latency-ms 80 \
  --current-layer container \
  --cost-per-container-hour 0.02 \
  --cost-per-pod-hour 0.05 \
  --cost-per-deployment-hour 0.10 \
  --cost-per-node-hour 0.50
| Layer | Replicas | Δ Throughput | Δ RT | Cost/hr | Cost/req | ROI score | Best ROI |
|---|---|---|---|---|---|---|---|
| container | 4 | +45.2% | -38.1% | $0.0800 | $0.000044 | 1027 | ✓ |
| pod | 3 | +42.0% | -31.0% | $0.1500 | $0.000083 | 504 | |
| deployment | 2 | +38.1% | -14.6% | $0.2000 | $0.000111 | 343 | |
| node | 1 | 0.0% | 0.0% | $0.5000 | $0.000278 | 0 | |

ROI score = throughput-gain-% / cost-per-request (higher = better performance-per-dollar).
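The arithmetic for the container row, assuming the cost-per-request term is scaled to dollars per 1,000 requests — the scaling that reproduces the table's magnitudes; the tool's exact normalisation may differ:

```python
def roi_score(throughput_gain_pct: float, cost_per_request: float) -> float:
    # Assumed scaling: cost expressed per 1,000 requests (illustrative).
    return throughput_gain_pct / (cost_per_request * 1000)

print(round(roi_score(45.2, 0.000044)))  # -> 1027, matching the container row
```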

pat analyze — Cost columns

Add --cost-per-replica-hour to include cost columns in --show-all output:

pat analyze --requests-per-second 500 --avg-latency-ms 80 \
    --current-layer container --show-all --cost-per-replica-hour 0.02

pat what-if — Trough revenue impact

Add --cost-per-request to see the estimated revenue cost of the HPA trough:

pat what-if --current-rps 50 --spike-rps 200 --avg-latency-ms 80 \
    --current-layer container --cost-per-request 0.001

Output includes:

  Missed reqs   ~1 350
  Trough cost   ~$1.35 revenue impact
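The revenue figure is just the missed-request count multiplied by --cost-per-request:

```python
# Revenue impact = missed requests during the trough x cost per request.
missed_requests = 1350      # from the what-if output above
cost_per_request = 0.001    # dollars, from --cost-per-request
impact = missed_requests * cost_per_request
print(f"Trough cost   ~${impact:.2f} revenue impact")  # ~$1.35
```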

pat slo — Min-cost layer that meets SLO

pat slo now shows a Cost/hr column and identifies the cheapest layer that satisfies your p99 target.


Cloud Billing Integration (v0.5.0)

Replaces manual --cost-per-*-hour flags with live AWS on-demand prices fetched from the public AWS Pricing API (no credentials required). Results are cached locally for 24 hours at ~/.pat/pricing-cache.json.

pat cost --cloud aws — EC2 instance pricing

pat cost \
  --requests-per-second 500 \
  --avg-latency-ms 80 \
  --current-layer container \
  --cloud aws \
  --region us-east-1 \
  --instance-type m5.large

Per-layer costs are derived from the full node price using packing ratios (16 containers/node, 8 pods/node).
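Given the stated packing ratios, the per-layer hourly costs fall out of the node price directly. A sketch — the $0.096/hr figure below is an illustrative m5.large on-demand price, not a value fetched by the tool, and the deployment ratio is not stated above, so it is omitted:

```python
def layer_costs_from_node(node_price_hr: float,
                          containers_per_node: int = 16,
                          pods_per_node: int = 8) -> dict:
    """Derive per-layer hourly costs from a full node price via packing ratios."""
    return {
        "container": node_price_hr / containers_per_node,
        "pod": node_price_hr / pods_per_node,
        "node": node_price_hr,
    }

print(layer_costs_from_node(0.096))  # container -> $0.006/hr, pod -> $0.012/hr
```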

pat cost --cloud aws --fargate — Fargate task pricing

pat cost \
  --requests-per-second 500 \
  --avg-latency-ms 80 \
  --current-layer container \
  --cloud aws \
  --region us-east-1 \
  --fargate \
  --vcpu 0.5 \
  --memory-gb 1.0

Container cost = 25 % of task price; pod = full task; deployment = 4×; node = 8×.
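A sketch of those multipliers applied to an hourly task price. The per-vCPU and per-GB rates below are illustrative placeholders, not the live prices the tool fetches:

```python
def fargate_layer_costs(vcpu: float, memory_gb: float,
                        vcpu_hr: float = 0.04048,
                        gb_hr: float = 0.004445) -> dict:
    """Per-layer costs from a Fargate task price: 25% / 1x / 4x / 8x."""
    task = vcpu * vcpu_hr + memory_gb * gb_hr  # hourly task price
    return {
        "container": 0.25 * task,
        "pod": task,
        "deployment": 4 * task,
        "node": 8 * task,
    }

costs = fargate_layer_costs(0.5, 1.0)
print(f"task ${costs['pod']:.4f}/hr, container ${costs['container']:.4f}/hr")
```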

pat demo --cloud aws — Demo with live pricing

Pass the same --cloud aws flags to pat demo to replace the default --cost-per-container-hour with live AWS prices:

pat demo --cloud aws --region us-east-1 --instance-type m5.large

Cache control

| Flag | Effect |
|---|---|
| (default) | Use cache if < 24 h old |
| --no-cache | Force a fresh API fetch |
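The 24-hour freshness rule needs nothing more than the cache file's modification time. A sketch of the logic (the package's actual implementation may differ):

```python
import time
from pathlib import Path

CACHE = Path.home() / ".pat" / "pricing-cache.json"
MAX_AGE_S = 24 * 3600  # 24 hours

def cache_is_fresh(no_cache: bool = False) -> bool:
    """True when the local pricing cache exists and is under 24 h old."""
    if no_cache or not CACHE.exists():
        return False  # --no-cache forces a fresh API fetch
    return (time.time() - CACHE.stat().st_mtime) < MAX_AGE_S
```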

Live Demonstrator

pat demo spins up real Docker containers and measures throughput, latency, and CPU across three replication variants, then outputs:

  1. A results table and PNG comparison chart
  2. An HPA Lag Projection — what happens if load spikes 3× (v0.3.0)
  3. A Cost Analysis panel — cost/req per variant and best-ROI layer (v0.5.0)

Requirements: Docker daemon running locally.

# Install with demo extras
pip install "presidio-hardened-arch-translucency[demo]"

# Run the demo (defaults: 4 replicas, 40 requests, 8 concurrent threads)
pat demo

# Custom run with cost override
pat demo --replicas 6 --requests 80 --concurrency 12 \
    --cost-per-container-hour 0.05 --output results.png

Variants compared:

| Variant | Description |
|---|---|
| 1 — Single container | Baseline: one container handles all traffic |
| 2 — N containers (round-robin) | Manual container-level replication, client-side LB |
| 3 — N workers + nginx | Simulated Kubernetes Deployment with nginx reverse proxy |

Example output:

╭───── Architectural Translucency — Measured Results ──────╮
│ Variant                    Workers  Throughput  Avg Lat   │
│ 1 — Single container            1        8.2    612 ms    │
│ 2 — 4 containers (round-robin)  4       28.7    167 ms ✓  │
│ 3 — nginx LB (4 workers)        5       22.4    213 ms    │
╰──────────────────────────────────────────────────────────╯

Architectural Translucency Insight:
  Manual container replication minimises coordination overhead…

╭──────────── HPA Lag Projection (if load spikes 3×) ─────────────╮
│ TROUGH  (0 s – 45 s)                                            │
│   Throughput    8.2 req/s  (9 % of spike demand)                │
│   p99 latency   4,896 ms                                        │
│   Missed reqs   ~3,321                                          │
│ STEADY STATE  (after 45 s — 3 replicas)                         │
│   Throughput    24.6 req/s                                      │
│   p99 latency   1,102 ms                                        │
│ → Set HPA minReplicas = 3 to eliminate the trough.              │
╰─────────────────────────────────────────────────────────────────╯

╭────────────────── v0.5.0 Cost Analysis ──────────────────────────╮
│ Best measured variant:  2 — 4 containers (round-robin)           │
│   Cost/req  $0.000077  ·  Cost/hr  $0.0800                       │
│ Analytical best-ROI layer:  container                            │
│   Replicas  4  ·  Throughput gain  +45.2%                        │
│   Cost/req  $0.000044  ·  Cost/hr  $0.0800  ·  ROI score  1027   │
╰──────────────────────────────────────────────────────────────────╯

Two PNG files are saved: demo-results.png (bar chart) and demo-results-hpa.png (3-panel HPA time-series).


Security — Presidio Hardening

This toolkit ships with mandatory Presidio security extensions:

| Feature | Description |
|---|---|
| Input sanitization | All workload parameters are bounds-checked and type-validated |
| Secure logging | Recommendations logged without sensitive data |
| CVE/dependency audit | pip-audit check on every run (--skip-audit to disable) |
| Security event logging | "Presidio architectural-translucency recommendation applied" emitted |
| Output sanitization | User-supplied values are never echoed raw into output |
| Dependabot | Automated dependency updates via .github/dependabot.yml |
| CodeQL | Static analysis via .github/workflows/codeql.yml |

CLI Reference

Usage: pat [OPTIONS] COMMAND [ARGS]...

Options:
  -V, --version         Show version and exit.
  -v, --verbose         Enable debug logging.
  --skip-audit          Skip the on-run CVE dependency audit.
  --help                Show this message and exit.

Commands:
  analyze   Analyze workload and recommend the optimal replication layer.
  what-if   Project throughput and latency during the HPA scale-out trough.
  slo       Check p99 latency SLO compliance across all four layers.
  cost      Rank layers by performance-per-dollar (ROI).
  demo      Run the live Docker demonstrator.

pat analyze Options:
  -r, --requests-per-second FLOAT   Observed workload in req/s  [required]
  -l, --avg-latency-ms FLOAT        Current average latency in ms  [required]
  -c, --current-layer TEXT          Current layer (container|pod|deployment|node)  [required]
  --show-all                        Show all layers in a comparison table

Theory: Architectural Translucency Model

The model is based on the replication performance equations from Stantchev's work:

Intensity after replication:

ι(δ) = rps/δ  +  α·rps  +  β·rps·ln(δ)

Throughput:

ω(δ) = min(base_capacity · δ · efficiency(δ), rps)
efficiency(δ) = 1 - α - β·ln(δ)

Response time (M/M/δ approximation):

RT(δ) = avg_latency / (1 - ρ)  +  coordination_overhead
ρ = ι(δ) / base_capacity

Where α (fixed overhead) and β (coordination cost) are layer-specific parameters calibrated for Docker/Kubernetes realities.

The cross-layer recommendation maximises ω(δ) gain while penalising response-time degradation — the central principle of architectural translucency.
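The equations above, transcribed into code. The α/β/base_capacity values in the usage line are illustrative, not the package's calibrated constants:

```python
import math

def throughput(rps, base_capacity, replicas, alpha, beta):
    """omega(delta) = min(base_capacity * delta * efficiency(delta), rps)."""
    eff = 1.0 - alpha - beta * math.log(replicas)
    return min(base_capacity * replicas * eff, rps)

def response_time(rps, avg_latency_ms, base_capacity, replicas, alpha, beta,
                  coordination_overhead_ms=0.0):
    """M/M/delta-style approximation: RT = avg_latency / (1 - rho) + overhead."""
    intensity = rps / replicas + alpha * rps + beta * rps * math.log(replicas)
    rho = intensity / base_capacity
    return avg_latency_ms / (1.0 - rho) + coordination_overhead_ms

# Container layer at 4 replicas, assuming alpha=0.02, beta=0.01 and an
# illustrative per-node base capacity of 180 req/s: throughput is capped
# by demand once capacity exceeds the offered 500 req/s.
print(throughput(500, 180, 4, 0.02, 0.01))
```

Note how the min() in ω(δ) demand-caps throughput, which is why the --show-all table earlier reports the same 500 req/s for every layer that has enough capacity.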


Development

uv venv .venv && source .venv/bin/activate
uv pip install -e ".[dev]"

# Format + lint
ruff format . && ruff check . --fix

# Tests with coverage
pytest

Roadmap

| Version | Theme |
|---|---|
| v0.1.0 | MVP — layer analysis & recommendation |
| v0.2.0 | Multi-Python CI hardening |
| v0.3.0 | HPA lag model (pat what-if, pat slo) |
| v0.4.0 | Cost-aware replication analysis (pat cost) |
| v0.5.0 | Cloud billing integration — AWS on-demand pricing |
| v0.6.0 | Cloud billing — reserved/spot + GCP + Azure |
| v0.7.0 | Autoresearch — pat demo observation + simple moving average predictions |
| v0.8.0 | Autoresearch — Prometheus integration + ARIMA time-series model |

Full design notes and feature details: PRESIDIO-REQ.md


License

MIT — see LICENSE.

References

  • V. Stantchev, "Effects of Replication on Web Service Performance in WebSphere," Technical Report, ICSI — International Computer Science Institute, Berkeley, CA, USA.
  • V. Stantchev, C. Schröpfer, "Negotiating and Enforcing QoS and SLAs in Grid and Cloud Computing," in Advances in Grid and Pervasive Computing (GPC 2009), Lecture Notes in Computer Science, vol. 5529, Springer, 2009.
  • V. Stantchev, M. Malek, "Architectural translucency in service-oriented architectures," IEE Proceedings — Software, vol. 153, no. 1, pp. 31–37, 2006. DOI: 10.1049/ip-sen:20050017
