Barkhausen stability monitor for AI agent loops. Real-time loop-gain (Aβ) monitoring with five named threshold bands, best-so-far rollback, and ETA prediction.
Reason this release was yanked:
Stale build with incorrect README metadata; superseded by 0.1.1
LoopGain
Barkhausen stability monitor for AI agent verify-revise loops.
Replace `max_iterations=5` with a real-time loop-gain (Aβ) monitor that knows whether your agent loop is converging, stalling, oscillating, or diverging — and what to do in each case.
Why
Production agent loops almost universally use `max_iterations=N` as their termination policy. It's the embarrassing default of agentic AI: either you waste compute (the loop stops too late) or you ship bad output (the loop stops too early). LoopGain replaces it with a control-theoretic stability monitor based on the Barkhausen criterion, a foundational result from electrical-engineering feedback-oscillator analysis (1921).
The math is foundational. The product is the threshold bands, the best-so-far buffer, the ETA prediction, and the clean Python API.
Install
```
pip install loopgain
```
Pure Python, no dependencies, supports Python 3.10+.
Usage
Three lines of code wrap any verify-revise loop:
```python
from loopgain import LoopGain

lg = LoopGain(target_error=0.1)

while lg.should_continue():
    errors = verifier.verify(output)
    lg.observe(errors, output=output)
    output = reviser.revise(output, errors)

result = lg.result
print(result.outcome)                # "converged" | "oscillating" | "diverged" | "max_iterations"
print(result.best_output)            # the lowest-error iteration's output
print(result.iterations_used)
print(result.gain_margin)            # 1 / max(Aβ_smooth)
print(result.savings_vs_fixed_cap)
```
`observe()` accepts either a numeric error magnitude or any sequence (whose length becomes the magnitude). Pass `output=...` to enable the best-so-far buffer.
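The two accepted forms reduce to a single magnitude. A minimal sketch of that contract (the `error_magnitude` helper is illustrative, not the library's internal function):

```python
def error_magnitude(errors):
    """Numbers are used directly; sequences contribute their length."""
    if isinstance(errors, (int, float)):
        return float(errors)
    return float(len(errors))

error_magnitude(0.4)                              # 0.4
error_magnitude(["missing import", "type error"])  # 2.0
```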
How it works
LoopGain measures empirical loop gain Aβ = E(n) / E(n-1) at every iteration. It smooths Aβ with a configurable EMA and classifies the result into five named bands:
| Aβ_smooth range | State | Action |
|---|---|---|
| < 0.3 | FAST_CONVERGE | Continue, predict ETA |
| 0.3 ≤ Aβ < 0.85 | CONVERGING | Continue, watch for upward drift |
| 0.85 ≤ Aβ < 0.95 | STALLING | Warn — diminishing returns |
| 0.95 ≤ Aβ ≤ 1.05 | OSCILLATING | Break — return best-so-far |
| > 1.05 | DIVERGING | Abort — roll back to best-so-far |
Plus a short-circuit: if observed error drops to or below `target_error`, the loop stops immediately with state `TARGET_MET`.
The ±0.05 noise band around Aβ=1 absorbs stochastic jitter from agent outputs without triggering false-positive aborts. The 0.85 STALLING boundary is an early warning — by the time Aβ crosses 1.0, you've already wasted iterations.
These threshold defaults work well for typical agent loops out of the box. Tune them per domain (via the `thresholds` argument, which takes a custom `ThresholdBands`) once you have production traces.
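Under the default thresholds above, the classification step can be sketched as a plain function (illustrative only — `classify` is not part of the public API):

```python
def classify(ab_smooth: float) -> str:
    """Map a smoothed loop gain onto the default threshold bands."""
    if ab_smooth < 0.3:
        return "FAST_CONVERGE"
    if ab_smooth < 0.85:
        return "CONVERGING"
    if ab_smooth < 0.95:
        return "STALLING"
    if ab_smooth <= 1.05:   # ±0.05 noise band around Aβ = 1
        return "OSCILLATING"
    return "DIVERGING"
```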
ETA prediction
When the loop is converging (Aβ_smooth < 1), LoopGain produces a closed-form prediction of iterations remaining:
```
n_remaining = log(E_target / E_current) / log(Aβ_smooth)
```
Available as `lg.eta` mid-loop. Returns `None` when the prediction isn't well-defined (no Aβ yet, target zero, or non-converging gain).
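For example, with E_current = 0.8, E_target = 0.1, and Aβ_smooth = 0.6, the formula gives log(0.125) / log(0.6) ≈ 4.07, i.e. about 5 more iterations. A hedged sketch (the `eta` helper and the round-up choice are illustrative, not the library's code):

```python
import math

def eta(e_current, e_target, ab_smooth):
    # Undefined when the target is zero/negative or the loop
    # is not converging (Aβ >= 1).
    if e_target <= 0 or not (0 < ab_smooth < 1):
        return None
    return math.ceil(math.log(e_target / e_current) / math.log(ab_smooth))

eta(0.8, 0.1, 0.6)   # 5
eta(0.8, 0.1, 1.02)  # None — gain is not converging
```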
Best-so-far rollback
LoopGain keeps a buffer of all observed outputs paired with their error scores. On termination it returns argmin(error), not the last iteration:
| Terminal state | Returned output |
|---|---|
| TARGET_MET | Current output (by definition, the best) |
| OSCILLATING | Lowest-error iteration in the buffer |
| DIVERGING | Lowest-error iteration (which is not the last one) |
This transforms divergence detection from "abort with garbage" into "abort with the best you've seen so far" — a free quality floor.
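The rollback logic is essentially an argmin over the (error, output) history. A toy sketch (not the library's internals; the draft strings are made up):

```python
# Toy history of (error, output) pairs, one per iteration.
history = [(0.9, "draft 1"), (0.3, "draft 2"), (0.7, "draft 3")]

# On OSCILLATING/DIVERGING termination, return the lowest-error entry,
# not the most recent one.
best_error, best_output = min(history, key=lambda pair: pair[0])
# best_output is "draft 2", even though "draft 3" came last
```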
API reference
`LoopGain(target_error=0.0, max_iterations=None, thresholds=None, smoothing_window=3, assumed_fixed_cap=10)`
Construct the monitor.
- `target_error` — Stop when an observed error drops to or below this. Default `0.0` means "never short-circuit on target met."
- `max_iterations` — Hard safety cap. Default `None` (rely on stability detection). Recommended ~20–50 for production.
- `thresholds` — Custom `ThresholdBands` if the defaults don't fit your domain.
- `smoothing_window` — EMA window for the smoothed Aβ. Default `3`.
- `assumed_fixed_cap` — Used to compute `savings_vs_fixed_cap`. Default `10`.
`lg.observe(errors, output=None) -> str`

Record this iteration's errors and optional output. Returns the current state name. `errors` accepts a number (used directly) or any sequence (its length is used as the magnitude).

`lg.should_continue() -> bool`

Returns `False` once a terminal state fires.

`lg.state -> str`

Current state name. One of `INIT`, `FAST_CONVERGE`, `CONVERGING`, `STALLING`, `OSCILLATING`, `DIVERGING`, `TARGET_MET`, `MAX_ITERATIONS`.

`lg.eta -> int | None`

Predicted iterations to reach target. `None` when not well-defined.

`lg.gain_margin -> float | None`

`1 / max(Aβ_smooth)`. A value greater than 1 means stable headroom across the entire run.

`lg.result -> LoopGainResult`

Terminal result with `outcome`, `iterations_used`, `best_index`, `best_output`, `best_error`, `convergence_profile`, `error_history`, `gain_margin`, `savings_vs_fixed_cap`. Safe to call mid-loop.

`lg.send_telemetry(endpoint, token, workload_id=None, timeout=2.0) -> bool`

Opt-in. Sends a single anonymized telemetry POST after the loop terminates. Best-effort — never raises; returns `True` on 2xx, `False` otherwise.
```python
lg.send_telemetry(
    endpoint="https://telemetry.loopgain.ai/v1/aggregate",  # or self-hosted
    token="your-token",              # bearer auth
    workload_id="my-rag-pipeline",   # opaque label
)
```
What is sent: state transitions, Aβ summary (min/max/median), gain margin, rollback flag, iterations used, savings, library version, optional opaque `workload_id`, threshold config, hour-bucketed timestamp.
What is NEVER sent: prompts, completions, error contents, output buffer, individual Aβ values, or any customer identity beyond the bearer token. The privacy contract is enforced by the payload-shape unit tests in `tests/test_telemetry.py`.
The Cascade-Systems-hosted endpoint at telemetry.loopgain.ai is one acceptable destination; the receiver code is open-source so customers can self-host to keep telemetry fully under their control.
Status
v0.1.0 — initial public release. Core library shipped. Framework adapters (LangGraph, CrewAI, AutoGen, Vesper) and the cloud-aggregator dashboard come in v0.2+. The math and the API surface are stable.
This is alpha software. The API may break before 1.0 if production usage surfaces design issues; pin the version.
License
Background
LoopGain applies the Barkhausen stability criterion (Heinrich Barkhausen, 1921 — the foundational result on when feedback amplifiers oscillate) to AI agent feedback loops. The criterion was originally a way to predict whether an electronic oscillator would sustain oscillation; it turns out to map cleanly onto any feedback loop you can attach an error signal to.
The cleanest summary: a verify-revise loop is a feedback system with measurable error magnitude. The ratio E(n) / E(n-1) is its empirical loop gain. The Barkhausen result tells you that a loop gain less than 1 converges, equal to 1 oscillates, and greater than 1 diverges. LoopGain operationalizes this: it classifies the loop's current band, decides what to do, and tells you when you'll converge.
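The two non-oscillating regimes fall out of simple geometric decay, E(n) = Aβⁿ · E(0). A quick simulation under a constant gain (illustrative; real agent loops have noisy, varying Aβ):

```python
def simulate(e0, gain, steps):
    """Error trajectory of a loop with constant empirical gain Aβ."""
    errors = [e0]
    for _ in range(steps):
        errors.append(errors[-1] * gain)
    return errors

converging = simulate(1.0, 0.6, 5)  # Aβ < 1: error shrinks toward 0
diverging = simulate(1.0, 1.2, 5)   # Aβ > 1: error grows without bound
```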
See loopgain.ai for the longer write-up.
Download files
File details
Details for the file loopgain-0.1.0.tar.gz.
File metadata
- Download URL: loopgain-0.1.0.tar.gz
- Upload date:
- Size: 26.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `f7d5e49063764fa7a7084283a0be53de92b557b6f545060bedbc8b17da5ec8fb` |
| MD5 | `6481ae9dfe7bf687d77a2bd8024cb230` |
| BLAKE2b-256 | `68191599ee27220b6bc3b5296a465a7d47e1f07556ab33dc1c19d2f0995b115c` |
File details
Details for the file loopgain-0.1.0-py3-none-any.whl.
File metadata
- Download URL: loopgain-0.1.0-py3-none-any.whl
- Upload date:
- Size: 16.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `6a7d01861c0d940c7860c61d202c13cbcf79e89945033383ad2d7a0d29b9d274` |
| MD5 | `7f84da67f0ae7ad096aeba3ad9d71047` |
| BLAKE2b-256 | `8ebf28ee979a99f102bd6afca503c153af9fe3172b3d38cac6e7ab14517f692f` |