Python client for alphainfo.io — Structure-aware analysis for any time series
alphainfo
Python client for the alphainfo Structural Intelligence API.
Detect structural regime changes in any time series — biomedical signals, financial markets, energy grids, seismic data, IoT sensors, network traffic, ML drift. One API, no model training, no per-domain tuning. Every analysis ships with an audit trail.
▶ Try it in Google Colab (2 min, no install) — fetches real SPY data, detects the March 2020 regime change, visualizes the result.
30-second try
Step 1 — Get a free API key (50 analyses/month, no credit card).
Step 2 — Install and analyze a signal:
```
pip install alphainfo
```
```python
import math

from alphainfo import AlphaInfo

client = AlphaInfo(api_key="ai_...")  # your free key

# Any time series — here, a toy sine with a regime change
signal = [math.sin(i / 10) for i in range(200)] + [math.sin(i / 10) * 3 for i in range(200)]

result = client.analyze(signal=signal, sampling_rate=100.0)
print(result.confidence_band)   # 'stable' | 'transition' | 'unstable'
print(result.structural_score)  # 0.0 (changed) → 1.0 (preserved)
print(result.analysis_id)       # UUID for audit replay
```
That's it. You just ran a structural analysis. 🚀
What to try next: client.fingerprint() for a 5D similarity vector, client.analyze_batch() for up to 100 signals in one call, or client.guide() for the full encoding guide (no key needed).
Installation
```
pip install alphainfo

# Optional: enable HTTP/2 for better throughput on concurrent calls
pip install alphainfo[http2]
```
Requires Python 3.8+. Core dependency: httpx.
Full examples
1. Get your API key
alphainfo.io/register — free tier: 50 analyses/month, no credit card required. Starter paid plans from $49/mo.
2. Analyze a signal
```python
from alphainfo import AlphaInfo

client = AlphaInfo(api_key="ai_your_key")

# Any time series: ECG, market prices, sensor readings, power grid...
result = client.analyze(
    signal=[1.2, 1.3, 1.1, 2.8, 3.1, 3.0, ...],
    sampling_rate=250.0,
    domain="biomedical",
)

if result.change_detected:
    print(f"Regime change detected! Band: {result.confidence_band}")
    print(f"Structural score: {result.structural_score:.3f}")
    print(f"Audit ID: {result.analysis_id}")
```
3. Structural fingerprint (fast path)
```python
# Extract the 5D structural fingerprint — skips semantic + multiscale for speed
fp = client.fingerprint(signal=data, sampling_rate=250.0, domain="biomedical")
print(fp.structural_score)  # 0.0 to 1.0
print(fp.confidence_band)   # 'stable', 'transition', 'unstable'

# Always guard before indexing — the fingerprint is None for signals
# the engine can't decompose (too short, constant, etc).
if fp.is_complete:
    print(fp.vector)  # 5D list of floats, each in [0, 1]
else:
    print(f"unavailable: {fp.fingerprint_reason}")

# Use .vector for nearest-neighbor search / ANN indexing — skip incomplete ones
from sklearn.neighbors import NearestNeighbors

vectors = [fp.vector for s in signal_corpus
           if (fp := client.fingerprint(s, 250.0)).is_complete]
nn = NearestNeighbors(n_neighbors=5).fit(vectors)
```
Minimum signal length for a complete fingerprint:
| Case | Minimum samples | Constant |
|---|---|---|
| No baseline | 192 | alphainfo.MIN_FINGERPRINT_SAMPLES |
| With baseline | 50 | alphainfo.MIN_FINGERPRINT_SAMPLES_WITH_BASELINE |
Below those thresholds, fingerprint_available comes back False with
fingerprint_reason="signal_too_short", and the SDK emits a UserWarning
at call time. For shorter inputs, use client.analyze() — it still
returns a structural_score and confidence_band, just not the 5D vector.
See examples/fingerprint_handling.py for
a fuller pattern (falls back to the semantic layer when a fingerprint is
unavailable).
4. Batch analysis
```python
# Analyze up to 100 signals in one call
batch = client.analyze_batch(
    signals=[signal_1, signal_2, signal_3],
    sampling_rate=1000.0,
    domain="sensors",
)

for item in batch.results:
    if item.success:
        print(f"Signal {item.index}: {item.confidence_band} ({item.structural_score:.3f})")
    else:
        print(f"Signal {item.index}: error — {item.error}")
```
5. Semantic layer (severity, trend, alerts)
```python
result = client.analyze(
    signal=data, sampling_rate=1.0,
    include_semantic=True,
    baseline=calm_period,
)

if result.semantic:
    print(result.semantic.alert_level)         # 'normal', 'attention', 'alert', 'critical'
    print(result.semantic.severity)            # 'none', 'low', 'moderate', 'high', 'critical'
    print(result.semantic.severity_score)      # 0-100 (higher = more severe)
    print(result.semantic.trend)               # 'stable', 'diverging', 'monitoring'
    print(result.semantic.summary)             # "⚠️ Structural divergence detected (severity: high)"
    print(result.semantic.recommended_action)  # 'log_only', 'monitor', 'human_review', 'immediate_human_review'

# Short signal warning (< 100 samples)
if result.warning:
    print(result.warning)  # "Signal has only 30 samples..."
```
Severity thresholds:
| severity | severity_score | Meaning |
|---|---|---|
| none | 0-15 | No structural degradation |
| low | 16-35 | Minor deviation, monitor |
| moderate | 36-65 | Notable change, investigate |
| high | 66-85 | Significant regime shift |
| critical | 86-100 | Severe structural breakdown |
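As a quick illustration of the thresholds above, here is a tiny helper that maps a severity_score to its named band (hypothetical client-side code, not part of the SDK; the API returns semantic.severity directly):

```python
def severity_band(score: float) -> str:
    """Map a 0-100 severity_score to its band, per the thresholds table."""
    if score <= 15:
        return "none"
    if score <= 35:
        return "low"
    if score <= 65:
        return "moderate"
    if score <= 85:
        return "high"
    return "critical"

print(severity_band(12))  # none
print(severity_band(70))  # high
```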
6. Multi-channel (vector) analysis with per-channel baselines
```python
# Multi-lead ECG, multi-axis accelerometer, cross-asset finance...
vector = client.analyze_vector(
    channels={
        "lead_I": ecg_lead_1,
        "lead_II": ecg_lead_2,
        "lead_III": ecg_lead_3,
    },
    sampling_rate=360.0,
    domain="biomedical",
)

print(f"Aggregated score: {vector.structural_score:.3f}")
print(f"Composite band: {vector.confidence_band}")
for name, ch in vector.channels.items():
    print(f"  {name}: {ch.confidence_band} (score={ch.structural_score:.3f})")

# With per-channel baselines (e.g. calm period reference)
vector = client.analyze_vector(
    channels={"SPY": spy_data, "VIX": vix_data, "GLD": gld_data},
    sampling_rate=1.0,
    baselines={"SPY": spy_calm, "VIX": vix_calm, "GLD": gld_calm},
)
```
7. Audit trail
```python
# Replay any past analysis
replay = client.audit_replay("550e8400-e29b-41d4-a716-446655440000")
print(f"Original score: {replay.output['structural_score']}")

# List recent analyses
history = client.audit_list(limit=10)
for entry in history:
    print(f"{entry.analysis_id} — {entry.structural_score}")
```
8. API guide (discoverability)
```python
# Fetch the full encoding guide — endpoints, patterns, tips, debugging
guide = client.guide()
print(guide["version"])    # "1.1"
print(list(guide.keys()))  # all available sections

# Common mistakes
for m in guide["common_mistakes"]:
    print(f"- {m['mistake']}: {m['fix']}")

# Which endpoint to use
for name, info in guide["endpoints"].items():
    print(f"{name}: {info.get('path', '')} — {info.get('when', '')}")
```
9. Version and compatibility
```python
info = client.version()
print(info["api_version"])                        # "2.3.0"
print(info["sdk_compat"]["recommended_version"])  # "1.5.21"
print(info["features"])                           # dict of supported features
print(info["limits"]["max_batch_size"])           # 100
```
Async Support
```python
from alphainfo import AsyncAlphaInfo

async with AsyncAlphaInfo(api_key="ai_your_key") as client:
    result = await client.analyze(signal=data, sampling_rate=250.0)
    fp = await client.fingerprint(signal=data, sampling_rate=250.0)
```
All methods available on AlphaInfo are also available on AsyncAlphaInfo.
Error Handling
```python
from alphainfo import AlphaInfo, AuthError, RateLimitError, ValidationError

client = AlphaInfo(api_key="ai_your_key")

try:
    result = client.analyze(signal=data, sampling_rate=250.0)
except AuthError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after}s")
except ValidationError as e:
    print(f"Invalid input: {e.message}")
```
Exception hierarchy:
| Exception | HTTP Code | When |
|---|---|---|
| AuthError | 401 | Invalid or missing API key |
| ValidationError | 400, 413 | Bad input or signal too large |
| RateLimitError | 429 | Quota or concurrency limit exceeded |
| NotFoundError | 404 | Analysis ID not found (audit) |
| APIError | 5xx | Server error |
| TimeoutError | — | Request timed out after retries |
| NetworkError | — | Connection failed |
All inherit from AlphaInfoError.
Configuration
```python
client = AlphaInfo(
    api_key="ai_your_key",
    base_url="https://www.alphainfo.io",  # default
    timeout=30.0,          # seconds (default)
    max_retries=3,         # automatic retry on transient errors
    retry_base_delay=1.0,  # initial backoff delay (seconds)
    retry_max_delay=32.0,  # max delay between retries (seconds)
    http2=None,            # auto-detect (True if h2 installed)
)
```
The client automatically retries on:
- Network timeouts and connection errors
- HTTP 429 (rate limits) — respects the Retry-After header
- HTTP 5xx (server errors)

Non-retryable errors (400, 401, 404) are raised immediately.
Backoff is exponential: retry_base_delay * 2^attempt, capped at retry_max_delay.
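For reference, the delay schedule those settings produce can be sketched in a few lines (illustrative only; the SDK computes this internally):

```python
def backoff_delays(max_retries=3, base=1.0, cap=32.0):
    """Delay before retry attempt n: base * 2**n, capped at `cap`."""
    return [min(base * 2 ** attempt, cap) for attempt in range(max_retries)]

print(backoff_delays())            # [1.0, 2.0, 4.0]
print(backoff_delays(6, cap=8.0))  # [1.0, 2.0, 4.0, 8.0, 8.0, 8.0]
```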
Rate Limit Info
```python
result = client.analyze(signal=data, sampling_rate=250.0)

info = client.rate_limit_info
if info:
    print(f"Remaining: {info.remaining}/{info.limit}")
```
Signal Size Guide
| Samples | Behavior | Recommendation |
|---|---|---|
| < 10 | Rejected (422) | Hard minimum |
| 10-49 | Returns 0.5 + warning | Too short for multiscale |
| 50-99 | Returns 0.5 + warning | Limited confidence |
| 100-199 | Variable scores | Detection active, less reliable |
| 200-500 | Reliable scores | Recommended range |
| 500+ | Reliable, may dilute point events | Use windowing for point detection |
Note: sampling_rate controls multiscale window sizing but does not change scores for a given signal. For daily financial data use sampling_rate=1.0; for ECG at 250Hz use sampling_rate=250.0.
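For 500+ sample signals, where a point event can be diluted, the windowing mentioned above is a few lines of plain Python (the size and step values here are illustrative; choose them per the table):

```python
def window(signal, size=300, step=150):
    """Overlapping windows of `size` samples, advancing by `step`."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

parts = window(list(range(1000)))
print(len(parts))      # 5 windows
print(len(parts[0]))   # 300

# Each window lands in the reliable 200-500 sample range, ready for one
# batched call (max 100 signals per call):
# batch = client.analyze_batch(signals=parts, sampling_rate=250.0)
```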
Amplitude invariance — what scale changes mean
The engine measures internal structure, not absolute scale. The behavior depends on whether you provide a baseline:
| Mode | Behavior | Why |
|---|---|---|
| No baseline (single signal) | Largely amplitude-invariant: analyze(amp×0.5) and analyze(amp×2.0) of the same shape return nearly identical structural_score. | Curvatures normalize against the signal's own statistics — uniform gain doesn't change shape. |
| With baseline (observation vs baseline) | Amplitude DOES register: comparing amp=1.2 against an amp=1.0 baseline drops the score to ~0.87 in our regression suite — the engine reports a real level/scale shift. | A baseline anchors absolute scale, so a uniform gain becomes a measurable structural deviation. |
If you want amplitude-invariance in baseline mode (e.g., comparing two ECG
recordings from different machines): z-normalize or min-max normalize both
signals before sending. The recipes layer's feature_ensemble includes
z_norm as a default channel for exactly this reason.
```python
import numpy as np

def z_norm(x):
    return (np.array(x) - np.mean(x)) / (np.std(x) + 1e-9)

result = client.analyze(
    signal=z_norm(observation),
    baseline=z_norm(historical),
    domain="biomedical",
)
```
The full guarantee text is in client.guide()["deterministic_guarantees"].
Domains
| Domain | Use case |
|---|---|
| generic | Default — works for any signal |
| biomedical | ECG, EEG, EMG, SpO2 |
| finance | Market prices, returns, volume |
| power_grid | Power grid frequency, load (aliases: energy, power, grid) |
| seismic | Earthquakes, vibration sensors (alias: earthquake) |
| sensors | IoT, industrial machinery, SCADA (aliases: iot, industrial) |
| ai_ml | Model drift, data quality (aliases: mlops, ml, ai) |
| security | Network traffic, intrusion (alias: cyber) |
| traffic | Network / urban traffic flow (aliases: network, net) |
Aliases are auto-resolved. Every analyze endpoint accepts both canonical names and the registered aliases — pass domain="energy", "mlops", "industrial", "cyber", "fintech", "biomed", "network", etc., and the server resolves them to the canonical domain (no HTTP 400). The live alias map is exposed in client.version() under domains.aliases for inspection.
Guides
All guide content is available programmatically via client.guide() and the live API at GET /v1/guide:
```python
guide = client.guide()     # returns all 15 sections, no auth required
guide["common_mistakes"]   # 10 pitfalls with symptoms and fixes
guide["performance_tips"]  # fast mode, batch vs loop, HTTP/2, retry tuning
guide["debugging_tips"]    # step-by-step troubleshooting + error hierarchy
guide["endpoints"]         # all endpoints — when to use, latency, quota cost
```
Full markdown versions are also included in the installed package under alphainfo/guides/.
Recipes — higher-level patterns
The API surface is intentionally minimal: analyze, analyze_batch, analyze_vector, fingerprint. For composed patterns, the recipes/ directory in the public repo ships reference implementations:
| Recipe | What it does |
|---|---|
| feature_ensemble | Encode a signal multiple ways (raw, z_norm, rms, spectrum, autocorr, histogram, ...) and run them as channels of analyze_vector. Per-channel scores reveal which axis changed. |
| windowed | Slide a window over a long signal and analyze_batch each — finds where in the signal the change happened. |
| parameter_search | Grid → analyze_batch with observation as baseline → top-3 contains the truth. Calibration without gradients. |
| schema_drift | Hash JSON paths into a frequency vector and analyze it — detects field/type drift in event streams. |
| motif_search | Slide a window of motif length, batch each against the motif → find a known pattern in a long history. |
| auto_diagnose | Probe library + benign-control gate — answers "what KIND of change happened?" Domain-specific probe libraries for finance, biomedical, sensors (industrial), ai_ml (mlops drift) and security (SOC/log anomalies). |
| event_grammar | n-gram / transition encoding for log/event streams — detects grammar drift independent of token values. |
| intents | User-intent → recipe dispatcher. dispatch(intent="regime_change", domain="finance") chains windowed + auto_diagnose with finance probes. |
| encoding_guide ⚡ (meta-recipe) | "I have a signal — which encoder do I use?" discover_encoding(signal) profiles your data (range, kurtosis, periodicity, trend, categorical-likeness) and recommends ranked encoders with reasoning. auto_encode() applies them via feature_ensemble. Works for ANY 1-D signal, not just the 5 calibrated verticals. See docs/ENCODING_GUIDE.md. |
Walkthroughs end-to-end (no API key, in-process):
```
# Detect the COVID-19 crash on real SPY data
python -m recipes.notebooks.finance_walkthrough

# Diagnose 3 known events in a synthetic ECG capture
python -m recipes.notebooks.ecg_walkthrough

# Same pipeline on real PhysioNet MIT-BIH records (records 100 + 200)
python -m recipes.notebooks.ecg_physionet_walkthrough

# Diagnose 3 motor-condition snapshots vs commissioning baseline
python -m recipes.notebooks.vibration_walkthrough

# Long tail: discover the right encoder for ANY 1-D signal you have
python -m recipes.notebooks.encoding_walkthrough
```
- Manifest: GET /v1/recipes (live JSON)
- Full prose docs: docs/RECIPES.md
- Encoding guide: docs/ENCODING_GUIDE.md
- HTML overview: alphainfo.io/recipes
Recipes are NOT part of the SDK contract — they live in the open repo as copy/adapt-friendly reference implementations. The right encoding / windowing decisions are domain-specific; locking them into the SDK would either bloat the API surface or apply transforms users didn't ask for.
Links
- API Documentation
- Recipes — composed patterns + walkthroughs
- Benchmarks
- Dashboard
License
MIT
File details
Details for the file alphainfo-1.5.21.tar.gz.
File metadata
- Download URL: alphainfo-1.5.21.tar.gz
- Upload date:
- Size: 81.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a30fc98fbec1d5f6ec52baf21f0197ed8827fc2956244549e7872c007727d465 |
| MD5 | 61fe22773a97e642fb9c57f0c628df68 |
| BLAKE2b-256 | 8a193a542252a1d517e70f821c01026ccb7f4fc2c6e6a1cd0f6e5f1b7e4bdf37 |
File details
Details for the file alphainfo-1.5.21-py3-none-any.whl.
File metadata
- Download URL: alphainfo-1.5.21-py3-none-any.whl
- Upload date:
- Size: 39.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2e628a83a290493e25191078a0873c604497ada3fab9f2cf4df2ed2fd2c21559 |
| MD5 | 319b259bfb1f8761f809df4ba0ebfda8 |
| BLAKE2b-256 | c8daf9fbe7f4e770bff2f7a6941e6797c6ceadda3261430d0a4d67184cf3bfc0 |