chrono-correlator

Statistical correlation between time-series and discrete events with optional LLM narration
A generic statistical engine that correlates time-series data with discrete events using Mann-Whitney U, and narrates results with an LLM only when p < 0.05.

Install

# Core (statistics only — no LLM required)
pip install chrono-correlator

# With specific LLM provider
pip install chrono-correlator[groq]
pip install chrono-correlator[anthropic]
pip install chrono-correlator[ollama]      # local, no API key

# Everything
pip install chrono-correlator[all]

Quick start

from datetime import datetime, timedelta
from chrono_correlator import Event, Metric, evaluate, narrate

base = datetime(2024, 1, 1)

events = [
    Event(timestamp=base + timedelta(days=d), label="migraine")
    for d in [10, 20, 30]
]

timestamps = [base + timedelta(hours=h) for h in range(800)]
values = [55.0] * 800
for day in [10, 20, 30]:
    for h in range(48):
        idx = day * 24 - 48 + h
        if 0 <= idx < 800:
            values[idx] = 28.0

hrv = Metric(name="hrv", timestamps=timestamps, values=values)

report = evaluate(events, [hrv])
print(f"Level: {report.level} ({report.active_signals}/{report.total_signals} signals)")

if report.level != "green":
    report = narrate(report, provider="groq")
    print(report.narrative)

From a pandas DataFrame

import pandas as pd
from chrono_correlator import Metric

df = pd.read_csv("hrv_data.csv")   # columns: timestamp, value
hrv = Metric.from_dataframe(df, name="hrv", timestamp_col="timestamp", value_col="value")

Lag sweep — find the best anticipatory window automatically

from chrono_correlator import find_best_lag

results = find_best_lag(events, hrv, lag_range=range(0, 72, 6))

best = max(results, key=lambda k: results[k].causality_score)
print(f"Strongest signal at lag={best}h — causality={results[best].causality_score:.2f}")

Bootstrap confidence interval for effect size

report = evaluate(events, [hrv], bootstrap_ci=True)   # ~1s per metric
r = report.results[0]
print(f"Effect: {r.effect_size:.3f}  95% CI: [{r.effect_ci[0]:.3f}, {r.effect_ci[1]:.3f}]")

If the CI excludes 0, the effect is unlikely to be sampling noise.
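The package's bootstrap internals aren't shown here, but the idea is a standard percentile bootstrap; a minimal sketch using numpy, with a difference in means as a stand-in effect measure (not the package's rank-biserial implementation):

```python
import numpy as np

def bootstrap_ci(pre, base, n_boot=1000, seed=0):
    # Resample both windows with replacement, collect the effect statistic;
    # the 2.5th/97.5th percentiles of the resamples form the 95% CI.
    rng = np.random.default_rng(seed)
    diffs = [
        rng.choice(pre, size=len(pre), replace=True).mean()
        - rng.choice(base, size=len(base), replace=True).mean()
        for _ in range(n_boot)
    ]
    return np.percentile(diffs, [2.5, 97.5])

rng = np.random.default_rng(1)
pre = rng.normal(28.0, 2.0, 48)       # depressed pre-event window
base = rng.normal(55.0, 2.0, 672)     # 28-day hourly baseline
lo, hi = bootstrap_ci(pre, base)      # interval well below 0: a real drop
```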

Seasonal baseline correction

# Compare pre-event window only against same day of the week in the baseline
# Eliminates false positives caused by weekly patterns (e.g. traffic every Friday)
report = evaluate(events, metrics, baseline_strategy="same_weekday")

# Compare against same hour of the day — for circadian metrics (HRV, temperature)
report = evaluate(events, metrics, baseline_strategy="same_hour")
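Conceptually, `same_weekday` restricts the baseline to samples on the event's weekday before comparing. A rough sketch of that filtering (the helper name is hypothetical, not part of the package API):

```python
from datetime import datetime, timedelta

def same_weekday_baseline(event_ts, timestamps, values, baseline_days=28):
    # Keep only baseline samples that fall on the same weekday as the
    # event, within the lookback window [event - baseline_days, event).
    start = event_ts - timedelta(days=baseline_days)
    return [
        v for t, v in zip(timestamps, values)
        if start <= t < event_ts and t.weekday() == event_ts.weekday()
    ]

event_ts = datetime(2024, 2, 1)                      # a Thursday
ts = [event_ts - timedelta(hours=h) for h in range(1, 673)]
vals = [float(h) for h in range(1, 673)]
kept = same_weekday_baseline(event_ts, ts, vals)     # 4 Thursdays x 24 h
```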

Directional analysis

# Only flag metrics that DROP before events (e.g. HRV decrease before migraine)
report = evaluate(events, metrics, direction="decrease")

# Only flag metrics that RISE before events (e.g. heart rate spike before incident)
report = evaluate(events, metrics, direction="increase")

Custom significance thresholds

from chrono_correlator import SignificanceConfig

cfg = SignificanceConfig(alpha=0.01, strong_effect=0.35, strong_consistency=0.75)
report = evaluate(events, metrics, config=cfg)

Overlapping event windows

When two events are closer together than lookback_hours, evaluate() emits a UserWarning automatically:

UserWarning: Events 'migraine' (2024-01-10) and 'migraine' (2024-01-11) are 24h apart —
pre-event windows overlap (lookback=48h). Pooled results may be inflated.
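The check itself is plain datetime arithmetic; a standalone sketch, using a stand-in `Event` tuple rather than the package class:

```python
from collections import namedtuple
from datetime import datetime, timedelta

Event = namedtuple("Event", ["timestamp", "label"])  # stand-in for illustration

def overlapping_pairs(events, lookback_hours=48):
    # Consecutive events closer together than the lookback
    # share pre-event samples, so pooled stats may be inflated.
    evs = sorted(events, key=lambda e: e.timestamp)
    window = timedelta(hours=lookback_hours)
    return [
        (a, b) for a, b in zip(evs, evs[1:])
        if b.timestamp - a.timestamp < window
    ]

base = datetime(2024, 1, 1)
events = [
    Event(base, "migraine"),
    Event(base + timedelta(hours=24), "migraine"),   # overlaps the previous
    Event(base + timedelta(hours=200), "migraine"),  # far enough away
]
pairs = overlapping_pairs(events)
```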

Persistence — save and reload reports

from chrono_correlator import save_report, load_reports
from datetime import datetime, timedelta

# Save to SQLite (stdlib, no extra dependencies)
row_id = save_report(report, db_path="chrono.db")

# Load all reports
history = load_reports("chrono.db")

# Filter by level or time window
alerts = load_reports("chrono.db", level="red")
recent = load_reports("chrono.db", since=datetime.now() - timedelta(days=7))

Export to HTML and Markdown

from chrono_correlator import export_html, export_markdown

export_html(report, "report.html")         # self-contained HTML with table + narratives
export_markdown(report, "report.md")       # GitHub-ready Markdown — paste into issues/PRs

LLM narration with audit trail

# Every LLM call is logged to a JSONL file: stats + prompt + response
# Required for audits in regulated environments (health, industry)
report = narrate(report, provider="groq", audit_log="audit.jsonl")

Each audit entry:

{
  "ts": "2024-06-01T14:23:11",
  "metric": "hrv",
  "stats": {"p_value": 0.003, "effect_size": -0.41, "causality_score": 0.68, ...},
  "prompt": "CALCULATED statistical data...",
  "response": "Pattern detected in HRV before the event."
}

Continuous monitoring (no events needed)

from datetime import datetime
from chrono_correlator import monitor, loop, save_report, export_html

# Single evaluation at now()
report = monitor(metrics, narrate=False)

# Infinite loop — calls on_alert when level is yellow or red
def alert_handler(report):
    save_report(report)
    export_html(report, f"alert_{datetime.now():%Y%m%d_%H%M}.html")

loop(metrics_fn=lambda: metrics, interval_seconds=3600, on_alert=alert_handler)

CLI

chrono analyze metrics.csv events.csv --name hrv --correction fdr
chrono analyze metrics.csv events.csv --json
chrono analyze metrics.csv events.csv --direction decrease --baseline-strategy same_weekday
chrono analyze metrics.csv events.csv --narrate --provider anthropic

Custom LLM provider

from chrono_correlator import BaseNarrator

class MyNarrator(BaseNarrator):
    def generate(self, prompt: str) -> str:
        # call any local or remote model
        ...

report = MyNarrator().narrate(report)

Adapter recipes — connect live sources without built-in connectors

Prometheus

import requests
from datetime import datetime, timedelta
from chrono_correlator import Metric

def prometheus_metric(query: str, url: str = "http://localhost:9090") -> Metric:
    end = datetime.now()
    start = end - timedelta(days=35)
    r = requests.get(f"{url}/api/v1/query_range", params={
        "query": query, "start": start.timestamp(),
        "end": end.timestamp(), "step": "1h",
    })
    data = r.json()["data"]["result"][0]["values"]
    return Metric(
        name=query,
        timestamps=[datetime.fromtimestamp(float(t)) for t, _ in data],
        values=[float(v) for _, v in data],
    )

cpu = prometheus_metric("rate(node_cpu_seconds_total[5m])")
report = evaluate(events, [cpu])

InfluxDB

from influxdb_client import InfluxDBClient
from chrono_correlator import Metric

def influx_metric(bucket: str, measurement: str, field: str, url: str, token: str) -> Metric:
    client = InfluxDBClient(url=url, token=token, org="my-org")
    query = f'from(bucket:"{bucket}") |> range(start:-35d) |> filter(fn:(r) => r._measurement == "{measurement}" and r._field == "{field}")'
    tables = client.query_api().query(query)
    rows = [(r.get_time(), r.get_value()) for table in tables for r in table.records]
    return Metric(name=field, timestamps=[t for t, _ in rows], values=[v for _, v in rows])

Watching a live CSV file

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
from chrono_correlator import Metric
import pandas as pd

class CsvWatcher(FileSystemEventHandler):
    def __init__(self, path: str, name: str, on_update):
        self.path, self.name, self.on_update = path, name, on_update

    def on_modified(self, event):
        if event.src_path == self.path:
            df = pd.read_csv(self.path)
            metric = Metric.from_dataframe(df, name=self.name)
            self.on_update(metric)

# Wire it up (paths are illustrative): re-read the metric on every change
observer = Observer()
observer.schedule(CsvWatcher("live.csv", "hrv", on_update=print), path=".")
observer.start()

Generic REST API

import requests
from datetime import datetime
from chrono_correlator import Metric

def api_metric(url: str, name: str, ts_field="timestamp", val_field="value") -> Metric:
    data = requests.get(url).json()
    return Metric(
        name=name,
        timestamps=[datetime.fromisoformat(row[ts_field]) for row in data],
        values=[float(row[val_field]) for row in data],
    )

Interactive notebook

examples/dashboard.ipynb — full pipeline with matplotlib visualizations, lag sweep chart, and bootstrap CI plot. No UI server required.

Key finding: p-value alone is not enough

Statistical significance (p < 0.05) can appear in large samples even when no real pattern exists. Effect size and consistency are what separate real signals from statistical noise.

Dataset        p-value   Effect   Consistency   Causality score   Signal
Real pattern   < 0.001    0.289      0.86            0.64         strong
Flat metrics    0.09*    -0.005      ~0.4            ~0.2         none
Shuffled        0.55      0.000      ~0.5            0.25         none

* p < 0.05 in some metrics due to large sample size — effect size and consistency correctly identify these as noise.
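The table's point is easy to reproduce with generic scipy (nothing package-specific): with enough samples, a negligible shift reaches significance while the rank-biserial effect stays near zero.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
a = rng.normal(0.00, 1.0, 20_000)
b = rng.normal(0.05, 1.0, 20_000)    # shift of 1/20 of a standard deviation

u, p = mannwhitneyu(a, b, alternative="two-sided")
effect = 2 * u / (len(a) * len(b)) - 1   # rank-biserial correlation
# p is "significant", yet |effect| is tiny: noise by the table's criteria
```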

CorrelationResult includes:

  • consistency — fraction of events individually showing the pattern (0–1)
  • signal_strength — "strong" / "moderate" / "weak" / "none"
  • causality_score — composite score: 0.5 × |effect| + 0.5 × consistency (0–1)
  • effect_ci — 95% bootstrap confidence interval (low, high) when bootstrap_ci=True

significant = True only when p < alpha AND signal_strength in ("strong", "moderate").
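With the formula above, the composite score can be checked by hand (the numbers here are arbitrary, not library output):

```python
effect, consistency = -0.41, 0.86
causality_score = 0.5 * abs(effect) + 0.5 * consistency
# 0.5 * 0.41 + 0.5 * 0.86 = 0.635
```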

How it works

  • Statistical core: For each metric, values in the pre-event window (default: 48 h before, configurable lag) are compared against a 28-day baseline using Mann-Whitney U. Effect size is computed as rank-biserial correlation.
  • Multiple comparison correction: When analysing several metrics simultaneously, FDR (Benjamini-Hochberg) correction is applied by default to control false positives. Bonferroni is also available.
  • Alert level: Corrected active signals are counted. 1–2 → green, 3–4 → yellow, 5–7 → red.
  • LLM narration: Only triggered on yellow or red. The model receives pre-calculated statistics and is constrained to one factual sentence per signal — no diagnosis, no causal inference.
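The statistical core described above can be approximated in a few lines of generic scipy/numpy. This is a sketch of the technique (Mann-Whitney U per metric, rank-biserial effect, Benjamini-Hochberg step-up), not the package's actual implementation:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def analyze(windows, alpha=0.05):
    # windows: list of (pre_event_values, baseline_values), one pair per metric
    pvals, effects = [], []
    for pre, base in windows:
        u, p = mannwhitneyu(pre, base, alternative="two-sided")
        pvals.append(p)
        effects.append(2 * u / (len(pre) * len(base)) - 1)  # rank-biserial
    # Benjamini-Hochberg step-up: find the largest rank k with
    # p_(k) <= (k / m) * alpha, then reject the k smallest p-values.
    m = len(pvals)
    order = np.argsort(pvals)
    cutoff = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * alpha:
            cutoff = rank
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= cutoff:
            reject[idx] = True
    return pvals, effects, reject

rng = np.random.default_rng(0)
windows = [
    (rng.normal(28, 2, 48), rng.normal(55, 2, 672)),  # clear pre-event drop
    (rng.normal(55, 2, 48), rng.normal(55, 2, 672)),  # flat metric
]
pvals, effects, reject = analyze(windows)
```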

Use cases

  • Health monitoring — correlate HRV, deep sleep, or skin temperature drops with migraine or crisis events.
  • Infrastructure — detect latency or error-rate anomalies preceding service outages.
  • IPTV / streaming — link buffering load spikes to subscriber disconnection events.
  • Energy consumption — associate power demand patterns with grid stress or equipment failures.

License

MIT — Raúl Gallardo (g3v3r)
