
lrdbench

A reproducible benchmark framework for evaluating long-range dependence estimators on canonical, contaminated, and observational time series.

Documentation: lrdbench.readthedocs.io (built with MkDocs and Read the Docs).

Current public release: 1.0.2. No DOI is attached yet; cite the software using CITATION.cff and the GitHub release.

lrdbench is a research-oriented benchmarking framework for studying the behaviour of long-range dependence (LRD) estimators across three distinct settings:

  • ground-truth mode for canonical synthetic time series with declared target truth;
  • stress-test mode for synthetic time series under controlled contamination;
  • observational mode for biomedical or user-provided time series without benchmark truth.

The framework is designed to support:

  • rigorous comparison of classical and new LRD estimators;
  • uncertainty-aware benchmarking, including empirical interval coverage where applicable;
  • robustness analysis under heavy tails, nonstationarity, artefacts, level shifts, outliers, polynomial trends, and low-frequency contamination;
  • transparent failure analysis and validity-rate reporting;
  • manifest-driven, provenance-complete, reproducible benchmark execution.
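As a concrete illustration of the contamination settings listed above (level shifts, outliers), here is a minimal sketch of how such distortions might be injected into a synthetic series. The function and its parameters are illustrative only and are not part of lrdbench's API:

```python
import numpy as np

def contaminate(x, shift_at=None, shift_size=0.0, outlier_idx=(), outlier_scale=5.0, seed=0):
    """Inject a level shift and point outliers into a series (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(x, dtype=float).copy()
    if shift_at is not None:
        y[shift_at:] += shift_size  # level shift from index `shift_at` onward
    for i in outlier_idx:
        y[i] += outlier_scale * rng.standard_normal()  # additive outliers
    return y

# Apply a level shift at the midpoint and two outliers to a flat baseline.
x = np.zeros(100)
y = contaminate(x, shift_at=50, shift_size=3.0, outlier_idx=(10, 70))
```

A benchmark harness in this spirit would run each estimator on both `x` and `y` and report the induced estimate drift.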

Why this project exists

There is currently no widely adopted, comprehensive, reproducible benchmark specifically designed for long-range dependence estimation that simultaneously addresses:

  • canonical synthetic processes with known targets;
  • contamination-induced estimator instability;
  • uncertainty quantification and interval coverage;
  • observational biomedical time series with no benchmark truth;
  • extensible enrolment of new estimators under a common interface.

lrdbench aims to fill that gap.

It is especially intended to support careful evaluation of the hypothesis that many classical second-order LRD estimators behave well in their intended stationary, finite-variance regime, but become unstable, miscalibrated, or non-identifiable under nonstationarity, heavy-tailed fluctuations, artefacts, and other out-of-regime conditions.
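One example of such a classical second-order estimator is rescaled-range (R/S) analysis. The sketch below is a textbook implementation for illustration only, not lrdbench's code; on white noise the estimated Hurst exponent should land in the vicinity of 0.5:

```python
import numpy as np

def rs_hurst(x, min_block=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis.

    Classical textbook method; a minimal sketch, not lrdbench's implementation.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_block
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            block = x[start:start + size]
            dev = np.cumsum(block - block.mean())  # cumulative deviations
            r = dev.max() - dev.min()              # range of deviations
            s = block.std(ddof=1)                  # sample standard deviation
            if s > 0:
                rs.append(r / s)
        if rs:
            sizes.append(size)
            rs_vals.append(np.mean(rs))
        size *= 2
    # Slope of log(R/S) against log(block size) estimates H.
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(42)
h = rs_hurst(rng.standard_normal(4096))  # white noise: H near 0.5
```

Even this simple estimator exhibits the finite-sample upward bias and contamination sensitivity that motivate a systematic benchmark.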


Scope

lrdbench is a research benchmark framework. It is not:

  • a clinical decision system;
  • a diagnostic tool;
  • a guarantee of “true LRD” in arbitrary empirical signals;
  • a universal ranking oracle for all estimators in all regimes.

Benchmark results must always be interpreted in light of:

  • the declared benchmark mode;
  • the target estimand;
  • the source specification;
  • the contamination design;
  • the metric definitions;
  • the aggregation and leaderboard rules.

See RESEARCH_USAGE.md for the full policy.


Core features

Benchmark modes

  • Ground-truth mode
    • bias, MAE, RMSE
    • empirical coverage
    • interval width
    • validity rate
    • runtime and efficiency
  • Stress-test mode
    • estimate drift
    • degradation ratios
    • validity collapse
    • coverage collapse
    • robustness leaderboards
  • Observational mode
    • instability across windows
    • preprocessing sensitivity
    • resampling variability
    • failure analysis
    • stability leaderboards
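The ground-truth metrics above can be computed directly once an estimator's point estimates and intervals have been collected against a known target. The following is a minimal sketch (illustrative, not lrdbench's internal implementation):

```python
import numpy as np

def ground_truth_metrics(estimates, lowers, uppers, truth):
    """Compute bias, MAE, RMSE, empirical coverage, interval width, validity rate."""
    est = np.asarray(estimates, dtype=float)
    lo = np.asarray(lowers, dtype=float)
    hi = np.asarray(uppers, dtype=float)
    valid = np.isfinite(est)            # failed/NaN estimates count against validity
    err = est[valid] - truth
    covered = (lo[valid] <= truth) & (truth <= hi[valid])
    return {
        "bias": err.mean(),
        "mae": np.abs(err).mean(),
        "rmse": np.sqrt((err ** 2).mean()),
        "coverage": covered.mean(),     # fraction of intervals containing the truth
        "interval_width": (hi[valid] - lo[valid]).mean(),
        "validity_rate": valid.mean(),
    }

m = ground_truth_metrics(
    estimates=[0.72, 0.68, np.nan, 0.75],
    lowers=[0.60, 0.55, np.nan, 0.71],
    uppers=[0.80, 0.75, np.nan, 0.78],
    truth=0.7,
)
```

Note how the NaN replicate lowers the validity rate rather than silently disappearing, in line with the failure-transparency principle below.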

Supported data sources

  • implemented synthetic generators:
    • fGn (fractional Gaussian noise)
    • fBm (fractional Brownian motion)
    • ARFIMA(0,d,0)
    • MRW (multifractal random walk)
    • fOU (fractional Ornstein–Uhlenbeck)
  • contaminated synthetic pipelines
  • custom CSV datasets
  • future observational/API-based datasets
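Of the generators above, ARFIMA(0,d,0) admits a particularly compact simulation via its truncated MA(∞) representation, with coefficients ψ_k = ψ_{k−1}(k−1+d)/k. The sketch below is illustrative only; lrdbench's generators may use a different scheme:

```python
import numpy as np

def arfima_0d0(n, d, seed=0):
    """Simulate ARFIMA(0, d, 0) via a truncated MA(inf) representation.

    Illustrative sketch; not lrdbench's generator implementation.
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):                  # psi_k = psi_{k-1} * (k - 1 + d) / k
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    # Full convolution of coefficients with innovations; keep the first n points.
    return np.convolve(psi, eps)[:n]

x = arfima_0d0(1024, d=0.3)  # long-memory series for 0 < d < 0.5
```

For 0 < d < 0.5 the resulting series is stationary with long memory; its lag-1 autocorrelation is approximately d/(1−d).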

Reporting

  • HTML reports
  • Markdown reports
  • CSV exports
  • Parquet result stores
  • JSON metadata exports
  • LaTeX tables for publication workflows

Extensibility

  • pluggable estimator interface
  • manifest-driven benchmark runs
  • explicit estimator metadata and estimand declarations
  • registry-based component enrolment
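A registry-based estimator interface with explicit estimand declarations might look roughly like the following. All names here are hypothetical and do not reflect lrdbench's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict
import numpy as np

# Hypothetical sketch of a registry-based estimator interface; the names
# below are illustrative and are not lrdbench's actual API.

REGISTRY: Dict[str, "EstimatorSpec"] = {}

@dataclass
class EstimatorSpec:
    name: str
    estimand: str                       # explicit estimand declaration, e.g. "hurst"
    fn: Callable[[np.ndarray], float]

def register(name, estimand):
    """Decorator that enrols an estimator function in the registry."""
    def wrap(fn):
        REGISTRY[name] = EstimatorSpec(name, estimand, fn)
        return fn
    return wrap

@register("variance_slope", estimand="hurst")
def variance_slope(x):
    # Toy estimator: slope of log-variance of aggregated means vs log scale.
    scales = [2, 4, 8, 16]
    v = [np.var(x[: len(x) // m * m].reshape(-1, m).mean(axis=1)) for m in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(v), 1)
    return 1.0 + slope / 2.0

spec = REGISTRY["variance_slope"]
```

The key design point is that the registry carries metadata (here just the estimand) alongside the callable, so a benchmark runner can refuse to apply truth-based metrics to an estimator whose declared estimand does not match the source's target.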

Design principles

lrdbench is built around a few non-negotiable principles:

  1. Explicit estimands
    Every estimator must declare the quantity it is intended to estimate.

  2. Mode-aware evaluation
    Truth-based metrics are not used where truth does not exist.

  3. Failure transparency
    Invalid outputs, crashes, and missing uncertainty are recorded explicitly.

  4. Provenance preservation
    Every benchmark result is traceable to a manifest, source, estimator configuration, and software version.

  5. Reproducibility first
    A benchmark run should be reproducible from a single manifest plus the relevant package version and data sources.
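A manifest for such a run might look roughly like this; the field names are hypothetical illustrations, not lrdbench's actual schema:

```yaml
# Hypothetical benchmark manifest (field names illustrative, not lrdbench's schema)
mode: ground_truth
seed: 12345
source:
  generator: fgn
  params: {hurst: 0.7, n: 4096}
  replicates: 200
estimators:
  - name: dfa
  - name: whittle
metrics: [bias, mae, rmse, coverage, validity_rate]
report:
  formats: [html, csv]
```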


Installation

Core installation

pip install lrdbench

Project details


Download files

Download the file for your platform.

Source Distribution

lrdbench-1.0.2.tar.gz (115.5 kB)

Uploaded Source

Built Distribution


lrdbench-1.0.2-py3-none-any.whl (85.3 kB)

Uploaded Python 3

File details

Details for the file lrdbench-1.0.2.tar.gz.

File metadata

  • Download URL: lrdbench-1.0.2.tar.gz
  • Upload date:
  • Size: 115.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for lrdbench-1.0.2.tar.gz:

  • SHA256: b049575284b82a0003b44438d13c8271efda2f24d089917ef84f62fd1d495fdc
  • MD5: e557cab7c0a805ebca69ecb540bf229d
  • BLAKE2b-256: b885e8af557e502c4ed6ae55853b124fdb777e91c17bd5131a8d430c8693d697


Provenance

The following attestation bundles were made for lrdbench-1.0.2.tar.gz:

Publisher: release.yml on dave2k77/lrdbench

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file lrdbench-1.0.2-py3-none-any.whl.

File metadata

  • Download URL: lrdbench-1.0.2-py3-none-any.whl
  • Upload date:
  • Size: 85.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for lrdbench-1.0.2-py3-none-any.whl:

  • SHA256: 20a1a207d038d8e455219b6351cdc84d8fef2e22bfa7c78de105214cf0155fbc
  • MD5: b8b9e8140695aa458628642c4d43c938
  • BLAKE2b-256: d637583fde3073baff438baa89856ef3a24e7457ef3ae4740fe2e6419cf08210


Provenance

The following attestation bundles were made for lrdbench-1.0.2-py3-none-any.whl:

Publisher: release.yml on dave2k77/lrdbench

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
