
Event Metrics

An embedded, event-time metric collection library built for serving systems.

Metric systems like Prometheus aggregate metrics at "processing time": whenever the scraper manages to scrape them. event_metrics captures and records metrics at event time, when the event actually happens.
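The core idea can be sketched with the standard library alone. This is an illustration of the approach, not event_metrics's actual schema or implementation: each observation is written to SQLite with a timestamp captured at the moment of observation, so later aggregation reflects when events happened rather than when a scraper ran.

```python
import sqlite3
import time

# Illustrative only: event_metrics's real schema and API may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observations (name TEXT, value REAL, ts REAL)")

def observe(name, value):
    # The timestamp is taken at event time (now), not at scrape time.
    conn.execute(
        "INSERT INTO observations VALUES (?, ?, ?)", (name, value, time.time())
    )

observe("latency", 1.2)
observe("latency", 2.0)

# Aggregating later still sees one timestamp per event.
rows = conn.execute(
    "SELECT value, ts FROM observations WHERE name = ?", ("latency",)
).fetchall()
print(len(rows))
```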

Features

Compared to other metric systems, the event_metrics library:

  • Writes data to a SQLite3 database to keep a low memory footprint

  • Has minimal dependencies (only requires numpy for numeric aggregation)

  • Aggregates over the full data by default (no reservoir sampling)

  • Allows selecting from a past duration with timedelta windowing

  • Timestamps all observations by default

  • Small API footprint: observe and query are all you need to know

  • Computes raw timeseries with different aggregation strategies:

    • Scalars: last, min, max, mean, count, sum

    • Buckets for histograms

    • Percentiles for summaries

    • Arrays and timestamps for native Python wrangling

  • Metrics can be labeled with arbitrary key-value pairs, and querying supports multidimensional label matching.
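Because aggregation runs over the full recorded data rather than a reservoir sample, percentile queries are exact. Conceptually, a percentile aggregation computes something like the following numpy sketch of the semantics (not the library's code):

```python
import numpy as np

# All recorded latency values, not a sample.
values = np.array([1.2, 2.0, 0.8, 5.5, 3.1])

# Conceptual equivalent of querying percentiles [50, 90, 99]:
result = np.percentile(values, [50, 90, 99])
print(result)
```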

Install

  • Install from source: pip install -e .

  • A PyPI package is a work in progress

Usage

from event_metrics import MetricConnection

conn = MetricConnection("/tmp/event_metrics_demo")

conn.observe("latency", 1.2)
conn.increment("counter", -1)

# labeling
conn.observe("latency", 2.0, labels={"service": "myapp", "other": "label"})

# querying
(conn.query("latency", labels={"service":"myapp"})
      # select from past duration using one of the following
     .from_beginning()
     .from_timestamp(...)
     .from_timedelta(...)

      # perform aggregation using one of the following
     .to_scaler(agg="last/min/max/mean/count/sum") # counter, gauge
     .to_buckets(buckets=[1, 5, 10], cumulative=False) # histogram
     .to_percentiles(percentiles=[50, 90, 99]) # summary
     .to_array() # -> value array
     .to_timestamps() # -> timestamp array
     .to_timestamps_array() # -> 2 arrays: (timestamp array, value array)
)
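For the histogram case, the bucket semantics can be illustrated with numpy. This is a conceptual sketch of what a bucket aggregation over boundaries [1, 5, 10] computes, assuming counts up to each boundary; the library's exact edge handling may differ:

```python
import numpy as np

values = np.array([0.5, 2.0, 3.0, 7.0, 12.0])
buckets = [1, 5, 10]

# Cumulative style: count of values <= each boundary.
cumulative = np.array([(values <= b).sum() for b in buckets])

# Non-cumulative style: per-bucket counts, i.e. differences
# between successive cumulative counts.
per_bucket = np.diff(np.concatenate(([0], cumulative)))

print(cumulative.tolist())
print(per_bucket.tolist())
```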

Speed

The library is fast enough for embedded use: it can ingest about 34,000 data points per second.

You can run pytest tests -k bench to generate the benchmark on your local hardware:

-------------------------------------------------- benchmark: 1 tests --------------------------------------------------
Name (time in us)            Min       Max     Mean  StdDev   Median     IQR  Outliers  OPS (Kops/s)  Rounds  Iterations
------------------------------------------------------------------------------------------------------------------------
test_bench_ingestion     25.3340  297.9320  28.8541  9.1582  26.8090  0.8650   521;814       34.6571    6496           1
------------------------------------------------------------------------------------------------------------------------
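As a rough standalone sanity check (not the project's pytest benchmark, and using plain sqlite3 rather than the event_metrics API), a throughput measurement can be sketched like this:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observations (name TEXT, value REAL, ts REAL)")

n = 10_000
start = time.perf_counter()
for i in range(n):
    conn.execute(
        "INSERT INTO observations VALUES (?, ?, ?)",
        ("latency", float(i), time.time()),
    )
elapsed = time.perf_counter() - start

print(f"{n / elapsed:,.0f} points/second")
```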
