redis-metric-helper

WARNING: This is still under active development.

Handles some of the more tedious parts of writing and reading metrics stored in Redis: counters, gauges, and timeseries data. Requires Redis Stack.

Why does this package exist?

A helper to make writing and reading metrics to Redis more convenient. It supports counters, gauges (including positive-only gauges), and timeseries data.

Quickstart

  1. Install the package:

    pip install redis-metric-helper
    
  2. Initialize the package:

    from metric_helper import metrics
    
    metrics.setup(
        connection_dict={
            'host': 'localhost', # Default
            'port': 6379, # Default
            'password': 'SuperS3kr!t',
            'socket_connect_timeout': 5, # Default
            'health_check_interval': 30, # Default
        },
        timezone='Africa/Johannesburg',
    )
    
  3. Create/get a metric:

    timeseries = metrics.get(
        'http_requests', # Redis key
        'timeseries', # Default
        round_timestamp_to='second',
    )
    timeseries.add_sample(
        value=1,
        duplicate_policy='sum',
        round_timestamp_to='second',
    )
    # Equivalent to calling add_sample() with the kwargs above.
    timeseries.incr()
    
    counter = metrics.get('http_requests_total_count', 'counter')
    counter.incr()
    
    gauge = metrics.get('my_gauge', 'gauge')
    gauge.incr()
    
    pos_gauge = metrics.get('my_pos_gauge', 'pos_gauge')
    pos_gauge.incr()
    pos_gauge.decr()
    
  4. Query the metric:

    from datetime import datetime, timedelta
    
    end = datetime.now()
    start = end - timedelta(hours=24)
    results = timeseries.range(
        start=start, # Also allows "-"
        end=end, # Also allows "+"
        bucket_secs=3600, # Default
        empty=True, # Default
        agg_type='sum', # Default
        pipeline=None, # Default
    )
    
    count = counter.get()
    gauge_result = gauge.get()
    pos_gauge_result = pos_gauge.get()
    
  5. Run commands in a Redis pipeline:

    from metric_helper import pipeline
    results = pipeline([
        timeseries.range(start='-', end='+', bucket_secs=3600, defer=True),
        timeseries.range(start=start, end=end, bucket_secs=3600, defer=True),
        timeseries.add_sample(value=1, defer=True),
        timeseries.add_sample(value=1, defer=True),
        timeseries.incr(defer=True),
        counter.incr(defer=True),
    ])
    
  6. Add compaction rules. To create a compaction rule for an hourly aggregate:

    timeseries.add_rule(
        agg_type='sum',
        bucket_secs=3600,
        retention_days=120,
    )
    
    # If the source key is named "http_requests", this will create a new key
    # named "http_requests--agg_3600_sum":
    source_key = 'http_requests'
    agg_type = 'sum'
    bucket_secs = 3600
    dest_key = f'{source_key}--agg_{bucket_secs}_{agg_type}'
    

    Or, optionally use the very opinionated auto_add_rules method:

    timeseries.auto_add_rules()
    

    auto_add_rules will create compaction rules equivalent to the following:

    timeseries.add_rule(
        agg_type='sum',
        bucket_secs=60,
        retention_days=15,
    )
    timeseries.add_rule(
        agg_type='sum',
        bucket_secs=900,
        retention_days=31,
    )
    timeseries.add_rule(
        agg_type='sum',
        bucket_secs=3600,
        retention_days=367,
    )
    timeseries.add_rule(
        agg_type='sum',
        bucket_secs=86400,
        retention_days=367,
    )
    
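    The destination key created by a compaction rule is itself a timeseries in Redis, so it can presumably be read back in the same way as the source series. Below is a minimal sketch, assuming the destination key can be fetched with metrics.get like any other timeseries (the key name simply follows the naming scheme shown above):

    hourly = metrics.get(
        'http_requests--agg_3600_sum', # Destination key created by add_rule
        'timeseries',
    )
    hourly_results = hourly.range(
        start='-',
        end='+',
        bucket_secs=3600,
        agg_type='sum',
    )
    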

Recommendations on metric naming conventions

These are only suggestions, but a possible naming convention could look something like this:

{prefix}:{metric_root_name}:{noun}:{noun_identifier}:{modifier_of_metric}

The prefix should be the package/component the metric is related to.

For example, for a component/app named "uploads", we might have a metric named "filesize":

uploads:filesize

Then, all the filesizes for a specific user's uploads:

uploads:filesize:user:{user_id}

And then perhaps the filesize of all uploads by that user that were identified as images:

uploads:filesize:user:{user_id}:images

For a metric named "failures":

uploads:failures

Then, we might want to know how many failures occurred for any given user:

uploads:failures:user:{user_id}
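
As a purely illustrative sketch (the keys, metric types, user_id value, and sample value below are made up for the example, not part of the package), such names can be assembled with ordinary string formatting and passed to metrics.get:

    from metric_helper import metrics, pipeline
    
    user_id = 42 # Hypothetical identifier
    
    total_filesize = metrics.get('uploads:filesize', 'timeseries')
    user_filesize = metrics.get(f'uploads:filesize:user:{user_id}', 'timeseries')
    user_image_filesize = metrics.get(
        f'uploads:filesize:user:{user_id}:images',
        'timeseries',
    )
    
    # Record one upload's size against all three series in a single pipeline.
    pipeline([
        total_filesize.add_sample(value=1024, defer=True),
        user_filesize.add_sample(value=1024, defer=True),
        user_image_filesize.add_sample(value=1024, defer=True),
    ])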

