py-timeexecution

This package is designed to record application metrics in a backend. With the help of Grafana you can easily create dashboards from them.

Features

  • Sending data to multiple backends

  • Custom backends

  • Hooks

Backends

  • InfluxDB 0.8

  • Elasticsearch 2.1

Installation

$ pip install py-timeexecution

Usage

To use this package, decorate the functions whose execution you want to time. Every wrapped function will create a metric consisting of 3 default values:

  • name - The name of the series the metric will be stored in

  • value - The time it took in ms for the wrapped function to complete

  • hostname - The hostname of the machine the code is running on

See the following example:

from time_execution import configure, time_execution
from time_execution.backends.influxdb import InfluxBackend
from time_execution.backends.elasticsearch import ElasticsearchBackend

# Setup the desired backend
influx = InfluxBackend(host='influx', database='metrics', use_udp=False)
elasticsearch = ElasticsearchBackend('elasticsearch', index='metrics')

# Configure the time_execution decorator
configure(backends=[influx, elasticsearch])

# Wrap the methods where you want the metrics
@time_execution
def hello():
    return 'World'

# Now when we call hello() we will get metrics in our backends
hello()

This will result in an entry like the following in InfluxDB:

[
    {
        "name": "__main__.hello",
        "columns": [
            "time",
            "sequence_number",
            "value",
            "hostname",
        ],
        "points": [
            [
                1449739813939,
                1111950001,
                312,
                "machine.name",
            ]
        ]
    }
]

And the following in Elasticsearch:

[
    {
        "_index": "metrics-2016.01.28",
        "_type": "metric",
        "_id": "AVKIp9DpnPWamvqEzFB3",
        "_score": null,
        "_source": {
            "timestamp": "2016-01-28T14:34:05.416968",
            "hostname": "dfaa4928109f",
            "name": "__main__.hello",
            "value": 312
        },
        "sort": [
            1453991645416
        ]
    }
]

Hooks

time_execution supports hooks, which allow you to change the metric before it is sent to the backend.

With a hook you can add new fields and change existing ones. This can be useful for cases where you would like to add a column to the metric based on the response of the wrapped function.

A hook will always get the following arguments:

  • response - The returned value of the wrapped function

  • exception - The raised exception of the wrapped function

  • metric - A dict containing the data to be sent to the backend

  • func_args - Original args received by the wrapped function.

  • func_kwargs - Original kwargs received by the wrapped function.

From within a hook you can change the name if you want the metrics to be split into multiple series.

See the following example of how to set up hooks:

# Now let's create a hook
def my_hook(response, exception, metric, func_args, func_kwargs):
    status_code = getattr(response, 'status_code', None)
    if status_code:
        return dict(
            name='{}.{}'.format(metric['name'], status_code),
            extra_field='foo bar'
        )

# Configure the time_execution decorator, but now with hooks
configure(backends=[influx, elasticsearch], hooks=[my_hook])
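
A hook also receives the exception raised by the wrapped function, so it can be used to route failing calls into their own series. The following is a minimal sketch, not part of the package itself: it assumes, as in the example above, that a dict returned from a hook is merged into the metric, and the hook and field names are made up.

def error_hook(response, exception, metric, func_args, func_kwargs):
    # Only act when the wrapped function raised; otherwise leave the metric untouched.
    if exception is not None:
        return dict(
            name='{}.error'.format(metric['name']),
            exception=exception.__class__.__name__,
        )

# Hooks are passed as a list, so this hook could be combined with my_hook above.
configure(backends=[influx, elasticsearch], hooks=[my_hook, error_hook])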

Manually sending metrics

You can also send any metric manually to the backend. These metrics will not include the default values and will not pass through the hooks.

See the following example.

import os

from time_execution import write_metric

# Report the 1, 5 and 15 minute system load averages.
loadavg = os.getloadavg()
write_metric('cpu.load.1m', value=loadavg[0])
write_metric('cpu.load.5m', value=loadavg[1])
write_metric('cpu.load.15m', value=loadavg[2])
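
Since a backend's write method accepts arbitrary keyword arguments (see Custom Backend below), extra fields passed to write_metric can end up in the backend alongside the value. The snippet below is a sketch under that assumption; the metric name and the query field are made up for illustration.

import time

from time_execution import write_metric

start = time.time()
rows = [n * n for n in range(100000)]  # stand-in for the work being measured
elapsed_ms = (time.time() - start) * 1000

# 'query' is a hypothetical extra field; only name and value are documented above.
write_metric('db.query.duration', value=elapsed_ms, query='select_users')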

Custom Backend

Writing a custom backend is very simple: all you need to do is create a class with a write method. It is not required to extend BaseMetricsBackend, but to make upgrading easier it is recommended that you do.

from time_execution.backends.base import BaseMetricsBackend


class MetricsPrinter(BaseMetricsBackend):
    def write(self, name, **data):
        print(name, data)
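
Such a backend is registered through configure just like the built-in ones. A minimal sketch using the MetricsPrinter class from above; the printed output in the comment is only indicative.

from time_execution import configure, time_execution

# Register the custom backend; every metric will be passed to its write() method.
configure(backends=[MetricsPrinter()])

@time_execution
def hello():
    return 'World'

# Prints something like: __main__.hello {'value': 0.3, 'hostname': 'machine.name'}
hello()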

Contribute

Do you have something to contribute? Great! Here are a few things that may come in handy.

Testing in this project is done via Docker. There is a docker-compose file to easily get all the required containers up and running.

There is a Makefile with a few targets that we use often:

  • make test

  • make isort

  • make lint

  • make build

  • make setup.py

All of these make targets can be prefixed by docker/. This will execute the target inside the docker container instead of on your local machine. For example make docker/build.
