
Send performance metrics about Python code to Statsd


perfmetrics

The perfmetrics package provides a simple way to add software performance metrics to Python libraries and applications. Use perfmetrics to find the true bottlenecks in a production application.

The perfmetrics package is a client of the Statsd daemon by Etsy, which is in turn a client of Graphite (specifically, the Carbon daemon). Because the perfmetrics package sends UDP packets to Statsd, perfmetrics adds no I/O delays to applications and little CPU overhead. It can work equally well in threaded (synchronous) or event-driven (asynchronous) software.

Complete documentation is hosted at https://perfmetrics.readthedocs.io


Usage

Use the @metric and @metricmethod decorators to wrap functions and methods that should send timing and call statistics to Statsd. Add the decorators to any function or method that could be a bottleneck, including library functions.

Sample:

from perfmetrics import metric
from perfmetrics import metricmethod

@metric
def myfunction():
    """Do something that might be expensive"""

class MyClass(object):
    @metricmethod
    def mymethod(self):
        """Do some other possibly expensive thing"""

Next, tell perfmetrics how to connect to Statsd. (Until you do, the decorators have no effect.) Ideally, either your application should read the Statsd URI from a configuration file at startup time, or you should set the STATSD_URI environment variable. The example below uses a hard-coded URI:

from perfmetrics import set_statsd_client
set_statsd_client('statsd://localhost:8125')

for i in range(1000):
    myfunction()
    MyClass().mymethod()

If you run that code, it will fire 2000 UDP packets at port 8125. However, unless you have already installed Graphite and Statsd, all of those packets will be ignored and dropped. Dropping is a good thing: you don’t want your production application to fail or slow down just because your performance monitoring system is stopped or not working.
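
perfmetrics also honors the STATSD_URI environment variable (see the 0.9.3 changelog entry below), so hard-coding the URI is optional. A rough equivalent done explicitly at startup, assuming the variable may or may not be set, might look like:

import os

from perfmetrics import set_statsd_client

# Read the Statsd URI from the environment; if it is missing, the
# decorators simply stay inert and the application runs unmonitored.
statsd_uri = os.environ.get('STATSD_URI')
if statsd_uri:
    set_statsd_client(statsd_uri)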

Install Graphite and Statsd to receive and graph the metrics. One good way to install them is via the graphite_buildout example on GitHub, which installs Graphite and Statsd in a custom location without root access.

Pyramid and WSGI

If you have a Pyramid app, you can set the statsd_uri for each request by including perfmetrics in your configuration:

from pyramid.config import Configurator

config = Configurator(...)
config.include('perfmetrics')

Also add a statsd_uri setting such as statsd://localhost:8125. Once configured, the perfmetrics tween will set up a Statsd client for the duration of each request. This is especially useful if you run multiple apps in one Python interpreter and you want a different statsd_uri for each app.
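
For instance, in a Paste Deploy ini file the two settings might sit together like this (the app name and entry point are placeholders):

[app:main]
use = egg:myapp#myentrypoint
pyramid.includes = perfmetrics
statsd_uri = statsd://localhost:8125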

Similar functionality exists for WSGI apps. Add the perfmetrics filter to your Paste Deploy pipeline:

[filter:statsd]
use = egg:perfmetrics#statsd
statsd_uri = statsd://localhost:8125

[pipeline:main]
pipeline =
    statsd
    egg:myapp#myentrypoint

Threading

While most programs send metrics from any thread to a single global Statsd server, some need a different Statsd server for each thread. If a single global server is enough, call the set_statsd_client function once at application startup. If each thread needs its own server, use the statsd_client_stack object in that thread, via its push, pop, and clear methods, as sketched below.
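
A minimal per-thread sketch, using statsd_client_from_uri to build a client from a URI; the do_work function is a hypothetical workload:

from perfmetrics import statsd_client_from_uri
from perfmetrics import statsd_client_stack

def worker(statsd_uri):
    # Push this thread's client; metrics recorded while do_work
    # runs are sent to that thread's Statsd server.
    statsd_client_stack.push(statsd_client_from_uri(statsd_uri))
    try:
        do_work()  # hypothetical workload instrumented with @metric
    finally:
        # Always pop, so the stack is clean if the thread is reused.
        statsd_client_stack.pop()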

Graphite Tips

Graphite stores each metric as a time series with multiple resolutions. The sample graphite_buildout stores 10-second resolution for 48 hours, 1-hour resolution for 31 days, and 1-day resolution for 5 years. To produce a coarse-grained value from a fine-grained one, Graphite computes the mean (average) over each time span.

Because Graphite computes mean values implicitly, the most sensible way to treat counters in Graphite is as a “hits per second” value. That way, a graph can produce correct results no matter which resolution level it uses.

Treating counters as hits per second has unfortunate consequences, however. If some metric sees a 1000 hit spike in one second, then falls to zero for at least 9 seconds, the Graphite chart for that metric will show a spike of 100, not 1000, since Graphite receives metrics every 10 seconds and the spike looks to Graphite like 100 hits per second over a 10 second period.

If you want your graph to show 1000 hits rather than 100 hits per second, apply the Graphite hitcount() function, using a resolution of 10 seconds or more. The hitcount function converts per-second values to approximate raw hit counts. Be sure to provide a resolution value large enough to be represented by at least one pixel width on the resulting graph, otherwise Graphite will compute averages of hit counts and produce a confusing graph.
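
For instance, a target along these lines (the series name stats.myapp.hits is hypothetical) charts approximate hit counts per one-minute bucket:

hitcount(stats.myapp.hits, "1minute")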

It usually makes sense to treat null values in Graphite as zero, though that is not the default; by default, Graphite draws nothing for null values. You can turn on that option for each graph.
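
In the Graphite composer this is the "draw null as zero" graph option; per target, the transformNull function achieves the same effect (series name again hypothetical):

transformNull(stats.myapp.hits, 0)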

CHANGES

3.1.0 (2021-02-04)

  • Add support for Python 3.8 and 3.9.

  • Move to GitHub Actions from Travis CI.

  • Support PyHamcrest 1.10 and later. See issue 26.

  • The FakeStatsDClient for testing is now always truthy, whether or not any observations have been seen, matching the behavior of the normal clients. See issue.

  • Add support for StatsD sets, counters of unique events. See PR 30.

3.0.0 (2019-09-03)

  • Drop support for EOL Python 2.6, 3.2, 3.3 and 3.4.

  • Add support for Python 3.5, 3.6, and 3.7.

  • Compile the performance-sensitive parts with Cython, leading to a 10-30% speed improvement. See https://github.com/zodb/perfmetrics/issues/17.

  • Caution: Metric names are enforced to be native strings (as a result of Cython compilation); they’ve always had to be ASCII-only but previously Unicode was allowed on Python 2. This is usually automatically the case when used as a decorator. On Python 2 using from __future__ import unicode_literals can cause problems (raising TypeError) when manually constructing Metric objects. A quick workaround is to set the environment variable PERFMETRICS_PURE_PYTHON before importing perfmetrics.

  • Make decorated functions and methods configurable at runtime, not just compile time. See https://github.com/zodb/perfmetrics/issues/11.

  • Include support for testing applications instrumented with perfmetrics in perfmetrics.testing. This was previously released externally as nti.fakestatsd. See https://github.com/zodb/perfmetrics/issues/9.

  • Read the PERFMETRICS_DISABLE_DECORATOR environment variable when perfmetrics is imported, and if it is set, make the decorators @metric, @metricmethod, @Metric(...) and @MetricMod(...) return the function unchanged. This can be helpful for certain kinds of introspection tests. See https://github.com/zodb/perfmetrics/issues/15.

2.0 (2013-12-10)

  • Added the @MetricMod decorator, which changes the name of metrics in a given context. For example, @MetricMod('xyz.%s') adds a prefix.

  • Removed the “gauge suffix” feature. It was unnecessarily confusing.

  • Timing metrics produced by @metric, @metricmethod, and @Metric now have a “.t” suffix by default to avoid naming conflicts.

1.0 (2012-10-09)

  • Added ‘perfmetrics.tween’ and ‘perfmetrics.wsgi’ stats for measuring request timing and counts.

0.9.5 (2012-09-22)

  • Added an optional Pyramid tween and a similar WSGI filter app that sets up the Statsd client for each request.

0.9.4 (2012-09-08)

  • Optimized the use of reduced sample rates.

0.9.3 (2012-09-08)

  • Support the STATSD_URI environment variable.

0.9.2 (2012-09-01)

  • Metric can now be used as either a decorator or a context manager.

  • Made the signature of StatsdClient more like James Socol’s StatsClient.

0.9.1 (2012-09-01)

  • Fixed package metadata.

0.9 (2012-08-31)

  • Initial release.


