
Send performance metrics about Python code to Statsd


perfmetrics

The perfmetrics package provides a simple way to add software performance metrics to Python libraries and applications. Use perfmetrics to find the true bottlenecks in a production application.

The perfmetrics package is a client of the Statsd daemon by Etsy, which is in turn a client of Graphite (specifically, the Carbon daemon). Because the perfmetrics package sends UDP packets to Statsd, perfmetrics adds no I/O delays to applications and little CPU overhead. It can work equally well in threaded (synchronous) or event-driven (asynchronous) software.

Complete documentation is hosted at https://perfmetrics.readthedocs.io


Usage

Use the @metric and @metricmethod decorators to wrap functions and methods that should send timing and call statistics to Statsd. Add the decorators to any function or method that could be a bottleneck, including library functions.

Caution!

These decorators are generic and cause the actual function signature to be lost, replaced with *args, **kwargs. This can break certain types of introspection, including zope.interface validation. As a workaround, setting the environment variable PERFMETRICS_DISABLE_DECORATOR before importing perfmetrics or code that uses it will cause @perfmetrics.metric, @perfmetrics.metricmethod, @perfmetrics.Metric(...) and @perfmetrics.MetricMod(...) to return the original function unchanged.
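For example, to disable the decorators for a test run, the variable can be set before anything imports perfmetrics (a minimal sketch; the variable is read once, at import time, so setting it later has no effect):

```python
import os

# Must be set before perfmetrics (or any module that imports it) is
# first loaded in this process; perfmetrics reads it at import time.
os.environ['PERFMETRICS_DISABLE_DECORATOR'] = '1'

# import perfmetrics  # the decorators would now return functions unchanged
```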

Sample:

from perfmetrics import metric
from perfmetrics import metricmethod

@metric
def myfunction():
    """Do something that might be expensive"""

class MyClass(object):
    @metricmethod
    def mymethod(self):
        """Do some other possibly expensive thing"""

Next, tell perfmetrics how to connect to Statsd. (Until you do, the decorators have no effect.) Ideally, either your application should read the Statsd URI from a configuration file at startup time, or you should set the STATSD_URI environment variable. The example below uses a hard-coded URI:

from perfmetrics import set_statsd_client
set_statsd_client('statsd://localhost:8125')

for i in range(1000):
    myfunction()
    MyClass().mymethod()

If you run that code, it will fire 2000 UDP packets at port 8125. However, unless you have already installed Graphite and Statsd, all of those packets will be ignored and dropped. Dropping is a good thing: you don’t want your production application to fail or slow down just because your performance monitoring system is stopped or not working.

Install Graphite and Statsd to receive and graph the metrics. One good way to install them is the graphite_buildout example on GitHub, which installs Graphite and Statsd in a custom location without root access.

Pyramid and WSGI

If you have a Pyramid app, you can set the statsd_uri for each request by including perfmetrics in your configuration:

config = Configurator(...)
config.include('perfmetrics')

Also add a statsd_uri setting such as statsd://localhost:8125. Once configured, the perfmetrics tween will set up a Statsd client for the duration of each request. This is especially useful if you run multiple apps in one Python interpreter and you want a different statsd_uri for each app.
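In a PasteDeploy-style settings file, that might look like the following (the app name here is hypothetical):

```ini
[app:main]
use = egg:mypyramidapp
pyramid.includes = perfmetrics
statsd_uri = statsd://localhost:8125
```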

Similar functionality exists for WSGI apps. Add the app to your Paste Deploy pipeline:

[statsd]
use = egg:perfmetrics#statsd
statsd_uri = statsd://localhost:8125

[pipeline:main]
pipeline =
    statsd
    egg:myapp#myentrypoint

Threading

Most programs send metrics from any thread to a single global Statsd server; if that is all you need, call the set_statsd_client function once at application startup. If each thread must send metrics to a different Statsd server, use the statsd_client_stack object in each thread, via its push, pop, and clear methods.
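The per-thread pattern looks roughly like this. The ClientStack class below is a minimal stand-in for perfmetrics.statsd_client_stack, written so the sketch runs standalone; with perfmetrics installed you would push a real Statsd client rather than the plain URI strings used here for illustration:

```python
import threading

class ClientStack(threading.local):
    """Thread-local stack of Statsd clients with push/pop/clear,
    mirroring the statsd_client_stack API described above."""
    def __init__(self):
        self.stack = []

    def push(self, client):
        self.stack.append(client)

    def pop(self):
        return self.stack.pop() if self.stack else None

    def clear(self):
        del self.stack[:]

statsd_client_stack = ClientStack()
seen = {}

def worker(name, uri):
    # Each thread pushes its own client and pops it when done;
    # because the stack is thread-local, threads never see each
    # other's clients.
    statsd_client_stack.push(uri)
    try:
        seen[name] = statsd_client_stack.stack[-1]
    finally:
        statsd_client_stack.pop()

threads = [
    threading.Thread(target=worker, args=(n, 'statsd://server%d:8125' % n))
    for n in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```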

Graphite Tips

Graphite stores each metric as a time series with multiple resolutions. The sample graphite_buildout stores 10 second resolution for 48 hours, 1 hour resolution for 31 days, and 1 day resolution for 5 years. To produce a coarse-grained value from a fine-grained value, Graphite computes the mean value (average) for each time span.

Because Graphite computes mean values implicitly, the most sensible way to treat counters in Graphite is as a “hits per second” value. That way, a graph can produce correct results no matter which resolution level it uses.

Treating counters as hits per second has unfortunate consequences, however. If some metric sees a 1000 hit spike in one second, then falls to zero for at least 9 seconds, the Graphite chart for that metric will show a spike of 100, not 1000, since Graphite receives metrics every 10 seconds and the spike looks to Graphite like 100 hits per second over a 10 second period.

If you want your graph to show 1000 hits rather than 100 hits per second, apply the Graphite hitcount() function, using a resolution of 10 seconds or more. The hitcount function converts per-second values to approximate raw hit counts. Be sure to provide a resolution value large enough to be represented by at least one pixel width on the resulting graph, otherwise Graphite will compute averages of hit counts and produce a confusing graph.
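A Graphite render target applying hitcount might look like the following; the metric path is hypothetical, so adjust it to your own Statsd namespace:

```
hitcount(stats.myapp.hits, "1minute")
```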

It usually makes sense to treat null values in Graphite as zero, though that is not the default; by default, Graphite draws nothing for null values. You can turn on that option for each graph.
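Assuming the standard Graphite render functions, one way to do that per series is to wrap it in transformNull, which substitutes a default value (0 here) for nulls; again the metric path is hypothetical:

```
transformNull(stats.myapp.hits, 0)
```

Graphite's drawNullAsZero render option achieves the same effect for an entire graph.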

CHANGES

3.0.0 (2019-09-03)

  • Drop support for EOL Python 2.6, 3.2, 3.3, and 3.4.
  • Add support for Python 3.5, 3.6, and 3.7.
  • Compile the performance-sensitive parts with Cython, leading to a 10-30% speed improvement. See https://github.com/zodb/perfmetrics/issues/17.
  • Make decorated functions and methods configurable at runtime, not just compile time. See https://github.com/zodb/perfmetrics/issues/11.
  • Include support for testing applications instrumented with perfmetrics in perfmetrics.testing. This was previously released externally as nti.fakestatsd. See https://github.com/zodb/perfmetrics/issues/9.
  • Read the PERFMETRICS_DISABLE_DECORATOR environment variable when perfmetrics is imported, and if it is set, make the decorators @metric, @metricmethod, @Metric(...) and @MetricMod(...) return the function unchanged. This can be helpful for certain kinds of introspection tests. See https://github.com/zodb/perfmetrics/issues/15.

2.0 (2013-12-10)

  • Added the @MetricMod decorator, which changes the name of metrics in a given context. For example, @MetricMod('xyz.%s') adds a prefix.
  • Removed the “gauge suffix” feature. It was unnecessarily confusing.
  • Timing metrics produced by @metric, @metricmethod, and @Metric now have a “.t” suffix by default to avoid naming conflicts.

1.0 (2012-10-09)

  • Added ‘perfmetrics.tween’ and ‘perfmetrics.wsgi’ stats for measuring request timing and counts.

0.9.5 (2012-09-22)

  • Added an optional Pyramid tween and a similar WSGI filter app that sets up the Statsd client for each request.

0.9.4 (2012-09-08)

  • Optimized the use of reduced sample rates.

0.9.3 (2012-09-08)

  • Support the STATSD_URI environment variable.

0.9.2 (2012-09-01)

  • Metric can now be used as either a decorator or a context manager.
  • Made the signature of StatsdClient more like James Socol’s StatsClient.

0.9.1 (2012-09-01)

  • Fixed package metadata.

0.9 (2012-08-31)

  • Initial release.
