
Performance metrics for Pyramid using StatsD

Project description

Performance metrics for Pyramid using StatsD. The project aims to provide ways to instrument a Pyramid application in the least intrusive way.


Install using setuptools, e.g. (within a virtualenv):

$ pip install pyramid_metrics


Once pyramid_metrics is installed, you must use the config.include mechanism to include it in your Pyramid project's configuration. In your Pyramid project's setup code:

config = Configurator(.....)
config.include('pyramid_metrics')

Alternately you can use the pyramid.includes configuration value in your .ini file:

pyramid.includes = pyramid_metrics


Pyramid_metrics configuration (values are defaults):

[app:myapp]
metrics.host = localhost
metrics.port = 8125
metrics.prefix = application.stage
metrics.route_performance = true

Route performance

If enabled, the route performance feature times request processing. Because it uses the StatsD timer metric type, pre-aggregation provides information on latency, rate, and total count. Each measurement is sent twice: once per route and once globally.

The key name is composed of the route name, the HTTP method and the outcome (an HTTP status code, or ‘exc’ for an exception).

  • Global key request.<HTTP_METHOD>.<STATUS_CODE_OR_EXC>

  • Per route key route.<ROUTE_NAME>.request.<HTTP_METHOD>.<STATUS_CODE_OR_EXC>
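As an illustration of the naming scheme above, the two key names can be sketched with a small helper (illustrative only; `route_keys` is not part of pyramid_metrics):

```python
def route_keys(route_name, method, outcome):
    """Compose the global and per-route timer key names described
    above. Illustrative helper, not pyramid_metrics internals."""
    global_key = 'request.{}.{}'.format(method, outcome)
    route_key = 'route.{}.{}'.format(route_name, global_key)
    return global_key, route_key

# A GET on route 'home' that returned HTTP 200 produces:
# 'request.GET.200' and 'route.home.request.GET.200'
```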



Counter

StatsD type: counter

# Increment a counter named cache.hit by 1
request.metrics.incr('cache.hit')

# Increment by N
request.metrics.incr('cache.hit', count=len(cacheresult))

# Stat names can be composed from list or tuple
request.metrics.incr(('cache', cache_action))


Gauge

StatsD type: gauge

# Set the number of SQL connections to 8
request.metrics.gauge('sql.connections', 8)

# Increase the value of the metrics by some amount
request.metrics.gauge('network.egress', 34118, delta=True)


Timer

StatsD type: timer

# Simple timing
time_in_ms = requests.get('').elapsed.microseconds/1000
request.metrics.timing('net.example.responsetime', time_in_ms)

# Using the time marker mechanism
request.metrics.marker_start('something_slow')
do_something_slow()
request.metrics.marker_stop('something_slow')

# Measure different outcomes
request.metrics.marker_start('something_slow')
try:
    do_something_slow()
except Exception:
    # Send measure to key 'something_slow.error'
    request.metrics.marker_stop('something_slow', suffix='error')
else:
    # Send measure to key 'something_slow.ok'
    request.metrics.marker_stop('something_slow', suffix='ok')

# Using the context manager
with request.metrics.timer(['longprocess', processname]):
    # Send measure to 'longprocess.foobar' or 'longprocess.foobar.exc'
    do_something_slow()
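The ‘.exc’ suffix behavior can be sketched with a minimal context manager (assumed semantics; pyramid_metrics’ actual implementation may differ, and `send_timing` here is a stand-in for the StatsD send call):

```python
import time
from contextlib import contextmanager

@contextmanager
def timer(send_timing, key_parts):
    # Sketch: time the block, and append '.exc' to the key when
    # the block raises. send_timing(key, milliseconds) stands in
    # for the actual StatsD timing send.
    key = '.'.join(key_parts)
    start = time.time()
    try:
        yield
    except Exception:
        send_timing(key + '.exc', (time.time() - start) * 1000.0)
        raise
    else:
        send_timing(key, (time.time() - start) * 1000.0)
```

On success the measure goes to ‘longprocess.foobar’; if the block raises, it goes to ‘longprocess.foobar.exc’ and the exception still propagates.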

Currently implemented

  • Collection utility as a request method

  • Ability to send metrics per Pyramid route

  • Simple time marker mechanism

  • Simple counter

  • Context manager for Timing metric type


TODO

  • Full StatsD metric types

  • Extensions for automatic metrology (SQLAlchemy, MongoDB, Requests…)

  • Whitelist/blacklist of metrics

  • Time allocation per subsystem (using the time marker mechanism)


Notes

  • The general error policy is: always failsafe. Pyramid_metrics should NEVER break your application.

  • The DNS resolution is done during configuration to avoid recurring latencies.
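The two notes above can be sketched as a pattern (illustrative names only; `FailsafeStatsdClient` is not pyramid_metrics’ internal client):

```python
import socket

class FailsafeStatsdClient:
    # Sketch of the failsafe pattern described above, not the
    # library's actual implementation.

    def __init__(self, host='localhost', port=8125):
        # Resolve DNS once, at configuration time, so each metric
        # send does not pay a recurring resolution latency.
        self.addr = (socket.gethostbyname(host), port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send(self, payload):
        # Always failsafe: a metrics error must never reach the app.
        try:
            self.sock.sendto(payload.encode('ascii'), self.addr)
        except Exception:
            pass

client = FailsafeStatsdClient()
client.send('cache.hit:1|c')  # UDP fire-and-forget
```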


Run tests

The tests are run by nose and all dependencies are in requirements-test.txt.

$ pip install -r requirements-test.txt

$ nosetests

Run tests with tox

$ pip install tox

$ tox          # Run on python 2.7 and python 3.4

$ tox -e py34  # Run on python 3.4 only


Contributors

  • Pior Bastida (@pior)

  • Philippe Gauthier (@deuxpi)

  • Hadrien David (@hadrien)

  • Jay R. Wren (@jrwren)


Download files


Source Distribution

pyramid_metrics-0.3.1.tar.gz (10.7 kB)

Built Distribution

pyramid_metrics-0.3.1-py3-none-any.whl (14.7 kB)
