
Easily add metrics to your system – and actually understand them using automatically customized Prometheus queries


A Python port of the Rust autometrics-rs library

Autometrics is a library that exports a decorator that makes it easy to understand the error rate, response time, and production usage of any function in your code. Jump straight from your IDE to live Prometheus charts for each HTTP/RPC handler, database method, or other piece of application logic.

Autometrics for Python provides:

  1. A decorator that can create Prometheus metrics for your functions and class methods throughout your code base.
  2. A helper function that will write corresponding Prometheus queries for you in a Markdown file.

See Why Autometrics? for more details on the ideas behind autometrics.

Features

  • autometrics decorator instruments any function or class method to track the most useful metrics
  • 💡 Writes Prometheus queries so you can understand the data generated without knowing PromQL
  • 🔗 Creates links to live Prometheus charts directly in each function's docstring
  • 🔍 Identify commits that introduced errors or increased latency
  • 🚨 Define alerts using SLO best practices directly in your source code
  • 📊 Grafana dashboards work out of the box to visualize the performance of instrumented functions & SLOs
  • ⚙️ Configurable metric collection library (opentelemetry or prometheus)
  • 📍 Attach exemplars to connect metrics with traces
  • ⚡ Minimal runtime overhead

Using autometrics-py

  • Set up a Prometheus instance
  • Configure Prometheus to scrape your application (check our instructions if you need help)
  • Include a .env file with your Prometheus endpoint (PROMETHEUS_URL=<your endpoint>). If this is not defined, the default endpoint http://localhost:9090/ will be used
  • pip install autometrics
  • Import the library in your code and use the decorator for any function:
from autometrics import autometrics

@autometrics
def say_hello():
    return "hello"
  • You can also track the number of concurrent calls to a function by using the track_concurrency argument: @autometrics(track_concurrency=True). Note: currently only supported by the prometheus tracker.
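
For example (a minimal sketch; process_batch is a hypothetical function name):

from autometrics import autometrics

# Track the number of concurrent calls to this function
# (currently only supported by the prometheus tracker)
@autometrics(track_concurrency=True)
def process_batch():
    ...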

  • To access the PromQL queries for your decorated functions, run help(yourfunction) or print(yourfunction.__doc__).
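
For example, using the say_hello function from above:

from autometrics import autometrics

@autometrics
def say_hello():
    return "hello"

# The generated PromQL queries are appended to the function's docstring
print(say_hello.__doc__)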

  • To show tooltips over decorated functions in VSCode, with links to Prometheus queries, try installing the VSCode extension.

Note that we cannot support tooltips without a VSCode extension, due to the behavior of the static analyzer used in VSCode.

Dashboards

Autometrics provides Grafana dashboards that will work for any project instrumented with the library.

Alerts / SLOs

Autometrics makes it easy to add Prometheus alerts using Service-Level Objectives (SLOs) to a function or group of functions.

In order to receive alerts, you need to add a set of rules to your Prometheus setup. You can find out more about those rules here: Prometheus alerting rules. Once added, most of the recording rules are dormant; they are enabled by specific metric labels that autometrics can attach automatically.

To use autometrics SLOs and alerts, create one or more Objectives based on the success rate and/or latency of the function(s), as shown below. An Objective can be passed as an argument to the autometrics decorator to include the given function in that objective.

from autometrics import autometrics
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile

# Create an objective for a high success rate
API_SLO_HIGH_SUCCESS = Objective(
    "My API SLO for High Success Rate (99.9%)",
    success_rate=ObjectivePercentile.P99_9,
)

# Or you can also create an objective for low latency
API_SLO_LOW_LATENCY = Objective(
    "My API SLO for Low Latency (99th percentile < 250ms)",
    latency=(ObjectiveLatency.Ms250, ObjectivePercentile.P99),
)

@autometrics(objective=API_SLO_HIGH_SUCCESS)
def api_handler():
    ...
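
The latency objective is attached the same way (a sketch; fetch_user is a hypothetical handler):

@autometrics(objective=API_SLO_LOW_LATENCY)
def fetch_user():
    ...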

By default, autometrics will try to record which function calls a decorated function. For this reason you may want to place autometrics as the top (outermost) decorator; otherwise you may get inner or wrapper reported as the caller function.

So instead of writing:

from functools import wraps
from typing import Any, TypeVar, Callable

R = TypeVar("R")

def noop(func: Callable[..., R]) -> Callable[..., R]:
    """A noop decorator that does nothing."""

    @wraps(func)
    def inner(*args: Any, **kwargs: Any) -> R:
        return func(*args, **kwargs)

    return inner

@noop
@autometrics
def api_handler():
    ...

You may want to switch the order of the decorators:

@autometrics
@noop
def api_handler():
    ...

Metrics Libraries

Configure the package that autometrics will use to produce metrics with the AUTOMETRICS_TRACKER environment variable.

  • opentelemetry - Enabled by default; can also be set explicitly using the env var AUTOMETRICS_TRACKER="OPEN_TELEMETRY". Look in pyproject.toml for the versions of the OpenTelemetry packages that will be used.
  • prometheus - Can be set using the env var AUTOMETRICS_TRACKER="PROMETHEUS". Look in pyproject.toml for the version of the prometheus-client package that will be used.
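
For example, to switch to the Prometheus client tracker before starting your app (a one-line shell sketch):

export AUTOMETRICS_TRACKER="PROMETHEUS"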

Identifying commits that introduced problems

NOTE - As of writing, build_info will not work correctly when using the default tracker (AUTOMETRICS_TRACKER=OPEN_TELEMETRY). This will be fixed once the following PR is merged on the opentelemetry-python project: https://github.com/open-telemetry/opentelemetry-python/pull/3306

Support for build_info with the OpenTelemetry tracker is being tracked in #38.

Autometrics makes it easy to identify if a specific version or commit introduced errors or increased latencies.

It uses a separate metric (build_info) to track the version and, optionally, git commit of your service. It then writes queries that group metrics by the version, commit and branch labels so you can spot correlations between those and potential issues. Configure the labels by setting the following environment variables:

Label    Run-Time Environment Variables     Default value
version  AUTOMETRICS_VERSION                ""
commit   AUTOMETRICS_COMMIT or COMMIT_SHA   ""
branch   AUTOMETRICS_BRANCH or BRANCH_NAME  ""
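
For example (illustrative values, assuming a POSIX shell):

export AUTOMETRICS_VERSION="1.0.0"
export AUTOMETRICS_COMMIT="d6ac10b"
export AUTOMETRICS_BRANCH="main"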

This follows the method outlined in Exposing the software version to Prometheus.

Exemplars

NOTE - As of writing, exemplars aren't supported by the default tracker (AUTOMETRICS_TRACKER=OPEN_TELEMETRY). You can track the progress of this feature here: https://github.com/autometrics-dev/autometrics-py/issues/41

Exemplars are a way to associate a metric sample to a trace by attaching trace_id and span_id to it. You can then use this information to jump from a metric to a trace in your tracing system (for example Jaeger). If you have an OpenTelemetry tracer configured, autometrics will automatically pick up the current span from it.

To use exemplars, you need to first switch to a tracker that supports them by setting AUTOMETRICS_TRACKER=prometheus and enable exemplar collection by setting AUTOMETRICS_EXEMPLARS=true. You also need to enable exemplars in Prometheus by launching Prometheus with the --enable-feature=exemplar-storage flag.
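
Putting that together (a sketch assuming a POSIX shell and a local Prometheus binary):

export AUTOMETRICS_TRACKER=prometheus
export AUTOMETRICS_EXEMPLARS=true

# Prometheus must be launched with exemplar storage enabled
prometheus --enable-feature=exemplar-storage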

Development of the package

This package uses poetry as a package manager, with all dependencies separated into three groups:

  • root-level dependencies, which are required
  • dev, everything needed for development or in CI
  • examples, dependencies for everything in the examples/ directory

By default, poetry will only install the required dependencies. If you want to run the examples, install them using this command:

poetry install --with examples

Code in this repository:

  • is formatted using black
  • contains type definitions (which are linted by pyright)
  • is tested using pytest

In order to run these tools locally, you have to install them first. You can do so using poetry:

poetry install --with dev

After that, you can run the tools individually:

# Formatting using black
poetry run black .
# Lint using pyright
poetry run pyright
# Run the tests using pytest
poetry run pytest
# Run a single test, and clear the cache
poetry run pytest --cache-clear -k test_tracker
