
Easily add metrics to your system – and actually understand them using automatically customized Prometheus queries


A Python port of the Rust autometrics-rs library

Autometrics is a library that exports a decorator that makes it easy to understand the error rate, response time, and production usage of any function in your code. Jump straight from your IDE to live Prometheus charts for each HTTP/RPC handler, database method, or other piece of application logic.

Autometrics for Python provides:

  1. A decorator that can create Prometheus metrics for your functions and class methods throughout your code base.
  2. A helper function that will write corresponding Prometheus queries for you in a Markdown file.

See Why Autometrics? for more details on the ideas behind autometrics.

Features

  • autometrics decorator instruments any function or class method to track the most useful metrics
  • 💡 Writes Prometheus queries so you can understand the data generated without knowing PromQL
  • 🔗 Inserts links to live Prometheus charts directly into each function's docstring
  • 🔍 Identify commits that introduced errors or increased latency
  • 🚨 Define alerts using SLO best practices directly in your source code
  • 📊 Grafana dashboards work out of the box to visualize the performance of instrumented functions & SLOs
  • ⚙️ Configurable metric collection library (opentelemetry or prometheus)
  • 📍 Attach exemplars to connect metrics with traces
  • ⚡ Minimal runtime overhead

Using autometrics-py

  • Set up a Prometheus instance
  • Configure Prometheus to scrape your application (check our instructions if you need help)
  • Include a .env file with your Prometheus endpoint, e.g. PROMETHEUS_URL=your-endpoint. If this is not defined, the default endpoint http://localhost:9090/ will be used
  • pip install autometrics
  • Import the library in your code and use the decorator for any function:
from autometrics import autometrics

@autometrics
def sayHello():
    return "hello"
  • You can also track the number of concurrent calls to a function by using the track_concurrency argument: @autometrics(track_concurrency=True). Note: currently only supported by the prometheus tracker.

  • To access the PromQL queries for your decorated functions, run help(yourfunction) or print(yourfunction.__doc__) (see the example below).

  • To show tooltips over decorated functions in VSCode, with links to Prometheus queries, try installing the VSCode extension.

Note that we cannot support tooltips without a VSCode extension due to behavior of the static analyzer used in VSCode.
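
For example, the queries generated for the sayHello function above can be printed directly (a minimal sketch; the exact query text depends on your configuration):

from autometrics import autometrics

@autometrics
def sayHello():
    return "hello"

# The decorator augments the docstring with the Prometheus queries for this function
print(sayHello.__doc__)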

Dashboards

Autometrics provides Grafana dashboards that will work for any project instrumented with the library.

Alerts / SLOs

Autometrics makes it easy to add Prometheus alerts using Service-Level Objectives (SLOs) to a function or group of functions.

In order to receive alerts you need to add a set of rules to your Prometheus setup. You can find out more about those rules here: Prometheus alerting rules. Once added, most of the recording rules are dormant; they are enabled by specific metric labels that autometrics can attach automatically.

To use autometrics SLOs and alerts, create one or more Objectives based on the functions' success rate and/or latency, as shown below. An Objective can be passed as an argument to the autometrics decorator to include the given function in that objective.

from autometrics import autometrics
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile

# Create an objective for a high success rate
API_SLO_HIGH_SUCCESS = Objective(
    "My API SLO for High Success Rate (99.9%)",
    success_rate=ObjectivePercentile.P99_9,
)

# Or you can also create an objective for low latency
API_SLO_LOW_LATENCY = Objective(
    "My API SLO for Low Latency (99th percentile < 250ms)",
    latency=(ObjectiveLatency.Ms250, ObjectivePercentile.P99),
)

@autometrics(objective=API_SLO_HIGH_SUCCESS)
def api_handler():
    ...  # handler logic goes here

Autometrics keeps track of instrumented functions calling each other. If one instrumented function calls another, the metrics for the latter will include a caller label set to the name of the instrumented function that called it.
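
For example (a sketch with hypothetical function names):

from autometrics import autometrics

@autometrics
def fetch_user(user_id):
    ...

@autometrics
def get_user_handler(user_id):
    # Metrics recorded for this fetch_user call will carry a caller label
    # referring to get_user_handler
    return fetch_user(user_id)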

Settings

Autometrics makes use of a number of environment variables to configure its behavior. All of them are also configurable with keyword arguments to the init function (see the sketch after the list below).

  • tracker - Configure the package that autometrics will use to produce metrics. Default is opentelemetry, but you can also use prometheus. Look in pyproject.toml for the corresponding versions of packages that will be used.
  • histogram_buckets - Configure the buckets used for latency histograms. Default is [0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0].
  • enable_exemplars - Enable exemplar collection. Default is False.
  • service_name - Configure the service name.
  • version, commit, branch - Used to configure build_info.
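
As a minimal sketch, a few of these settings can be configured programmatically (assuming init is imported from the top-level autometrics package and accepts keyword arguments named after the settings above; the values shown are placeholders):

from autometrics import init

init(
    tracker="prometheus",       # use the prometheus client library instead of opentelemetry
    service_name="my-service",  # placeholder service name
    version="1.0.0",            # placeholder version, used for build_info
)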

Identifying commits that introduced problems

NOTE - As of writing, build_info will not work correctly when using the default tracker (AUTOMETRICS_TRACKER=OPEN_TELEMETRY). This will be fixed once the following PR is merged on the opentelemetry-python project: https://github.com/open-telemetry/opentelemetry-python/pull/3306

Support for build_info with the OpenTelemetry tracker in autometrics-py is tracked in #38.

Autometrics makes it easy to identify if a specific version or commit introduced errors or increased latencies.

It uses a separate metric (build_info) to track the version and, optionally, git commit of your service. It then writes queries that group metrics by the version, commit and branch labels so you can spot correlations between those and potential issues. Configure the labels by setting the following environment variables:

Label     Run-time environment variables       Default value
version   AUTOMETRICS_VERSION                  ""
commit    AUTOMETRICS_COMMIT or COMMIT_SHA     ""
branch    AUTOMETRICS_BRANCH or BRANCH_NAME    ""

This follows the method outlined in Exposing the software version to Prometheus.

Service name

All metrics produced by Autometrics have a label called service.name (or service_name when exported to Prometheus) attached to identify the logical service they are part of.

You may want to override the default service name, for example if you are running multiple instances of the same code base as separate services and want to differentiate between the metrics produced by each one.

The service name is loaded from the following environment variables, in this order:

  1. AUTOMETRICS_SERVICE_NAME (at runtime)
  2. OTEL_SERVICE_NAME (at runtime)
  3. First part of __package__ (at runtime)

Exemplars

NOTE - As of writing, exemplars aren't supported by the default tracker (AUTOMETRICS_TRACKER=OPEN_TELEMETRY). You can track the progress of this feature here: https://github.com/autometrics-dev/autometrics-py/issues/41

Exemplars are a way to associate a metric sample to a trace by attaching trace_id and span_id to it. You can then use this information to jump from a metric to a trace in your tracing system (for example Jaeger). If you have an OpenTelemetry tracer configured, autometrics will automatically pick up the current span from it.

To use exemplars, you need to first switch to a tracker that supports them by setting AUTOMETRICS_TRACKER=prometheus and enable exemplar collection by setting AUTOMETRICS_EXEMPLARS=true. You also need to enable exemplars in Prometheus by launching Prometheus with the --enable-feature=exemplar-storage flag.
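
As a minimal sketch, the same two settings can also be applied programmatically, assuming the init keyword arguments mirror the settings listed under Settings:

from autometrics import init

# Switch to the prometheus tracker and enable exemplar collection
# (equivalent to AUTOMETRICS_TRACKER=prometheus and AUTOMETRICS_EXEMPLARS=true)
init(tracker="prometheus", enable_exemplars=True)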

Exporting metrics

After collecting metrics with Autometrics, you need to export them to Prometheus. You can either add a separate route to your server and use the generate_latest function from the prometheus_client package, or you can use the start_http_server function from the same package to start a separate server that will expose the metrics. Autometrics also re-exports the start_http_server function with a preselected port 9464 for compatibility with other Autometrics packages.
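
As a minimal sketch of the second option, using start_http_server from prometheus_client directly (port 9464 chosen to match the Autometrics default):

from prometheus_client import start_http_server

# Expose the collected metrics at http://localhost:9464/metrics
start_http_server(9464)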

Development of the package

This package uses poetry as a package manager, with all dependencies separated into three groups:

  • root level dependencies, required
  • dev, everything that is needed for development or in ci
  • examples, dependencies of everything in examples/ directory

By default, poetry will only install the required dependencies. If you want to run the examples, install with this command:

poetry install --with examples

Code in this repository is:

  • formatted using black
  • type-annotated (types are checked by pyright)
  • tested using pytest

In order to run these tools locally you have to install them first, which you can do using poetry:

poetry install --with dev

After that you can run the tools individually:

# Formatting using black
poetry run black .
# Lint using pyright
poetry run pyright
# Run the tests using pytest
poetry run pytest
# Run a single test, and clear the cache
poetry run pytest --cache-clear -k test_tracker
