
starlette_exporter

Prometheus exporter for Starlette and FastAPI

starlette_exporter collects basic metrics for Starlette and FastAPI based applications:

  • starlette_requests_total: a counter representing the total requests
  • starlette_request_duration_seconds: a histogram representing the distribution of request response times
  • starlette_requests_in_progress: a gauge that keeps track of how many concurrent requests are being processed

Metrics include labels for the HTTP method, the path, and the response status code.

starlette_requests_total{method="GET",path="/",status_code="200"} 1.0
starlette_request_duration_seconds_bucket{le="0.01",method="GET",path="/",status_code="200"} 1.0

Use the HTTP handler handle_metrics at path /metrics to expose a metrics endpoint to Prometheus.

Usage

pip install starlette_exporter

Starlette

from starlette.applications import Starlette
from starlette_exporter import PrometheusMiddleware, handle_metrics

app = Starlette()
app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics", handle_metrics)

...

FastAPI

from fastapi import FastAPI
from starlette_exporter import PrometheusMiddleware, handle_metrics

app = FastAPI()
app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics", handle_metrics)

...

Options

app_name: Sets the value of the app_name label for exported metrics (default: starlette).

prefix: Sets the prefix of the exported metric names (default: starlette).

labels: Optional dict containing default labels that will be added to all metrics. The values can be either a static value or a callback function that retrieves a value from the Request object. See below for examples.

exemplars: Optional dict containing label/value pairs. The "value" should be a callback function that returns the desired value at runtime.

group_paths: setting this to True will populate the path label using named parameters (if any) in the router path, e.g. /api/v1/items/{item_id}. This will group requests together by endpoint (regardless of the value of item_id). This option may come with a performance hit for larger routers. Default is False, which will result in separate metrics for different URLs (e.g., /api/v1/items/42, /api/v1/items/43, etc.).
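
For example, a hedged sketch with FastAPI (the route and handler below are illustrative):

from fastapi import FastAPI
from starlette_exporter import PrometheusMiddleware

app = FastAPI()
app.add_middleware(PrometheusMiddleware, group_paths=True)

@app.get("/api/v1/items/{item_id}")  # illustrative route
async def get_item(item_id: int):
    return {"item_id": item_id}

# requests to /api/v1/items/42 and /api/v1/items/43 are grouped into a single
# series labelled with the route template, e.g.
# starlette_requests_total{method="GET",path="/api/v1/items/{item_id}",status_code="200"}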

filter_unhandled_paths: setting this to True will cause the middleware to ignore requests with unhandled paths (in other words, 404 errors). This helps prevent the metrics from filling up with 404 errors and/or intentionally bad requests. Default is False.

buckets: accepts an optional list of numbers to use as histogram buckets. The default value is None, which will cause the library to fall back on the Prometheus defaults (currently [0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0]).

skip_paths: accepts an optional list of paths that will not collect metrics. The default value is None, which will cause the library to collect metrics on every requested path. This option is useful to avoid collecting metrics on health check, readiness or liveness probe endpoints.

always_use_int_status: accepts a boolean. The default value is False. If set to True the library will attempt to convert the status_code value to an integer (e.g. if you are using HTTPStatus, HTTPStatus.OK will become 200 for all metrics).
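
A minimal sketch of the difference, assuming a FastAPI app (the route is illustrative):

from http import HTTPStatus
from fastapi import FastAPI
from starlette_exporter import PrometheusMiddleware

app = FastAPI()
app.add_middleware(PrometheusMiddleware, always_use_int_status=True)

@app.get("/items", status_code=HTTPStatus.OK)  # illustrative route
async def items():
    return []

# the status_code label is exported as "200" rather than a string such as "HTTPStatus.OK"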

optional_metrics: a list of pre-defined metrics that can be optionally added to the default metrics. The following optional metrics are available:

  • response_body_size: a counter that tracks the size of response bodies for each endpoint
  • request_body_size: a counter that tracks the size of request bodies for each endpoint

For optional metric examples, see below.

Full example:

app.add_middleware(
  PrometheusMiddleware,
  app_name="hello_world",
  prefix='myapp',
  labels={
      "server_name": os.getenv("HOSTNAME"),
  },
  group_paths=True,
  buckets=[0.1, 0.25, 0.5],
  skip_paths=['/health'],
  always_use_int_status=False,
  exemplars={"trace_id": get_trace_id},  # function that returns a trace id
)

Labels

The included metrics have built-in default labels such as app_name, method, path, and status_code. Additional default labels can be added by passing a dictionary to the labels arg to PrometheusMiddleware. Each label's value can be either a static value or, optionally, a callback function. The built-in default label names are reserved and cannot be reused.

If a callback function is used, it will receive the Request instance as its argument.

app.add_middleware(
  PrometheusMiddleware,
  labels={
      "service": "api",
      "env": os.getenv("ENV"),
  },
)
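
The example above uses static values only. A label value can also be a callback that receives the Request; a minimal sketch (the client_scheme function and the scheme label name are illustrative, not built in):

from starlette.requests import Request

def client_scheme(request: Request) -> str:
    # derive a label value from the incoming request
    return request.url.scheme  # "http" or "https"

app.add_middleware(
  PrometheusMiddleware,
  labels={
      "service": "api",         # static value
      "scheme": client_scheme,  # resolved per request
  },
)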

Ensure that label names follow Prometheus naming conventions and that label values are constrained (see this writeup from Grafana on cardinality).

Label helpers

from_header(key: str, allowed_values: Optional[Iterable]): a convenience function for using a header value as a label.

allowed_values lets you supply a list of acceptable values. If supplied, header values not in the list will result in an empty string being returned. This constrains the label values, reducing the risk of excessive cardinality.

Do not use headers that could contain unconstrained values (e.g. user id) or user-supplied values.

from starlette_exporter import PrometheusMiddleware, from_header

app.add_middleware(
  PrometheusMiddleware,
  labels={
      "host": from_header("X-Internal-Org", allowed_values=("accounting", "marketing", "product")),
  },
)

Exemplars

Exemplars are used for labeling histogram observations or counter increments with a trace id. This allows adding trace ids to your charts (for example, latency graphs could include traces corresponding to various latency buckets).

To add exemplars to starlette_exporter metrics, pass a dict to the PrometheusMiddleware class mapping each label name to a callback function that returns a string (typically the current trace id).

Example:

# must use `handle_openmetrics` instead of `handle_metrics` for exemplars to appear in /metrics output.
from starlette_exporter import PrometheusMiddleware, handle_openmetrics

app.add_middleware(
  PrometheusMiddleware,
  exemplars={"trace_id": get_trace_id}  # supply your own callback function
)

app.add_route("/metrics", handle_openmetrics)
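
The get_trace_id callback is not provided by starlette_exporter. One possible sketch, assuming OpenTelemetry tracing is already set up (the implementation here is illustrative):

from opentelemetry import trace

def get_trace_id() -> str:
    # format the current span's trace id as a 32-character hex string
    span_context = trace.get_current_span().get_span_context()
    return format(span_context.trace_id, "032x")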

Exemplars are only supported by the openmetrics-text exposition format. A new handle_openmetrics handler function is provided (see above example).

For more information, see the Grafana exemplar documentation.

Optional metrics

Optional metrics are pre-defined metrics that can be added to the default metrics.

  • response_body_size: the size of response bodies returned, in bytes
  • request_body_size: the size of request bodies received, in bytes

Example:

from fastapi import FastAPI
from starlette_exporter import PrometheusMiddleware, handle_metrics
from starlette_exporter.optional_metrics import response_body_size, request_body_size

app = FastAPI()
app.add_middleware(PrometheusMiddleware, optional_metrics=[response_body_size, request_body_size])

Custom Metrics

starlette_exporter will export all the Prometheus metrics registered in the process, so custom metrics can be created using the prometheus_client API.

Example:

from prometheus_client import Counter
from starlette.responses import RedirectResponse

REDIRECT_COUNT = Counter("redirect_total", "Count of redirects", ["redirected_from"])

async def some_view(request):
    REDIRECT_COUNT.labels("some_view").inc()
    return RedirectResponse(url="https://example.com", status_code=302)

The new metric will now be included in the /metrics endpoint output:

...
redirect_total{redirected_from="some_view"} 2.0
...

Multiprocess mode (gunicorn deployments)

Running starlette_exporter in a multiprocess deployment (e.g. with gunicorn) requires the PROMETHEUS_MULTIPROC_DIR env variable to be set, as well as extra gunicorn configuration.
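
A minimal sketch of the extra gunicorn configuration, following the prometheus_client multiprocess documentation (gunicorn.conf.py is gunicorn's conventional config file name):

# gunicorn.conf.py
from prometheus_client import multiprocess

def child_exit(server, worker):
    # clean up the metrics files left behind by exited workers
    multiprocess.mark_process_dead(worker.pid)

Start gunicorn with PROMETHEUS_MULTIPROC_DIR pointing at an empty, writable directory before any workers start.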

For more information, see the Prometheus Python client documentation.

Developing

This package supports Python 3.6+.

git clone https://github.com/stephenhillier/starlette_exporter
cd starlette_exporter
pytest tests

License

Code released under the Apache License, Version 2.0.

Dependencies

https://github.com/prometheus/client_python (>= 0.12)

https://github.com/encode/starlette

Credits

Starlette - https://github.com/encode/starlette

FastAPI - https://github.com/tiangolo/fastapi

Flask exporter - https://github.com/rycus86/prometheus_flask_exporter

Alternate Starlette exporter - https://github.com/perdy/starlette-prometheus

