
Instrument your FastAPI with Prometheus metrics


Prometheus FastAPI Instrumentator


A configurable and modular Prometheus Instrumentator for your FastAPI. Install prometheus-fastapi-instrumentator from PyPI. Here is the fast track to get started with a preconfigured instrumentator:

from prometheus_fastapi_instrumentator import Instrumentator

Instrumentator().instrument(app).expose(app)

With this, your FastAPI is instrumented and metrics are ready to be scraped. The sensible defaults give you:

  • Counter http_requests_total with handler, status and method. Total number of requests.
  • Summary http_request_size_bytes with handler. Sum of the content lengths of all incoming requests. If the request has no valid content length, 0 bytes will be assumed.
  • Summary http_response_size_bytes with handler. Sum of the content lengths of all outgoing responses. If the response has no valid content length, 0 bytes will be assumed.
  • Histogram http_request_duration_seconds with handler. Only a few buckets to keep cardinality low. Use it for aggregations by handler or SLI buckets.
  • Histogram http_request_duration_highr_seconds without any labels. Large number of buckets (>20) for accurate percentile calculations.

In addition, the following behaviour is active:

  • Status codes are grouped into 2xx, 3xx and so on.
  • Requests without a matching template are grouped into the handler none.

If one of these presets does not suit your needs you can tweak behaviour or register your own metric handler with the instrumentator. Find out here how to do that.
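
For orientation, here is a minimal, self-contained sketch of the fast track in a complete application (the /ping route is illustrative):

from fastapi import FastAPI
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()


@app.get("/ping")
def ping():
    return {"message": "pong"}


# Instrument all routes and expose the collected metrics.
Instrumentator().instrument(app).expose(app)

After starting the app, the default metrics described above are served in the Prometheus text format under /metrics, the default path used by expose().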


Contents: Features | Advanced Usage | Creating the Instrumentator | Adding metrics | Creating new metrics | Perform instrumentation | Exposing endpoint | Documentation | Prerequisites | Development


Features

Beyond the fast track, this instrumentator is highly configurable and easy to adapt to your specific use case. Here is a list of some of the options you can opt in to:

  • Regex patterns to ignore certain routes.
  • Completely ignore untemplated routes.
  • Control instrumentation and exposition with an env var.
  • Rounding of latencies to a certain decimal number.
  • Renaming of labels and the metric.

It also features a modular approach to metrics that should instrument all FastAPI endpoints. You can either choose from a set of existing metrics or create your own, and every metric function can be configured on its own. You can see the ready-to-use metrics here.

Advanced Usage

This chapter contains an example of the advanced usage of the Prometheus FastAPI Instrumentator to showcase most of its features. For more concrete info, check out the automatically generated documentation.

Creating the Instrumentator

We start by creating an instance of the Instrumentator. Notice the additional metrics import. This will come in handy later.

from prometheus_fastapi_instrumentator import Instrumentator, metrics

instrumentator = Instrumentator(
    should_group_status_codes=False,
    should_ignore_untemplated=True,
    should_respect_env_var=True,
    excluded_handlers=[".*admin.*", "/metrics"],
    env_var_name="ENABLE_METRICS",
)

Unlike in the fast track example, instrumentation and exposition will now only take place if the environment variable ENABLE_METRICS is true at run-time. This can be helpful in larger deployments with multiple services depending on the same base FastAPI.
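
For example, here is a sketch of opting a single process in. The value "true" follows the description above; which values exactly count as true is a detail of the library, so check the documentation if in doubt.

import os

# Sketch: enable instrumentation and exposition for this process. The
# variable has to be true at run-time, so it is set before instrument()
# and expose() are called further below.
os.environ["ENABLE_METRICS"] = "true"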

Adding metrics

Let's say we also want to instrument the size of requests and responses. For this we use the add() method. This method does nothing more than take a function and add it to a list. At run-time, every time FastAPI handles a request, all functions in this list are called with a single argument that stores useful information like the request and response objects. If add() is never used, the default metric gets added in the background. This is what happens in the fast track example.

All instrumentation functions are stored as closures in the metrics module. Closures come in handy here because they allow us to configure the functions within.

instrumentator.add(metrics.latency(buckets=(1, 2, 3,)))

This simply adds the metric you also get in the fast track example with a modified buckets argument. But we would also like to record the size of all requests and responses.

instrumentator.add(
    metrics.request_size(
        should_include_handler=True,
        should_include_method=False,
        should_include_status=True,
        metric_namespace="a",
        metric_subsystem="b",
    )
).add(
    metrics.response_size(
        should_include_handler=True,
        should_include_method=False,
        should_include_status=True,
        metric_namespace="namespace",
        metric_subsystem="subsystem",
    )
)

You can add as many metrics as you like to the instrumentator.

Creating new metrics

As already mentioned, it is possible to create custom functions to pass on to add(). This is also how the default metrics are implemented. The documentation and code are helpful to get an overview.

The basic idea is that the instrumentator creates an info object that contains everything necessary for instrumentation based on the configuration of the instrumentator. This includes the raw request and response objects but also the modified handler, grouped status code and duration. Next, all registered instrumentation functions are called. They get info as their single argument.

Let's say we want to count the number of times a certain language has been requested.

from typing import Callable

from prometheus_client import Counter
from prometheus_fastapi_instrumentator.metrics import Info


def http_requested_languages_total() -> Callable[[Info], None]:
    # The metric object lives in the enclosing scope and is shared
    # across all calls of the returned closure.
    METRIC = Counter(
        "http_requested_languages_total",
        "Number of times a certain language has been requested.",
        labelnames=("langs",),
    )

    def instrumentation(info: Info) -> None:
        langs = set()
        lang_str = info.request.headers["Accept-Language"]
        for element in lang_str.split(","):
            element = element.split(";")[0].strip().lower()
            langs.add(element)
        for language in langs:
            METRIC.labels(language).inc()

    return instrumentation

The outer function http_requested_languages_total is used for persistent elements that are shared across all instrumentation executions (for example the metric instance itself). Next comes the closure. This function must adhere to the shown interface: it always gets an Info object that contains the request, the response, and a few other pieces of derived information, for example the (grouped) status code or the handler. Finally, the closure is returned.

Important: The response attribute inside info can either be the response object or None. In addition, errors thrown in the handler are not caught by the instrumentator. I recommend checking the documentation and/or the source code before creating your own metrics.

To use it, we hand over the closure to the instrumentator object.

instrumentator.add(http_requested_languages_total())
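
Complementing the important note above, here is a sketch of a closure that guards against a missing response. The metric is hypothetical, and the attribute name modified_handler is an assumption based on the description of the info object; verify it against the documentation before relying on it.

from typing import Callable

from prometheus_client import Counter
from prometheus_fastapi_instrumentator.metrics import Info


def http_responseless_requests_total() -> Callable[[Info], None]:
    # Hypothetical example metric counting requests that ended without
    # a response object, grouped by handler.
    METRIC = Counter(
        "http_responseless_requests_total",
        "Number of requests for which no response object was available.",
        labelnames=("handler",),
    )

    def instrumentation(info: Info) -> None:
        # info.response can be None, e.g. if the handler raised an error.
        if info.response is None:
            METRIC.labels(info.modified_handler).inc()

    return instrumentation


instrumentator.add(http_responseless_requests_total())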

Perform instrumentation

Up to this point, the FastAPI has not been touched at all. Everything has been stored in the instrumentator only. To actually register the instrumentation with FastAPI, the instrument() method has to be called.

instrumentator.instrument(app)

Notice that this will do nothing if should_respect_env_var has been set during construction of the instrumentator object and the respective env var is not found.

Exposing endpoint

To expose an endpoint for the metrics, either follow the Prometheus Python Client documentation and add the endpoint manually to the FastAPI, or serve it on a separate server. You can also use the included expose() method. It will add an endpoint to the given FastAPI.

instrumentator.expose(app, include_in_schema=False)

Notice that this will do nothing if should_respect_env_var has been set during construction of the instrumentator object and the respective env var is not found.
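
For completeness, here is a sketch of the manual alternative mentioned above, built only on primitives from the Prometheus Python client (generate_latest and CONTENT_TYPE_LATEST). The route path mirrors the default /metrics endpoint, and app is the FastAPI instance from the walkthrough.

from fastapi.responses import Response
from prometheus_client import CONTENT_TYPE_LATEST, generate_latest


@app.get("/metrics", include_in_schema=False)
def metrics_endpoint() -> Response:
    # Render the current state of the default Prometheus registry in
    # the text exposition format.
    return Response(content=generate_latest(), media_type=CONTENT_TYPE_LATEST)

With this approach the exposition is fully under your control, but instrument() still has to be called so that the registry actually contains the metrics.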

Documentation

The documentation is hosted here.

Prerequisites

  • python = "^3.6" (tested with 3.6 and 3.8)
  • fastapi = ">=0.38.1, <=1.0.0" (tested with 0.38.1 and 0.61.0)
  • prometheus-client = "^0.8.0" (tested with 0.8.0)

Development

Developing and building this package on a local machine requires Python Poetry. I recommend running Poetry in tandem with Pyenv. Once the repository is cloned, run poetry install and poetry shell. From there you may start the IDE of your choice.

Take a look at the Makefile or workflows on how to test this package.


