Python gRPC Prometheus Interceptors, py-grpc-prometheus fork

Project description

grpc-prometheus-metrics

Fork of py-grpc-prometheus: https://github.com/lchenn/py-grpc-prometheus

An instrumentation library that provides Prometheus metrics for gRPC, similar to https://github.com/grpc-ecosystem/go-grpc-prometheus.

Status

Currently, the library has metric parity with the Java and Go libraries.

Server side:

  • grpc_server_started_total
  • grpc_server_handled_total
  • grpc_server_msg_received_total
  • grpc_server_msg_sent_total
  • grpc_server_handling_seconds

Client side:

  • grpc_client_started_total
  • grpc_client_handled_total
  • grpc_client_msg_received_total
  • grpc_client_msg_sent_total
  • grpc_client_handling_seconds
  • grpc_client_msg_recv_handling_seconds
  • grpc_client_msg_send_handling_seconds
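As a rough illustration, a scrape of the metrics endpoint exposes samples such as the following (the label names follow the go-grpc-prometheus convention that this library mirrors, and the service/method names are just the hello-world example, so treat this as a hedged sketch rather than exact output):

grpc_server_started_total{grpc_type="unary",grpc_service="helloworld.Greeter",grpc_method="SayHello"} 3.0
grpc_client_handled_total{grpc_type="unary",grpc_service="helloworld.Greeter",grpc_method="SayHello",grpc_code="OK"} 3.0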

How to use

pip install grpc-prometheus-metrics

Client side:

Client metrics monitoring is done by intercepting the gRPC channel. Take a look at tests/integration/hello_world/hello_world_client.py for the complete example.

import grpc
from prometheus_client import start_http_server
from grpc_prometheus_metrics.prometheus_client_interceptor import PromClientInterceptor

metrics_port = 8000  # example port for the metrics endpoint

# Intercept the channel so that every call made through it is measured.
channel = grpc.intercept_channel(grpc.insecure_channel('server:6565'),
                                 PromClientInterceptor())

# Start an endpoint to expose metrics.
start_http_server(metrics_port)
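Once the channel is wrapped, any stub created from it is instrumented automatically. Continuing the snippet above, a minimal, hedged usage sketch (helloworld_pb2 / helloworld_pb2_grpc are assumed to be stubs generated from the standard hello-world .proto and are not part of this library):

# Hypothetical stubs generated from helloworld.proto.
import helloworld_pb2
import helloworld_pb2_grpc

# Calls made through a stub on the intercepted channel update
# grpc_client_started_total, grpc_client_handled_total and the grpc_client_msg_* counters.
stub = helloworld_pb2_grpc.GreeterStub(channel)
response = stub.SayHello(helloworld_pb2.HelloRequest(name='world'))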

Server side:

Server metrics are exposed by adding the interceptor when the gRPC server is started.

import grpc
from concurrent import futures
from grpc_prometheus_metrics.prometheus_server_interceptor import PromServerInterceptor
from prometheus_client import start_http_server

Start the gRPC server with the interceptor; see tests/integration/hello_world/hello_world_server.py for the complete example.

metrics_port = 8000  # example port for the metrics endpoint

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10),
                     interceptors=(PromServerInterceptor(),))

# Start an endpoint to expose metrics.
start_http_server(metrics_port)
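The snippet above only constructs the server; a minimal, hedged continuation (the servicer registration and listening port are illustrative and depend on your own .proto definitions):

# Register your generated servicer(s) here, e.g.
# helloworld_pb2_grpc.add_GreeterServicer_to_server(GreeterServicer(), server)

server.add_insecure_port('[::]:6565')  # illustrative service port
server.start()
server.wait_for_termination()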

Histograms

Prometheus histograms are a great way to measure latency distributions of your RPCs. However, since it is bad practice to have metrics of high cardinality, the latency monitoring metrics are disabled by default. To enable them, pass the following flag when initializing the interceptor:

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10),
                     interceptors=(PromServerInterceptor(enable_handling_time_histogram=True),))

After a call completes, its handling time is recorded in the Prometheus histogram grpc_server_handling_seconds, which exposes three sub-metrics:

  • grpc_server_handling_seconds_count - the count of all completed RPCs by status and method
  • grpc_server_handling_seconds_sum - cumulative time of RPCs by status and method, useful for calculating average handling times
  • grpc_server_handling_seconds_bucket - contains the counts of RPCs by status and method in the respective handling-time buckets. Prometheus can use these buckets to estimate latency percentiles and SLAs (see the example query after this list)
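For example, the bucket counts can be combined with histogram_quantile to estimate latency percentiles. A hedged PromQL sketch (the label names follow the go-grpc-prometheus convention and may differ in your deployment):

histogram_quantile(0.99,
  sum(rate(grpc_server_handling_seconds_bucket[5m])) by (grpc_method, le))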

Server Side:

  • enable_handling_time_histogram: Enables 'grpc_server_handling_seconds'

Client Side:

  • enable_client_handling_time_histogram: Enables 'grpc_client_handling_seconds'
  • enable_client_stream_receive_time_histogram: Enables 'grpc_client_msg_recv_handling_seconds'
  • enable_client_stream_send_time_histogram: Enables 'grpc_client_msg_send_handling_seconds'
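For example, a client interceptor with all three client-side histograms enabled could be constructed as follows (a sketch assuming the flags listed above are accepted as constructor keyword arguments, mirroring the server example):

channel = grpc.intercept_channel(
    grpc.insecure_channel('server:6565'),
    PromClientInterceptor(
        enable_client_handling_time_histogram=True,
        enable_client_stream_receive_time_histogram=True,
        enable_client_stream_send_time_histogram=True))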

Legacy metrics:

Metric names have been updated to be in line with those from https://github.com/grpc-ecosystem/go-grpc-prometheus.

The legacy metrics are:

server side:

  • grpc_server_started_total
  • grpc_server_handled_total
  • grpc_server_handled_latency_seconds
  • grpc_server_msg_received_total
  • grpc_server_msg_sent_total

client side:

  • grpc_client_started_total
  • grpc_client_completed
  • grpc_client_completed_latency_seconds
  • grpc_client_msg_sent_total
  • grpc_client_msg_received_total

To use these legacy metrics for backwards compatibility, set the legacy flag to True when initializing the server/client interceptors.

For example, to enable the server side legacy metrics:

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10),
                     interceptors=(PromServerInterceptor(legacy=True),))
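The client interceptor takes the same flag, so the corresponding client-side sketch would be:

channel = grpc.intercept_channel(grpc.insecure_channel('server:6565'),
                                 PromClientInterceptor(legacy=True))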

How to run and test

make initialize-development
make test


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

grpc_prometheus_metrics-0.0.5.tar.gz (11.8 kB, Source)

Built Distribution

grpc_prometheus_metrics-0.0.5-py3-none-any.whl (15.7 kB, Python 3)

File details

Details for the file grpc_prometheus_metrics-0.0.5.tar.gz.

File metadata

  • Download URL: grpc_prometheus_metrics-0.0.5.tar.gz
  • Size: 11.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.10.13 Linux/6.2.0-1019-azure

File hashes

Hashes for grpc_prometheus_metrics-0.0.5.tar.gz:

  • SHA256: e526d621155929b65e9509eade2845964886d2edb5b18a413bfc55dcbac72c51
  • MD5: 218f674cc34c2972040228064858dbfc
  • BLAKE2b-256: 5fcfd9aa501a87bf7921fc6e6c04b61c7bf94614e4311ca77308d21b2b4e48b7

File details

Details for the file grpc_prometheus_metrics-0.0.5-py3-none-any.whl.

File hashes

Hashes for grpc_prometheus_metrics-0.0.5-py3-none-any.whl:

  • SHA256: c9f6681107381ebfa8cc7059b0a39a333a3cf323b7738d13fe3b4be7fa49dafd
  • MD5: 3377d9a5f3fbfaf159049a7806c3578f
  • BLAKE2b-256: 91c47cfc463443a420a581aa2d2d798366625c9f5d404bbc71d4853483707b50
