query-exporter

Export Prometheus metrics from SQL queries

query-exporter is a Prometheus exporter which allows collecting metrics from database queries, at specified time intervals.

It uses SQLAlchemy to connect to different database engines, including PostgreSQL, MySQL, Oracle and Microsoft SQL Server.

Each query can be run on multiple databases, and update multiple metrics.

The application is simply run as:

query-exporter

which will look for a config.yaml configuration file in the current directory, containing the definitions of the databases to connect to and the queries to run to update metrics. A different configuration file can be specified with the --config option (or by setting the QE_CONFIG environment variable). The option can be provided multiple times to pass partial configuration files; the resulting configuration is the merge of the content of each top-level section (databases, metrics, queries).
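This makes it possible, for instance, to keep database definitions separate from metrics and queries (the file names here are just an example):

query-exporter --config databases.yaml --config queries.yaml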

A sample configuration file for the application looks like this:

databases:
  db1:
    dsn: sqlite://
    connect-sql:
      - PRAGMA application_id = 123
      - PRAGMA auto_vacuum = 1
    labels:
      region: us1
      app: app1
  db2:
    dsn: sqlite://
    labels:
      region: us2
      app: app1

metrics:
  metric1:
    type: gauge
    description: A sample gauge
  metric2:
    type: summary
    description: A sample summary
    labels: [l1, l2]
    expiration: 24h
  metric3:
    type: histogram
    description: A sample histogram
    buckets: [10, 20, 50, 100, 1000]
  metric4:
    type: enum
    description: A sample enum
    states: [foo, bar, baz]

queries:
  query1:
    interval: 5
    databases: [db1]
    metrics: [metric1]
    sql: SELECT random() / 1000000000000000 AS metric1
  query2:
    interval: 20
    timeout: 0.5
    databases: [db1, db2]
    metrics: [metric2, metric3]
    sql: |
      SELECT abs(random() / 1000000000000000) AS metric2,
             abs(random() / 10000000000000000) AS metric3,
             "value1" AS l1,
             "value2" AS l2
  query3:
    schedule: "*/5 * * * *"
    databases: [db2]
    metrics: [metric4]
    sql: |
      SELECT metric4 FROM (
        SELECT "foo" AS metric4 UNION
        SELECT "bar" AS metric4 UNION
        SELECT "baz" AS metric4
      )
      ORDER BY random()
      LIMIT 1

See the configuration file format documentation for complete details on available configuration options.
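With a configuration like the one above, the exported metrics would look roughly as follows (values are random by construction, and the exact label set shown here is an assumption based on the configured database labels):

# HELP metric1 A sample gauge
# TYPE metric1 gauge
metric1{app="app1",database="db1",region="us1"} 1234.0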

Exporter options

The exporter provides the following options, which can be set via command-line switches, environment variables or a .env file:

| Command-line option | Environment variable | Default     | Description |
|---------------------|----------------------|-------------|-------------|
| -H, --host          | QE_HOST              | localhost   | Host addresses to bind. Multiple values can be provided. |
| -p, --port          | QE_PORT              | 9560        | Port to run the webserver on. |
| --metrics-path      | QE_METRICS_PATH      | /metrics    | Path under which metrics are exposed. |
| -L, --log-level     | QE_LOG_LEVEL         | info        | Minimum level for log messages. One of critical, error, warning, info, debug. |
| --log-format        | QE_LOG_FORMAT        | plain       | Log output format. One of plain, json. |
| --process-stats     | QE_PROCESS_STATS     | false       | Include process stats in metrics. |
| --ssl-private-key   | QE_SSL_PRIVATE_KEY   |             | Full path to the SSL private key. |
| --ssl-public-key    | QE_SSL_PUBLIC_KEY    |             | Full path to the SSL public key. |
| --ssl-ca            | QE_SSL_CA            |             | Full path to the SSL certificate authority (CA). |
| --check-only        | QE_CHECK_ONLY        | false       | Only check the configuration, don't run the exporter. |
| --config            | QE_CONFIG            | config.yaml | Configuration files. Multiple values can be provided. |
|                     | QE_DOTENV            | $PWD/.env   | Path of the dotenv file from which environment variables can be loaded. |
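For example, options could be provided via a .env file like the following (the values are illustrative):

QE_HOST=0.0.0.0
QE_PORT=9560
QE_LOG_LEVEL=debug
QE_CONFIG=/etc/query-exporter/config.yaml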

Metrics endpoint

The exporter listens on port 9560 providing the standard /metrics endpoint.

By default, the port is bound on localhost. Note that if the host name resolves to both IPv4 and IPv6 addresses, the exporter will bind on both.
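A minimal Prometheus scrape configuration for the exporter might look like this (the job name and target are assumptions for a local setup):

scrape_configs:
  - job_name: query-exporter
    static_configs:
      - targets: ["localhost:9560"]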

Builtin metrics

The exporter provides a few builtin metrics which can be useful to track query execution:

  • database_errors{database="db"}: a counter reporting the number of errors, per database.

  • queries{database="db",query="q",status="[success|error|timeout]"}: a counter reporting the number of executed queries, per database, query and status.

  • query_interval{query="q"}: a gauge reporting the configured execution interval in seconds (if set), per query.

  • query_latency{database="db",query="q"}: a histogram reporting query latencies, per database and query.

  • query_timestamp{database="db",query="q"}: a gauge reporting the last execution timestamp, per database and query.

In addition, metrics for resources usage for the exporter process can be included by passing --process-stats in the command line.
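These builtin metrics can be used directly in alerting rules; for instance, a PromQL expression flagging queries that have failed recently might look like this (a sketch; thresholds and windows depend on your setup):

rate(queries{status="error"}[5m]) > 0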

Database engines

SQLAlchemy doesn’t depend on specific Python database modules at installation. This means additional modules might need to be installed for the engines in use. These can be installed as follows:

pip install SQLAlchemy[postgresql] SQLAlchemy[mysql] ...

based on which database engines are needed.

See supported databases for details.
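Once the appropriate driver is installed, the database is configured through its DSN; for example, a PostgreSQL entry might look like this (the credentials, host and database name are placeholders):

databases:
  pg:
    dsn: postgresql://user:password@pg.example.com:5432/mydb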

Run in Docker

query-exporter can be run inside Docker containers, and is available from the Docker Hub:

docker run --rm -it -p 9560:9560/tcp -v "$CONFIG_DIR:/config" adonato/query-exporter:latest

where $CONFIG_DIR is the absolute path of a directory containing a config.yaml file, the configuration file to use. Alternatively, a volume name can be specified.

If a .env file is present in the specified volume for /config, its content is loaded and applied to the environment for the exporter. The location of the dotenv file can be customized by setting the QE_DOTENV environment variable.
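An equivalent setup with Docker Compose might look like the following (a sketch; the service name and the local ./config directory are assumptions):

services:
  query-exporter:
    image: adonato/query-exporter:latest
    ports:
      - "9560:9560"
    volumes:
      - ./config:/config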

The image supports connections to the following databases:

  • PostgreSQL (postgresql://)

  • MySQL (mysql://)

  • SQLite (sqlite://)

  • Microsoft SQL Server (mssql://)

  • IBM DB2 (db2://) (on x86_64 architecture)

  • Oracle (oracle://)

  • ClickHouse (clickhouse+native://)

  • Teradata (teradatasql://)

A Helm chart to run the container in Kubernetes is also available.

Automated builds from the main branch are available on the GitHub container registry via:

docker pull ghcr.io/albertodonato/query-exporter:main
NOTE: GHCR images are periodically cleaned up and shouldn’t be used for production purposes. They’re meant mainly for testing unreleased features, e.g. from the main branch or pull requests.

Base image

A base image is also available, containing only query-exporter and no additional database drivers. This can be used as a base image for installing only desired drivers, e.g.:

FROM adonato/query-exporter:<version>-base

RUN apt-get update && \
    apt-get install -y <my database driver>
...

Install from Snap


NOTE: the snap is deprecated and will not be updated beyond 4.x releases. Please consider using Docker images instead.

query-exporter can be installed from the Snap Store on systems where Snaps are supported, via:

sudo snap install query-exporter

The snap provides both the query-exporter command and a daemon instance of the command, managed via a Systemd service.

To configure the daemon:

  • create or edit /var/snap/query-exporter/current/config.yaml with the configuration

  • optionally, create a /var/snap/query-exporter/current/.env file with environment variables definitions for additional config options

  • run sudo snap restart query-exporter
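As a concrete walk-through (the paths are those listed above; the configuration content is illustrative):

# write a minimal configuration
sudo tee /var/snap/query-exporter/current/config.yaml > /dev/null <<'EOF'
databases:
  db:
    dsn: sqlite://
metrics:
  up_check:
    type: gauge
    description: A sample gauge
queries:
  q:
    interval: 10
    databases: [db]
    metrics: [up_check]
    sql: SELECT 1 AS up_check
EOF

# restart the daemon to apply the new configuration
sudo snap restart query-exporter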

The snap supports connections to the following databases:

  • PostgreSQL (postgresql://)

  • MySQL (mysql://)

  • SQLite (sqlite://)

  • Microsoft SQL Server (mssql://)

  • IBM DB2 (db2://) (on x86_64 architecture)

Contributing

The project welcomes contributions of any form. Please refer to the contribution guide for details on how to contribute.

For general purpose questions, you can use Discussions on GitHub.
