
A daemon that scans reaction outputs and serves an HTTP OpenMetrics endpoint


reaction-metrics-exporter

[!note] 💚 A lot of inspiration has been drawn from dmarc-metrics-exporter.

Export OpenMetrics for reaction. The exporter continuously monitors and parses reaction's logs and state.

The following metrics are collected and exposed through an HTTP endpoint:

  • reaction_match_total: total number of matches;
  • reaction_action_total: total number of actions;
  • reaction_pending_count: current number of pending actions.

All metrics are labelled with stream and filter. Action-related metrics have an additional action label.

⚠️ In the long term, metrics will be integrated into reaction. Whether they will be backward-compatible depends on long-term relevance and performance.


Quick start

[!caution] ⚠️ Do not use in production as-is; see real-world setup.

Prerequisites

Install

python3 -m pip install reaction-metrics-exporter

It is recommended to install the exporter in a virtualenv.
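
For instance, a minimal virtualenv setup could look like this (the install location is only a suggestion):

python3 -m venv /opt/reaction-metrics-exporter/venv
/opt/reaction-metrics-exporter/venv/bin/pip install reaction-metrics-exporter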

Configure

Create a configuration file, e.g. config.yml:

metrics:
  # export all possible metrics
  export:
    matches:
    actions:
    pending:

reaction:
  # as you would pass to `reaction test-config`
  config: /etc/reaction
  logs:
    # monitor logs for `reaction.service`
    systemd:

persist:
  # save metrics from time to time
  folder: ~/.local/share/reaction-metrics-exporter

[!tip] Using a log file?

reaction:
  # ...
  logs:
    # replace with your log path
    file: /var/log/reaction.log

Run

python3 -m reaction_metrics_exporter -c /etc/reaction-metrics-exporter/config.yml start

Metrics are exposed at http://localhost:8080/metrics.
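
You can check that the endpoint answers, for example with curl:

curl http://localhost:8080/metrics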

💡 Metrics are written to disk periodically (every 10 minutes by default) and on exit. They are reloaded on subsequent starts.

The matches dilemma

reaction matches often contain valuable information, such as IP addresses. Exporting them as metric labels is kind of hackish; in theory they should be sent to a log database, but those are heavy and less common. The default configuration is conservative and does not export them.

How matches can become a problem

Quoting the Prometheus docs:

CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values.

For example, metrics exported from the SSH filter look like:

reaction_matches_total{stream="ssh",filter="failedlogin",ip="X.X.X.X"}: N

N being the number of matches for this unique combination of labels.

⚠️ Each new IP address will therefore create a new line in the exported data and a new time series in the TSDB. For large instances, this can result in storage and performance issues.

Choosing exported matches

You need to explicitly specify which patterns you want to export.

For example, to export ip matches of the failedlogin filter from the ssh stream:

metrics:
  for:
    ssh:
      failedlogin:
        ip:

If you use the pattern ip in multiple streams, you can avoid repetition by exporting it globally:

metrics:
  all:
    ip:

Pre-treating matches

In some cases, you may want to transform matches prior to exporting. You can do so with Jinja2 expressions.

For example, for an email pattern, you could keep only the domain part in metrics: first to reduce cardinality, second to avoid storing too much personal data in the TSDB. This can be achieved with:

metrics:
  all:
    email: "{{ email.split('@') | last }}

Tweaking metrics

To delete a metric from exports, simply remove the corresponding key in the configuration. You can alternatively disable matches export for individual metrics as these are essentially redundant.

To include meta-metrics (Python, GC, CPU...), add the internals key.

In this example...

metrics:
  export:
    matches:
      labels: false
    actions:
    internals:

  • matches will be exported with limited labels (stream and filter);
  • actions will be exported with all matches;
  • pending actions will not be exported;
  • meta-metrics will be exported.

Automatically forgetting metrics

You can configure the exporter to forget metrics periodically:

persist:
  # you can use any number followed by M (minutes), H (hours), d (days),
  # w (weeks), m (months) or y (years) 
  forget: 1m

This approach has a drawback: any plot relying on the absolute values of counters will reset. In practice, such plots are rare and rate or increase-like functions are used instead. Fortunately, these ignore breaks in monotonicity.

Besides, it is possible to approximate counters as if they hadn't been reset using VictoriaMetrics: see the visualization section.

🕧 The duration depends on your setup. Start without forget and monitor the size of the HTTP response and the size of your TSDB.
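
For instance, a rough way to keep an eye on the size of the exposed data:

# number of exposed lines (series + comments)
curl -s http://localhost:8080/metrics | wc -l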

[!tip] 💡 A backup file is created before forgetting.

Usage details

Configuration

You can either provide a YAML file or a JSON file. Albeit not recommended, you can run the exporter without a configuration file.

The default configuration looks like this:

{
    // only stdout is supported atm
    "loglevel": "INFO",
    "listen": {
        "port": 8080,
        "address": "127.0.0.1"
    },
    "metrics": {
        "all": {},
        // ⚠️ no metrics exported by default!
        "export": {},
        "for": {}
    },
    "reaction": {
        "config": "/etc/reaction",
        "logs": {
            "systemd": "reaction.service"
        },
        // same default as reaction
        "socket": "/run/reaction/reaction.sock"
    },
    "persist": {
        // in seconds (e.g. 10 minutes)
        "interval": 600,
        "folder": "/var/lib/reaction-metrics-exporter",
        // never-ish
        "forget": "10y"
    }
}

Ingesting existing logs

You may want to calculate metrics from existing logs. Whilst possible, there are several limitations:

  • the exporter needs the configuration to be aligned with the logs, especially for streams, filters and patterns.
  • any previously exported metrics will be erased to avoid duplication.

The following command reads all known logs, calculates metrics, saves them and exits.

python3 -m reaction_metrics_exporter -c config.yml init

You can then launch the usual command (start).

👉 Use this command if something has gone wrong with your metrics (hopefully not) and you have kept the logs.

[!warning] ⚠️ Unless... If metrics have already been forgotten by the exporter, re-ingesting old logs will mess up some visualizations, as sudden bumps cannot be ignored in calculations (contrary to resets).

Commands

usage: python -m reaction_metrics_exporter [-h] [-c CONFIG] [-f] [-y] {init,start,clear,defaults,test-config}

positional arguments:
  {init,start,clear,defaults,test-config}
                        mode of operation; see below

options:
  -h, --help            show this help message and exit
  -c, --config CONFIG   path to the configuration file (JSON or YAML)
  -f, --force           force clear even if backup is impossible, then delete backup
  -y, --yes             disable interaction. caution with init and clear

command:
    init: read all existing logs, compute metrics, save on disk and exit
    start: continuously read **new** logs, compute and save metrics; serve HTTP endpoint
    clear: make a backup and delete all existing metrics (-f to force)
    defaults: print the default configuration in json
    test-config: validate and output configuration in json
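
For example, to check a configuration file before starting:

python3 -m reaction_metrics_exporter -c config.yml test-config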

Real-world setup

Create an unprivileged user

The exporter should run as an unprivileged system user. Among numerous reasons:

  • the exporter is exposed on the web;
  • it parses arbitrary data;
  • it has a lot of dependencies;
  • I am neither a developer nor a security expert.

This user should be able to read journald logs and to communicate with reaction's socket.

First create a user and a group, then add the user to the systemd-journal group.

# creates group automatically
/sbin/adduser reaction-metrics-exporter --no-create-home --system
usermod -aG systemd-journal reaction-metrics-exporter

Then, open an editor to modify reaction's service:

systemctl edit reaction.service

Paste the following under the [Service] section:

# Files (inc. socket) created by reaction will be owned by this group
Group=reaction-metrics-exporter
# Group will have permission for read and write
UMask=0002

Restart reaction:

systemctl daemon-reload
systemctl restart reaction

Check that you are able to communicate with reaction and to read the journal as this user:

sudo su reaction-metrics-exporter
reaction show
journalctl -feu reaction

Running with systemd

A service file is provided: save it to /etc/systemd/system.

You will need to adjust the configuration path (and, if you use a venv, its Python path) in the ExecStart= directive.

💡 The persistence directory is created automatically by systemd in /var/lib.
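
If you would rather write the unit yourself, a minimal sketch could look like the following (paths and directive values are assumptions; the provided file remains the reference):

[Unit]
Description=OpenMetrics exporter for reaction
After=network.target

[Service]
User=reaction-metrics-exporter
Group=reaction-metrics-exporter
# systemd creates /var/lib/reaction-metrics-exporter
StateDirectory=reaction-metrics-exporter
# adjust the interpreter (venv) and configuration paths
ExecStart=/opt/reaction-metrics-exporter/venv/bin/python -m reaction_metrics_exporter -c /etc/reaction-metrics-exporter/config.yml start

[Install]
WantedBy=multi-user.target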

Enable and start the exporter:

systemctl daemon-reload
systemctl enable --now reaction-metrics-exporter.service

Follow the logs with:

journalctl -feu reaction-metrics-exporter.service

Running with Docker

[!caution] ⬆️ Make sure you completed the rootless setup.

Start inside the docker directory.

Create a .env file:

UID=
GID=
JOURNAL_GID=

Values can be obtained from the output of id reaction-metrics-exporter.
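
For example, assuming the user created earlier, the file could be generated with:

echo "UID=$(id -u reaction-metrics-exporter)" >> .env
echo "GID=$(id -g reaction-metrics-exporter)" >> .env
echo "JOURNAL_GID=$(getent group systemd-journal | cut -d: -f3)" >> .env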

You may need to adjust the default mounts in compose.yml. Expectations are (see the sketch after this list):

  • reaction's configuration mounted on /etc/reaction;
  • reaction's socket mounted on /run/reaction/reaction.sock;
  • journald file mounted on /var/log/journal.
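
Under these assumptions, the volumes section could look like this sketch (host paths are illustrative):

services:
  rme:
    volumes:
      - /etc/reaction:/etc/reaction:ro
      - /run/reaction/reaction.sock:/run/reaction/reaction.sock
      - /var/log/journal:/var/log/journal:ro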

A sample configuration file is provided. Tweak it to fit your needs (don't forget to add matches if needed).

If you want to init:

docker compose up rme-init

To start exposing metrics:

docker compose up -d rme && docker compose logs -f

The exporter is mapped to the host's 8081 port by default.
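
You can then check the endpoint from the host:

curl http://localhost:8081/metrics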

[!tip] Optionally, you can build the image yourself:

docker compose build

Visualising data

🚧 WIP!

Development setup

In addition to the prerequisites, you need Poetry.

# inside the cloned repository
poetry install
# run app
poetry run python -m reaction_metrics_exporter [...]
# run tests
poetry run pytest
