A daemon that scans reaction outputs and serves an HTTP OpenMetrics endpoint

reaction-metrics-exporter

[!note] 💚 A lot of inspiration has been drawn from dmarc-metrics-exporter.

Export OpenMetrics for reaction. The exporter continuously monitors and parses reaction's logs and state.

The following metrics are collected and exposed through an HTTP endpoint:

  • reaction_matches_total: total number of matches;
  • reaction_actions_total: total number of actions;
  • reaction_pending_count: current number of pending actions.

All metrics are labelled with stream, filter and matched patterns. Action-related metrics have an additional action label.

For example, matches exported from the SSH filter look like:

reaction_matches_total{stream="ssh",filter="failedlogin",ip="X.X.X.X"} N

where N is the number of matches for this unique combination of labels.
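To make the shape of such a line concrete, here is a minimal, illustrative parser for one line of the standard Prometheus/OpenMetrics text format. This is not code from the exporter; the regex is a simplification that ignores escaping inside label values.

```python
import re

# Simplified pattern: metric name, {label="value",...}, then a sample value.
LINE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)\{(?P<labels>[^}]*)\} (?P<value>\S+)$'
)

def parse(line: str) -> tuple[str, dict, float]:
    m = LINE.match(line)
    labels = dict(
        (k, v.strip('"'))
        for k, v in (pair.split('=', 1) for pair in m.group('labels').split(','))
    )
    return m.group('name'), labels, float(m.group('value'))

name, labels, value = parse(
    'reaction_matches_total{stream="ssh",filter="failedlogin",ip="1.2.3.4"} 3'
)
# name is the metric, labels maps stream/filter/ip, value is the sample count
```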

ℹ️ In the long term, metrics will be integrated into reaction itself. Whether they remain backward-compatible will depend on long-term relevance and performance.

⚠️ In the long term, with a high number of matches and actions, your TSDB may grow excessively. Read further if in doubt.

Quick start

[!caution] ⚠️ Do not use in production as-is; see real-world setup.

Prerequisites

Install

python3 -m pip install reaction-metrics-exporter

Configure

Create a configuration file, e.g. config.yml:

reaction:
  # as you would pass to `reaction test-config`
  config: /etc/reaction
  logs:
    # monitor logs for `reaction.service`
    systemd:

[!tip] Using a log file?

reaction:
  # ...
  logs:
    # replace with your log path
    file: /var/log/reaction.log

Run

python3 -m reaction_metrics_exporter -c /etc/reaction-metrics-exporter/config.yml start

Metrics are exposed at http://localhost:8080/metrics.

Real-world setups

Create an unprivileged user

For security reasons, the exporter should run as an unprivileged system user. This is a prerequisite for the setups below.

This user should be able to read journald logs and to communicate with reaction's socket.

To do so, first create a user and a group, then add the user to the systemd-journal group.

# --group creates a matching `reaction` group
/sbin/adduser reaction --no-create-home --system --group
usermod -aG systemd-journal reaction

Then, open an editor to override some settings of reaction:

systemctl edit reaction.service

Paste the following and save:

[Service]
# Reaction will run as this group
Group=reaction
# Files will be created with rwxrwxr-x
UMask=0002
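The UMask= value works by clearing permission bits from the default creation modes. This is standard POSIX behaviour, not exporter-specific; a minimal Python illustration of why 0002 yields group-writable files and directories:

```python
# UMask=0002 clears only the "other write" bit.
# Directories are requested with mode 0o777 and regular files with 0o666:
umask = 0o002
dir_mode = 0o777 & ~umask    # directories end up rwxrwxr-x
file_mode = 0o666 & ~umask   # files end up rw-rw-r--
print(oct(dir_mode), oct(file_mode))
```

The group-write bit is what lets members of the `reaction` group (the exporter) read state files reaction creates.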

Restart reaction:

systemctl daemon-reload
systemctl restart reaction

[!tip] Check that it worked

sudo su reaction
reaction show
journalctl -feu reaction

Running with systemd

It is recommended to install the exporter in a virtualenv.

Save the service file to /etc/systemd/system. Adjust the venv path and the configuration path (notably the Python interpreter path of your venv) in the ExecStart= directive.

Enable and start the exporter:

systemctl daemon-reload
systemctl enable --now reaction-metrics-exporter.service

Follow the logs with:

journalctl -feu reaction-metrics-exporter.service

Running with Docker

Start inside the docker directory.

Create a .env file:

UID=
GID=
JOURNAL_GID=

Values can be obtained from the output of id exporter-reaction.

You may need to adjust the default mounts in compose.yml. Expectations are:

  • reaction's configuration mounted on /etc/reaction;
  • reaction's socket mounted on /run/reaction/reaction.sock;
  • journald file mounted on /var/log/journal.

Use the sample configuration file, tweak it to your needs, and run the exporter:

docker compose up -d rme && docker compose logs -f

The exporter is mapped to the host's 8080 port by default.

[!tip] Optionally, you can build the image yourself:

docker compose build

Usage details

Configuration

You can provide either a YAML or a JSON file. Albeit not recommended, you can also run the exporter without a configuration file.

The default configuration is as follows:

{
    // only stdout is supported atm
    "loglevel": "INFO",
    "listen": {
        "port": 8080,
        "address": "127.0.0.1"
    },
    // all metrics with labels are exported by default
    "metrics": {
        "all": {},
        "export": {
            "actions": {
                "extra": true
            },
            "matches": {
                "extra": true
            },
            "pending": {
                "extra": true
            }
        },
        "for": {}
    },
    "reaction": {
        "config": "/etc/reaction",
        "logs": {
            "systemd": "reaction.service"
        },
        // same default as reaction
        "socket": "/run/reaction/reaction.sock"
    }
}
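A partial user configuration only needs to override the keys it changes; the rest fall back to these defaults. How such an overlay could work is sketched below with a recursive dict merge. This is an illustration, not the exporter's actual implementation, and only a subset of the defaults is reproduced:

```python
from copy import deepcopy

# Subset of the defaults shown above.
DEFAULTS = {
    "loglevel": "INFO",
    "listen": {"port": 8080, "address": "127.0.0.1"},
    "reaction": {
        "config": "/etc/reaction",
        "logs": {"systemd": "reaction.service"},
        "socket": "/run/reaction/reaction.sock",
    },
}

def merge(base: dict, override: dict) -> dict:
    """Recursively overlay `override` onto `base` (sketch, not the exporter's code)."""
    out = deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

# A user config that only changes the log source, as in the file-based tip above:
user = {"reaction": {"logs": {"file": "/var/log/reaction.log"}}}
config = merge(DEFAULTS, user)
```

Note that with this naive merge both `systemd` and `file` would coexist under `logs`; the real exporter may instead treat the log source as exclusive.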

Commands

usage: python -m reaction_metrics_exporter [-h] [-c CONFIG] {start,defaults,test-config}

positional arguments:
  {start,defaults,test-config}
                        mode of operation; see below

options:
  -h, --help            show this help message and exit
  -c, --config CONFIG   path to the configuration file (JSON or YAML)

command:
    start: continuously read logs, compute metrics and serve HTTP endpoint
    defaults: print the default configuration in JSON
    test-config: validate the configuration and output it in JSON

Pre-treating matches

In some cases, you may want to transform matches prior to exporting, instead of relabelling. You can do so with Jinja2 expressions.

For example, for an email pattern, you could keep only the domain part in metrics. This can be achieved with:

metrics:
  all:
    email: "{{ email.split('@') | last }}"
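What the template computes, expressed in plain Python: `split('@')` cuts the match on `@`, and the `last` filter keeps the final part, i.e. the domain.

```python
def domain_part(email: str) -> str:
    # Equivalent of the Jinja2 expression {{ email.split('@') | last }}
    return email.split('@')[-1]

print(domain_part('alice@example.com'))  # prints "example.com"
```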

You can also differentiate by stream and filter:

metrics:
  for:
    ssh:
      failedlogin:
        ip: TEMPLATE_A
    traefik:
      aiBots:
        ip: TEMPLATE_B

Enabling internals metrics

The Prometheus client library exposes some default metrics about Python, the garbage collector and so on, which I found useless. You can nevertheless enable them:

metrics:
  export:
    internals:

The patterns dilemma

reaction matches often contain valuable information, such as IP addresses.

How matches could become a problem

Quoting the Prometheus docs:

CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values.

In other words, each new IP address creates a new time series in the TSDB. For large instances, this may result in storage and performance issues.
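A back-of-the-envelope estimate makes the growth tangible: each unique combination of label values is one time series. The numbers below are made up for illustration.

```python
from itertools import product

# Two (stream, filter) pairs, as in the examples elsewhere in this README.
streams_filters = [("ssh", "failedlogin"), ("traefik", "aiBots")]
unique_ips = 10_000  # distinct IPs seen over the retention period (hypothetical)

# One time series per unique (stream, filter, ip) label combination.
series = {
    (stream, filt, ip)
    for (stream, filt), ip in product(streams_filters, range(unique_ips))
}
print(len(series))  # prints 20000: series for the matches metric alone
```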

However, TSDBs can handle our metrics in most cases. Prometheus used to claim it could handle millions of time series, and VictoriaMetrics claims to handle 100 million active time series. Besides, our time series usually have very few data points. We use OpenMetrics in a somewhat hackish way, so recommendations tailored for active and dense time series do not apply.

You should simply test and check after a few months. In most cases, the retention period will kick in before you run into trouble.

You can still disable the export of some metrics or patterns, but this will break the Grafana dashboard.

Disable metrics

metrics:
  export:
    actions: false
    matches: false
    pending: false

Disable patterns

For example, to remove ip from the export for the failedlogin filter of the ssh stream:

metrics:
  for:
    ssh:
      failedlogin:
        ip: false

If you use the pattern ip in multiple streams, you can avoid repetition by removing it globally:

metrics:
  all:
    ip: false

Visualising data

🚧 WIP !

Development setup

In addition to the prerequisites, you need Poetry.

# inside the cloned repository
poetry install
# run app
poetry run python -m reaction_metrics_exporter [...]
# run tests
poetry run pytest
