
A daemon that scans reaction outputs and serves an HTTP OpenMetrics endpoint

Project description

reaction-metrics-exporter

[!note] 💚 A lot of inspiration has been drawn from dmarc-metrics-exporter.

Export OpenMetrics for reaction. The exporter continuously monitors and parses reaction's logs and state.

Main metrics

  • reaction_match_count: number of matches since last collection;
  • reaction_action_count: number of actions since last collection;
  • reaction_pending_count: current number of pending actions.

All metrics are labelled with stream, filter and matched patterns. Action-related metrics have an additional action label.

For example, matches exported from the SSH filter look like:

reaction_match_count{stream="ssh",filter="failedlogin",ip="X.X.X.X"}: N

N being the number of matches since last collection for this unique combination of labels.
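
Action-related metrics carry the additional action label; for example (the action name shown here is purely illustrative):

reaction_action_count{stream="ssh",filter="failedlogin",ip="X.X.X.X",action="ban"}: N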

Secondary metrics

  • reaction_exporter_build_info: value always 1, labelled with the version of the exporter;
  • reaction_status_info: value always 1, labelled with the version of reaction and its status;
  • reaction_commands_count: count of start/stop commands run by reaction since the last collection, labelled with command name, status and exit code;
  • reaction_stream_event_count: count of stream events since the last collection, labelled with stream name and event type, which is one of:
    • start: stream started successfully;
    • exit: stream exited;
    • exec: stream is unexecutable;
    • read: stream is unreadable;
    • timeout: stream did not terminate in time on exit.
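
For illustration, these metrics are exposed along the following lines (label keys and values shown here are indicative, not authoritative):

reaction_exporter_build_info{version="1.0.1"} 1.0
reaction_status_info{version="X.Y.Z",status="1"} 1.0
reaction_commands_count{command="iptables",status="True",exit_code="0"} 1.0
reaction_stream_event_count{stream="ssh",type="exit"} 1.0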

Side notes

ℹ️ In the long term, reaction will probably expose built-in metrics. Whether the exporter's metrics are adopted will depend on their long-term relevance and performance.

⚠️ For very large numbers of matches and actions, your TSDB may grow too much. Read "the patterns dilemma" if in doubt.

Quick start

[!caution] ⚠️ Do not use in production as-is; see real-world setup.

Prerequisites

To run the exporter

To exploit the metrics

Install

python3 -m pip install reaction-metrics-exporter

Configure

Create a configuration file, e.g. config.yml:

reaction:
  # as you would pass to `reaction test-config`
  config: /etc/reaction
  logs:
    # monitor logs for `reaction.service`
    systemd:

[!tip] Using a log file?

reaction:
  # ...
  logs:
    file: /var/log/reaction.log

Run

python3 -m reaction_metrics_exporter -c config.yml start

Metrics are exposed at http://localhost:8080/metrics.
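
You can quickly check that the endpoint answers, e.g. with curl:

curl -s http://localhost:8080/metrics | head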

💡 The exporter supports maintenance windows (arbitrarily set to 15 minutes): it saves the last read log entry and the unexported metrics so it can restart from that point.

Real-world setups

Create an unprivileged user

For security reasons, the exporter should run as an unprivileged system user.

This user should be able to read journald logs and to communicate with reaction's socket.

To do so, first create a user and a group, then add the user to the systemd-journal group.

# creates group automatically
/sbin/adduser reaction --no-create-home --system
usermod -aG systemd-journal reaction

Then, open an editor to configure reaction.service:

systemctl edit reaction.service

Paste the following and save:

[Service]
Group=reaction
# Files will be created with rwxrwxr-x so the exporter
# can communicate with reaction through the socket
UMask=0002

Restart reaction:

systemctl daemon-reload
systemctl restart reaction

[!tip] Check that the user is able to work with reaction

sudo su reaction
reaction show
journalctl -feu reaction

Running with systemd

✅ It is recommended to install the exporter in a virtualenv, e.g. in /usr/share/reaction-metrics-exporter/venv.

Save the service file in /etc/systemd/system.

💡 Add your config file and adjust the venv path in the ExecStart= directive.
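
As an indication only, such a unit could look like the following minimal sketch (the description, user and paths are assumptions; prefer the service file shipped with the project):

[Unit]
Description=reaction metrics exporter
After=network.target reaction.service

[Service]
User=reaction
Group=reaction
# adjust the venv path and the configuration file location (hypothetical paths below)
ExecStart=/usr/share/reaction-metrics-exporter/venv/bin/python -m reaction_metrics_exporter -c /etc/reaction-metrics-exporter/config.yml start
Restart=on-failure

[Install]
WantedBy=multi-user.target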

Enable and start the exporter:

systemctl daemon-reload
systemctl enable --now reaction-metrics.service

Follow the logs with:

journalctl -feu reaction-metrics.service

Running with Docker

⚠️ Running with Docker adds complexity for the exporter; prefer systemd.

Start inside the docker directory.

[!tip] Homemade is better. It is recommended to build your own image so that the version of reaction inside the container matches yours. In compose.yml, adjust the REACTION_VERSION variable, and then:

docker compose build

Create a .env file:

UID=
GID=
JOURNAL_GID=

Values can be found with the command `id reaction`.
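
For example (the numeric IDs below are purely illustrative):

id reaction
# example output: uid=998(reaction) gid=998(reaction) groups=998(reaction),101(systemd-journal)

Here, the .env file would contain UID=998, GID=998 and JOURNAL_GID=101.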

You may need to adjust the default mounts in compose.yml. Expectations are the following (a sketch is given after the list):

  • reaction's configuration is mounted on /etc/reaction;
  • reaction's socket is mounted on /run/reaction/reaction.sock;
  • journald file is mounted on /var/log/journal.
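
As a sketch, assuming the exporter service is simply named exporter, the relevant part of compose.yml could look like:

services:
  exporter:
    # ...
    volumes:
      - /etc/reaction:/etc/reaction:ro
      - /run/reaction/reaction.sock:/run/reaction/reaction.sock
      - /var/log/journal:/var/log/journal:ro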

Use the sample configuration file, tweak it to your needs, and run the exporter:

docker compose up -d && docker compose logs -f

The exporter is mapped to the host's 8080 port by default.

ℹ️ The container will restart automatically until logs and socket are accessible.

Usage details

Configuration

You can provide either a YAML file or a JSON file. Although not recommended, you can run the exporter without a configuration file.

The default configuration is as follows:

# only stdout is supported atm
loglevel: INFO
listen:
  address: 127.0.0.1
  port: 8080
reaction:
  config: /etc/reaction
  logs:
    systemd: reaction.service
  # same default as reaction
  socket: /run/reaction/reaction.sock
  # ignore actions for 60 seconds (in logtime, not realtime) after reaction start
  hold: 60
metrics:
  all: {}
  for: {}
  # all metrics with labels are exported by default
  export:
    matches:
      extra: true
    actions:
      extra: true
    pending:
      extra: true
    commands:
    streams:

Commands

usage: python -m reaction_metrics_exporter [-h] [-c CONFIG] [-f {yaml,json}] {start,defaults,test-config,version}

positional arguments:
  {start,defaults,test-config,version}
                        mode of operation; see below

options:
  -h, --help            show this help message and exit
  -c, --config CONFIG   path to the configuration file (JSON or YAML)
  -f, --format {yaml,json}
                        format for dumping the default configuration (default: yaml)

command:
    start: continuously read logs, compute metrics and serve HTTP endpoint
    defaults: print the default configuration in json and exit
    test-config: validate and output configuration in json and exit
    version: print the exporter's version and exit
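
For example, to validate your configuration before starting the exporter:

python3 -m reaction_metrics_exporter -c config.yml test-config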

Pre-treating matches

You may want to transform matches prior to exporting instead of relabelling. You can do so with Jinja2 expressions.

For example, for an email pattern, you could keep only the domain part in metrics:

metrics:
  all:
    email: "{{ email.split('@') | last }}"
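
With such a template, a match for e.g. user@example.com would be exported with the transformed value (stream and filter names here are hypothetical):

reaction_match_count{stream="mail",filter="spam",email="example.com"} 1.0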

You can also differentiate by stream and filter:

metrics:
  for:
    ssh:
      failedlogin:
        ip: TEMPLATE_A
    traefik:
      aiBots:
        ip: TEMPLATE_B
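
For instance, a template keeping only the /24 network of a matched IP could look like this (purely illustrative):

metrics:
  for:
    ssh:
      failedlogin:
        ip: "{{ ip.rsplit('.', 1) | first }}.0/24"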

Enabling internals metrics

The Prometheus client library has some default metrics about Python, the garbage collector and so on, which I found useless. You can nevertheless enable them:

metrics:
  export:
    internals:

The patterns dilemma

reaction matches often contain valuable information, such as IP addresses.

How matches could become a problem

Quoting the Prometheus docs:

CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values.

In other words, each new IP address will create a new time series.

Prometheus used to claim it could handle millions of time series, and VictoriaMetrics claims it can handle 100 million active time series. Besides, each time series has very few data points (each one requiring a match). In the end, we use OpenMetrics in a kind of hackish way, so recommendations tailored for active and dense time series do not apply.

You should just test and check after a few months. In most cases, the retention period will kick in before you run into trouble.

You can still disable the export of some metrics or patterns.

Disable metrics

metrics:
  export:
    actions: false
    matches: false
    pending: false

Disable patterns

For example, to remove ip from the export for the failedlogin filter of the ssh stream:

metrics:
  for:
    ssh:
      failedlogin:
        ip: false

If you use the pattern ip in multiple streams, you can avoid repetition by removing it globally:

metrics:
  all:
    ip: false

Exploitation

Scraping

Sample Prometheus scrape config:

  # must start with `reaction`
  - job_name: "reaction"
    scrape_interval: 60s
    # if behind HTTP basic auth
    basic_auth:
      username: "${REACTION_METRICS_USER}"
      password: "${REACTION_METRICS_PASSWORD}"
    static_configs:
      - targets:
        - subdomain1.<domain>
        - subdomain2.<domain>
        - [...]
    # only keep subdomains
    # recommended if <domain> is always the same,
    # to make tables more readable.
    relabel_configs:
      - source_labels: [__address__]
        regex: "(.*)\\.<domain>"
        target_label: instance
        replacement: "$1"

Visualisation

Two dashboards are provided:

The latter only has a few more panels.

The dashboards have been built for multi-instance setups and ip matching, but can easily be adapted to other setups.

Variables

All the queries are templated with the following variables:

They can be useful to narrow down an investigation.

Status

Health and versions of the exporter and of reaction.

Stats

Cumulative sums of events for the selected range and for the last year. You will find this pattern everywhere in the panels.

Actions

Insights for actions triggered within the selected range. "Pending actions" are the ones printed by reaction show.

Evolution of events

Basic plots, by instance. Note that they will always start from 0, so that the evolution within the range is clear.

Filters

Matches are aggregated by filter and per instance, hopefully giving insights about which machine is targeted by what.

Reaction's events

Boring plots. If everything goes well, they should be almost empty. They are probably more useful for alerts (e.g. when a stream exited, when a stop command failed, and so on).

IP blame

Then you have tables, each time for the range and for the year, highlighting e.g. IPs that have been matched a lot.

Another example is an IP matched in different filters, which may call for further investigation. In that case, you can template every panel with that specific IP.

That allows for basic information gathering (which instance(s), which filter(s), time of presence and so on).

Lifetime

Its real usefulness is yet to be shown; I feel it lacks something. Lifetime is the time between the first and the last time a unique combination of labels has been seen. In other words, it tells us whether an IP did a one-shot match or a long run.

On the left is the evolution of the average lifetime per instance for the range. It could help determine whether certain instances are targeted persistently.

On the right is the distribution of lifetime for the range, per instance.

Alerts

Basic queries that can be used in Grafana, Prometheus or vmalert to warn Alertmanager. Given in Prometheus format.

# Reaction is down
- alert: ReactionDown
  expr: reaction_status_info{status="0"} == 1
  for: 5m
# More than one match per second on an instance (averaged over 5 minutes)
- alert: ReactionHighMatches
  expr: sum(sum_over_time(reaction_match_count[5m])) by (instance) / 300 >= 1
  for: 5m
# A stream exited or cannot be read
- alert: ReactionStreamError
  expr: sum(reaction_stream_event_count{type!="start"}) by (instance, type) > 0
# A start/stop command failed
- alert: ReactionCommandError
  expr: sum(reaction_commands_count{status!="True"}) by (instance) > 0
# Detects an IP matching on one or several machines without
# being banned (adjust X and T to your needs)
- alert: ReactionIPUnbanned
  expr: sum(sum_over_time(reaction_match_count[T])) by (ip) > X

Details about design and usage of metrics

Using OpenMetrics is wrong but useful

The exporter does not quite follow Prometheus' philosophy. For example, it uses gauges for counting matches, actions and so on.

Let's take the example of matches. Normally, a gauge "represents a single numerical value that can arbitrarily go up and down". In our case, this single numerical value gives the number of matches since the last collection. Without extra labels, such as IPs, it would fit the model.

But when exporting pattern's values, such as IPs, the result is a multitude of short-lived time series. As an example:

reaction_match_count{filter="aiBots",ip="121.229.156.18",stream="traefik"} 1.0
reaction_match_count{filter="aiBots",ip="23.21.179.120",stream="traefik"} 1.0
reaction_match_count{filter="failedlogin",ip="77.201.74.44",stream="ssh"} 3.0

These metrics will be ingested as three distinct time series and discarded on the exporter side. They will get a new data point only on the next hypothetical identical match.

Semantically, we are exporting events, which is discouraged. Events are usually processed with a log aggregator, e.g. Grafana Loki, or could be aggregated by a Prometheus middleware, e.g. prom-aggregation-gateway. We made the choice to stick to OpenMetrics because the TSDB/Grafana setup is simple and widespread.

In the early development, aggregation was done locally, i.e. the total count for each unique combination of stream, filter and matches was exported continuously, with restarts handled by a persistence file. But it made no sense: the HTTP body would grow indefinitely and each time series would get a new data point at each export even though no event had happened.

Above all other considerations, the exported metrics had to be exploitable through a comprehensive Grafana dashboard, and they are, though we do not yet have high-volume tests.

Emulate monotonic counters from events

A natural expectation would be to visualize the evolution of matches, actions and so on. With our event-based export, the only way would be to do a cumulative sum of timeseries for the current time window, starting from 0.

In theory, this is not possible with Prometheus. VictoriaMetrics, besides much higher performance, has a lot of useful built-in functions, including a running sum.

We suppose the query is executed in Grafana, which sets start, stop, range (stop - start) and interval (range / max data points, also known as step).

Using VictoriaMetrics

If you're unfamiliar with MetricsQL, you can read the beginning of the docs. It has a lot in common with PromQL, with additional functions and syntactic sugar.

A naive approach would be:

running_sum(reaction_match_count)

That request will calculate, for each time series (i.e. each unique combination of stream, filter and matches), the cumulative sum for the range. As the number of time series is huge when exporting extra labels, it is useful to aggregate results, e.g. per instance:

sum(running_sum(reaction_match_count)) by (instance)

However, this request will give wrong results for large time windows. The above query is in fact converted (more or less) by VictoriaMetrics to:

sum(running_sum(default_rollup(reaction_match_count[step]))) by (instance)

default_rollup simply takes the last value of the range vector, i.e. the result is equivalent to reaction_match_count quantized at every step (e.g. every 5 minutes). In other words, every event occurring during that time is discarded.

To fix the query, we need to compute the cumulative sum inside the step. The final query will look like this:

sum(running_sum(sum_over_time(reaction_match_count))) by (instance)

sum_over_time will automatically use the step as a lookbehind window.

⚠️ We cannot nest running_sum, because it needs rollup results; the inner running_sum would therefore apply on default_rollup, calculate rollups, and the outer running_sum would give crazy results, calculating the cumulative sum of the initial wrong cumulative sum.

Wrapping up, this is an efficient way to emulate Prometheus counters, in a given time window.

Using Prometheus

Doing the same in Prometheus is a whole different ballgame. A question that may arise is: why don't we simply sum_over_time over the whole window ($__range in Grafana)?

sum_over_time(reaction_match_count[$__range])

Let's say our time window is 2 days (2d). The problem is that sum_over_time is a rollup (moving) function: the first value shown on the graph will be the cumulative sum of [-4d, -2d], and values for the last 2 days will be somewhat smoothed by old values.

Ideally, we would like to ask Prometheus to have a fixed starting point (-2d), and a moving end point. This is really against the way Prometheus works, because the query range is made to be static, not dynamic.

After a lot of fruitless attempts, I found a way to do it. It takes advantage of:

  • Grafana exposing the $__from variable, which is the timestamp of the start of the range;
  • The time() function which returns the timestamp currently being evaluated (it ranges from start to end with step increments).

The trick is as follows: first, build a time series made of timestamps:

vector(time()) > $__from/1000

Second, build "rollup-resistant" range-vectors:

(reaction_match_count and on() (vector(time()) > $__from/1000))[$__range:]

The and operator will drop every data point in the left part which is not in the right part. The on() clause has to do with vector matching; here it means that no label is required to match, i.e. every point with the same timestamp will match.

Finally we can compute the cumulative sum over $__range:

sum(sum_over_time((reaction_match_count and on() (vector(time()) > $__from/1000))[$__range:]))

The first point will be 0, exactly what we want. This is because, to compute the first point, Prometheus will start from -2d. Therefore, the outer sum will, in theory, be calculated with data from ]-4d, -2d]. But our fixed time series only has data within ]-2d, now]. In other words, Prometheus finds no values, giving 0. Then the window slides by step, and the cumulative sum over [-2d, -2d+step] gives the second value, and so on.

The drawback is that these requests perform poorly on large ranges (like several weeks).

Development setup

In addition to the prerequisites, you need Poetry.

# inside the cloned repository
poetry install
# run app
poetry run python -m reaction_metrics_exporter [...]
# run tests
poetry run pytest

Download files

Download the file for your platform.

Source Distribution

reaction_metrics_exporter-1.0.1.tar.gz (43.8 kB)

Uploaded Source

Built Distribution


reaction_metrics_exporter-1.0.1-py3-none-any.whl (41.2 kB)

Uploaded Python 3

File details

Details for the file reaction_metrics_exporter-1.0.1.tar.gz.

File metadata

  • Download URL: reaction_metrics_exporter-1.0.1.tar.gz
  • Upload date:
  • Size: 43.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.2.1 CPython/3.14.2 Linux/6.18.9-arch1-2

File hashes

Hashes for reaction_metrics_exporter-1.0.1.tar.gz
Algorithm Hash digest
SHA256 53585be7b97d9e4327dea20d9dd70be0aac91626b3da5fd96d3e05748ad145e9
MD5 3bbcfaa231b3f8299efa4bd2286f934e
BLAKE2b-256 9f4048c1ff0dd217733927ca7ddf88a77a05b885d9b4dc3e2fa49f7480419cab


File details

Details for the file reaction_metrics_exporter-1.0.1-py3-none-any.whl.

File metadata

File hashes

Hashes for reaction_metrics_exporter-1.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 e8f609d3cdf8ec61e02eed3f1635e12487f3fc24dc41739019f31e669125dd2b
MD5 2a829161147c07f25ee48dc6b7e00aac
BLAKE2b-256 fc93b12e70750428360491b6b9ff4eeed817827500cd1192593cea04767d5885
