
Exports Resoto metrics in Prometheus format.


resotometrics

Resoto Prometheus exporter


Overview

resotometrics takes resotocore graph data and runs aggregation functions on it. The aggregated metrics are then exposed in a Prometheus-compatible format. The default TCP port is 9955, but it can be changed via the resotometrics.web_port config attribute.

More information can be found below and in the docs.

Usage

resotometrics accepts the following command-line arguments:

  --subscriber-id SUBSCRIBER_ID
                        Unique subscriber ID (default: resoto.metrics)
  --override CONFIG_OVERRIDE [CONFIG_OVERRIDE ...]
                        Override config attribute(s)
  --resotocore-uri RESOTOCORE_URI
                        resotocore URI (default: https://localhost:8900)
  --verbose, -v         Verbose logging
  --quiet               Only log errors
  --psk PSK             Pre-shared key
  --ca-cert CA_CERT     Path to custom CA certificate file
  --cert CERT           Path to custom certificate file
  --cert-key CERT_KEY   Path to custom certificate key file
  --cert-key-pass CERT_KEY_PASS
                        Passphrase for certificate key file
  --no-verify-certs     Turn off certificate verification

ENV prefix: RESOTOMETRICS_. Every CLI argument can also be specified using environment variables.

For instance, the boolean --verbose would become RESOTOMETRICS_VERBOSE=true.
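The flag-to-variable mapping can be sketched in Python (env_name is a hypothetical helper for illustration, not part of resotometrics):

```python
def env_name(cli_arg: str, prefix: str = "RESOTOMETRICS_") -> str:
    """Map a CLI flag to its environment variable name."""
    # strip leading dashes, replace the remaining dashes, uppercase, and prefix
    return prefix + cli_arg.lstrip("-").replace("-", "_").upper()

print(env_name("--verbose"))         # RESOTOMETRICS_VERBOSE
print(env_name("--resotocore-uri"))  # RESOTOMETRICS_RESOTOCORE_URI
```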

Once started, resotometrics registers for generate_metrics core events. Whenever such an event is received, it generates Resoto metrics and serves them at the /metrics endpoint.

A Prometheus scrape config could look like this:

scrape_configs:
  - job_name: "resotometrics"
    static_configs:
      - targets: ["localhost:9955"]

Details

resotocore supports aggregation queries to produce metrics. Our common library resotolib defines a number of base resources that are shared across many cloud providers, such as compute instances, subnets, routers, and load balancers. All of them ship with a standard set of metrics specific to each resource.

For example, instances have CPU cores and memory, so they define default metrics for those attributes. Currently, metrics are hard-coded and read from the base resources, but future versions of Resoto will allow you to define your own metrics in resotocore and have resotometrics export them.

For now, you can use the aggregate API at {resotocore}:8900/graph/{graph}/reported/search/aggregate or the aggregate CLI command to generate your own metrics. For API details, check out the resotocore API documentation as well as the Swagger UI at {resotocore}:8900/api-doc/.
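A minimal sketch of calling the aggregate API from Python using only the standard library. The graph name, the plain-text body, and sending the query without the leading search keyword are assumptions; check the Swagger UI for the exact contract:

```python
import urllib.request

RESOTOCORE = "https://localhost:8900"  # the default resotocore URI
GRAPH = "resoto"                       # assumption: your graph name may differ

# assumption: the search/aggregate expression goes in the request body
query = "is(instance) | aggregate instance_type as type: sum(1) as instances_total"
url = f"{RESOTOCORE}/graph/{GRAPH}/reported/search/aggregate"

request = urllib.request.Request(
    url,
    data=query.encode("utf-8"),
    method="POST",
    headers={"Content-Type": "text/plain"},
)
# urllib.request.urlopen(request)  # uncomment to run against a live resotocore
print(url)
```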

In the following examples we will use the Resoto shell (resh) and the aggregate command.

Example

Enter the following command into resh:

search is(instance) | aggregate /ancestors.cloud.reported.name as cloud, /ancestors.account.reported.name as account, /ancestors.region.reported.name as region, instance_type as type : sum(1) as instances_total, sum(instance_cores) as cores_total, sum(instance_memory*1024*1024*1024) as memory_bytes

Here is the same query with line breaks for readability (cannot be copy-pasted):

search is(instance) |
  aggregate
    /ancestors.cloud.reported.name as cloud,
    /ancestors.account.reported.name as account,
    /ancestors.region.reported.name as region,
    instance_type as type :
  sum(1) as instances_total,
  sum(instance_cores) as cores_total,
  sum(instance_memory*1024*1024*1024) as memory_bytes

If your graph contains any compute instances, the resulting output will look something like this:

---
group:
  cloud: aws
  account: someengineering-platform
  region: us-west-2
  type: m5.2xlarge
instances_total: 6
cores_total: 24
memory_bytes: 96636764160
---
group:
  cloud: aws
  account: someengineering-platform
  region: us-west-2
  type: m5.xlarge
instances_total: 8
cores_total: 64
memory_bytes: 257698037760
---
group:
  cloud: gcp
  account: someengineering-dev
  region: us-west1
  type: n1-standard-4
instances_total: 12
cores_total: 48
memory_bytes: 193273528320

Let us dissect what we've written here:

  • search is(instance) fetches all resources that inherit from the base kind instance. These would be compute instances like aws_ec2_instance or gcp_instance.
  • aggregate /ancestors.cloud.reported.name as cloud, /ancestors.account.reported.name as account, /ancestors.region.reported.name as region, instance_type as type aggregates the instance metrics by cloud, account, and region name as well as instance_type (think GROUP BY in SQL).
  • sum(1) as instances_total, sum(instance_cores) as cores_total, sum(instance_memory*1024*1024*1024) as memory_bytes sums up the total number of instances, the number of instance cores, and the memory. The latter is stored in GB, and here we convert it to bytes, as is customary in Prometheus exporters.
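The GB-to-bytes conversion can be checked against the first sample group above, which reports 96636764160 memory_bytes for a sum(instance_memory) of 90:

```python
GIB = 1024 ** 3  # the 1024*1024*1024 factor from the query

total_memory_gb = 90  # sum(instance_memory) for the first sample group
print(total_memory_gb * GIB)  # 96636764160, the memory_bytes value above
```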

Taking it one step further

search is(instance) and instance_status = running | aggregate /ancestors.cloud.reported.name as cloud, /ancestors.account.reported.name as account, /ancestors.region.reported.name as region, instance_type as type : sum(/ancestors.instance_type.reported.ondemand_cost) as instances_hourly_cost_estimate

Again, the same query with line breaks for readability (cannot be copy-pasted):

search is(instance) and instance_status = running |
  aggregate
    /ancestors.cloud.reported.name as cloud,
    /ancestors.account.reported.name as account,
    /ancestors.region.reported.name as region,
    instance_type as type :
  sum(/ancestors.instance_type.reported.ondemand_cost) as instances_hourly_cost_estimate

This outputs something like:

---
group:
  cloud: gcp
  account: maestro-229419
  region: us-central1
  type: n1-standard-4
instances_hourly_cost_estimate: 0.949995

What did we do here? We told Resoto to find all resources of type compute instance (search is(instance)) with a status of running, and then merge the result with their ancestors (parents and parents of parents) of type cloud, account, region, and now also instance_type.

Let us look at two things here. First, in the previous example we already aggregated by instance_type. However, that was the string attribute called instance_type, which is part of every instance resource and contains strings like m5.xlarge (AWS) or n1-standard-4 (GCP).

Example

> search is(instance) | tail -1 | format {kind} {name} {instance_type}
aws_ec2_instance i-039e06bb2539e5484 t2.micro

Second, this time we asked Resoto to go up the graph and find the directly connected resource of kind instance_type.

An instance_type resource looks something like this:

> search is(instance_type) | tail -1 | dump
reported:
  kind: aws_ec2_instance_type
  id: t2.micro
  tags: {}
  name: t2.micro
  instance_type: t2.micro
  instance_cores: 1
  instance_memory: 1
  ondemand_cost: 0.0116
  ctime: '2021-09-28T13:10:08Z'

As you can see, the instance_type resource has a float attribute called ondemand_cost, which is the hourly cost a cloud provider charges for this particular type of compute instance. In our aggregation query we sum up the hourly cost of all currently running compute instances and export it as a metric named instances_hourly_cost_estimate. If we export this metric to a time-series database like Prometheus, we can plot our instance cost over time.
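On the /metrics endpoint, such a metric would be exposed in the Prometheus text format, roughly like this (the metric name prefix, HELP text, and exact label set are assumptions for illustration):

```
# HELP resoto_instances_hourly_cost_estimate Hourly cost estimate of running instances
# TYPE resoto_instances_hourly_cost_estimate gauge
resoto_instances_hourly_cost_estimate{cloud="gcp",account="maestro-229419",region="us-central1",type="n1-standard-4"} 0.949995
```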

This is the core functionality resotometrics provides.

Contact

If you have any questions, feel free to join our Discord or open a GitHub issue.

License

See LICENSE for details.

