
Scrape only relevant metrics in Prometheus, according to your Grafana dashboards

Project description

frigga


Do you have a Grafana instance? frigga makes sure Prometheus doesn't scrape metrics that you don't present in your Grafana dashboards.

Scrape only the metrics that your Grafana dashboards actually use; see the before and after snapshot below. frigga generates keep filters under metric_relabel_configs and adds them to your prometheus.yml file.
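
For reference, a keep filter under metric_relabel_configs looks roughly like the sketch below. The job name and metric names are made-up placeholders; the actual regex frigga generates is built from the metric names found in your dashboards.

scrape_configs:
  - job_name: node-exporter            # placeholder job name
    static_configs:
      - targets: ["localhost:9100"]
    metric_relabel_configs:
      # keep only metrics that appear in Grafana dashboards, drop everything else
      - source_labels: [__name__]
        regex: node_cpu_seconds_total|node_memory_MemAvailable_bytes|up
        action: keep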

frigga is especially useful for Grafana Cloud customers, since the pricing is based on the number of ingested DataSeries.

Illustration

(before/after snapshot image)

Requirements

Python 3.6.7+

Installation

$ pip install frigga

Docker

docker run --rm -it unfor19/frigga

For ease of use, add an alias in your ~/.bashrc file

alias frigga="docker run --rm -it unfor19/frigga"

Available Commands

Auto-generated by unfor19/replacer-action, see readme.yml

Usage: frigga [OPTIONS] COMMAND [ARGS]...

  No confirmation prompts

Options:
  -ci, --ci  Use this flag to avoid confirmation prompts
  --help     Show this message and exit.

Commands:
  client-start       Alias: cs
  grafana-list       Alias: gl
  prometheus-apply   Alias: pa
  prometheus-get     Alias: pg
  prometheus-reload  Alias: pr
  version            Print the installed version
  webserver-start    Alias: ws

Getting Started

  1. Grafana - Import the dashboard frigga - Jobs Usage (ID: 12537) to Grafana, and check out the number of DataSeries

  2. Grafana - Generate an API Key for the Viewer role (one way to do it with curl is sketched after this list)

  3. frigga - Get the list of metrics that are used in your Grafana dashboards

    $ frigga gl
    
    # gl is grafana-list, or good luck :)
    
    Grafana url [http://localhost:3000]: http://my-grafana.grafana.net
    Grafana api key: (hidden)
    >> [LOG] Getting the list of words to ignore when scraping from Grafana
    ...
    >> [LOG] Found a total of 269 unique metrics to keep
    

    .metrics.json - automatically generated in pwd

    {
        "all_metrics": [
            "cadvisor_version_info",
            "container_cpu_usage_seconds_total",
            "container_last_seen",
            "container_memory_max_usage_bytes",
            ...
        ]
    }
    
  4. Add the following snippet to the bottom of your prometheus.yml file. Check the example in docker-compose/prometheus-original.yml

     ---
     name: frigga
     exclude_jobs: []
    
  5. frigga - Use the .metrics.json file to apply the rules to your existing prometheus.yml

    $ frigga pa
    
    # pa is prometheus-apply, or pam-tada-dam
    
    Prom yaml path [docker-compose/prometheus.yml]: /etc/prometheus/prometheus.yml
    Metrics json path [./.metrics.json]: /home/willywonka/.metrics.json
    >> [LOG] Reading documents from docker-compose/prometheus.yml
    ...
    >> [LOG] Done! Now reload docker-compose/prometheus.yml with 'frigga pr -u http://localhost:9090'
    
  6. As mentioned in the previous step, make Prometheus reload the prometheus.yml; here are two ways of doing it

    • "Kill" Prometheus - send a SIGHUP signal to the process
      $ docker exec $PROM_CONTAINER_NAME kill -HUP 1
      
    • Send a POST request to /-/reload - this requires Prometheus to be started with --web.enable-lifecycle; for an example, see docker-compose.yml
      $ frigga prometheus-reload --prom-url http://localhost:9090
      
      Or with curl
      $ curl -X POST http://localhost:9090/-/reload
      
  7. Make sure prometheus.yml was loaded successfully by Prometheus

    $ docker logs --tail 10 $PROM_CONTAINER_NAME
    
     level=info ts=2020-06-27T15:45:34.514Z caller=main.go:799 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
     level=info ts=2020-06-27T15:45:34.686Z caller=main.go:827 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml
    
  8. Grafana - Now check the frigga - Jobs Usage dashboard; the numbers should be significantly lower (up to 60% or even more)
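
If you'd rather create the Viewer API key from step 2 without the Grafana UI, Grafana's HTTP API can do it. A minimal sketch, assuming Grafana is reachable on localhost:3000 with the default admin:admin credentials and the legacy API-keys endpoint (newer Grafana releases use service accounts instead); the key name "frigga-viewer" is an arbitrary placeholder:

$ curl -s -X POST http://admin:admin@localhost:3000/api/auth/keys \
    -H "Content-Type: application/json" \
    -d '{"name": "frigga-viewer", "role": "Viewer"}'

# The "key" field of the JSON response is the API key to paste into 'frigga gl'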

Test it locally

Requirements

  1. Docker
  2. docker-compose
  3. jq

Getting Started

  1. git clone this repository

  2. Run Docker daemon (Docker for Desktop)

  3. Make sure ports 3000, 8080 and 9100 are not in use (nmap should report them as closed)

    docker run --rm -it --network=host unfor19/net-tools nmap -p 8080,3000,9100 -n localhost
    
  4. Deploy the services locally: Prometheus, Grafana, node-exporter and cadvisor

    $ bash docker-compose/deploy_stack.sh
    
    Creating network "frigga_net1" with the default driver
    ...
    >> Grafana - Generating API Key - for Viewer
    eyJrIjoiT29hNGxGZjAwT2hZcU1BSmpPRXhndXVwUUE4ZVNFcGQiLCJuIjoibG9jYWwiLCJpZCI6MX0=
    # Save this key ^^^
    
  5. Open your browser, navigate to http://localhost:3000

    • Username and password are admin:admin
    • You'll be prompted to update your password, so just keep using admin or hit Skip
  6. Go to the Jobs Usage dashboard; you'll see that Prometheus is processing ~2800 DataSeries

  7. Get all the metrics that are used in your Grafana dashboards

    $ export GRAFANA_API_KEY=the-key-that-was-generated-in-the-deploy-locally-step
    $ frigga gl -gurl http://localhost:3000 -gkey $GRAFANA_API_KEY
    
    >> [LOG] Getting the list of words to ignore when scraping from Grafana
    ...
    >> [LOG] Found a total of 269 unique metrics to keep
    # Generated .metrics.json in pwd
    
  8. Check the number of data series BEFORE filtering with frigga (a curl cross-check is sketched after this list)

    $ frigga pg -u http://localhost:9090
    
    # prometheus-get
    
    >> [LOG] Total number of data-series: 1863
    
  9. Apply the rules to prometheus.yml, keep the defaults

    $ frigga pa
    
    # prometheus-apply
    
    Prom yaml path [docker-compose/prometheus.yml]:
    Metrics json path [./.metrics.json]:
    ...
    >> [LOG] Done! Now reload docker-compose/prometheus.yml with 'docker exec $PROM_CONTAINER_NAME kill -HUP 1'
    
  10. Reload prometheus.yml to Prometheus

    $  frigga pr -u http://localhost:9090
    
    # prometheus-reload
    
    >> [LOG] Successfully reloaded Prometheus - http://localhost:9090/-/reload
    
  11. Check the number of data series AFTER filtering with frigga

    $ frigga pg -u http://localhost:9090
    
    # prometheus-get
    
    >> [LOG] Total number of data-series: 898
    # Decreased from 1863 to 898, about 52% fewer data series!
    
  12. Go to the Jobs Usage dashboard; you'll see that Prometheus is now processing only ~898 DataSeries (previously ~1863)

    • In case you don't see the change, don't forget to hit the refresh button
  13. Cleanup

    $ docker-compose -p frigga --file docker-compose/docker-compose.yml down
    
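To cross-check the numbers reported by frigga pg in steps 8 and 11, you can also query Prometheus directly. A minimal sketch with curl and jq; the PromQL expression counts every series currently present:

$ curl -sG http://localhost:9090/api/v1/query \
    --data-urlencode 'query=count({__name__=~".+"})' \
    | jq -r '.data.result[0].value[1]'

# Prints a single number, e.g. ~1863 before filtering and ~898 after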

Pros and Cons of this tool

Pros

  1. Grafana Cloud - the main reason for writing this tool was to lower costs as a Grafana Cloud customer; this goal is achieved by sending only the relevant DataSeries to Grafana Cloud
  2. Saves disk-space on the machine running Prometheus
  3. Improves PromQL performance by querying fewer metrics; this is significant only when processing high volumes

Cons

  1. Applying the rules makes prometheus.yml less readable. Since it's not a file you edit on a daily basis, that's an acceptable trade-off
  2. Prometheus memory usage increases slightly (around 30MB); not significant, but worth mentioning
  3. If you start using more metrics, for example after adding a new dashboard, you'll need to repeat the process: frigga gl and frigga pa


Contributing

Report issues/questions/feature requests on the Issues section.

Pull requests are welcome! Ideally, create a feature branch and issue for every single change you make. These are the steps:

  1. Fork this repo
  2. Create your feature branch from master (git checkout -b my-new-feature)
  3. Install from source
     $ git clone https://github.com/${GITHUB_OWNER}/frigga.git && cd frigga
     ...
     $ pip install --upgrade pip
     ...
     $ python -m venv ./ENV
     $ . ./ENV/bin/activate
     ...
     (ENV) $ pip install --editable .
     ...
     # Done! Now when you run 'frigga' it will get automatically updated when you modify the code
    
  4. Add the code of your new feature
  5. Test - make sure frigga grafana-list and frigga prometheus-apply commands work
  6. Commit your remarkable changes (git commit -am 'Added new feature')
  7. Push to the branch (git push --set-upstream origin my-new-feature)
  8. Create a new Pull Request and tell us about your changes

Authors

Created and maintained by Meir Gabay

License

This project is licensed under the MIT License - see the LICENSE file for details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

frigga-1.1.1.tar.gz (20.8 kB)

Uploaded Source

Built Distribution

frigga-1.1.1-py3-none-any.whl (17.6 kB)

Uploaded Python 3

File details

Details for the file frigga-1.1.1.tar.gz.

File metadata

  • Download URL: frigga-1.1.1.tar.gz
  • Upload date:
  • Size: 20.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.7.0 requests/2.25.1 setuptools/56.0.0 requests-toolbelt/0.9.1 tqdm/4.61.0 CPython/3.9.5

File hashes

Hashes for frigga-1.1.1.tar.gz
  • SHA256: 1cd1f16dcc0080ca2b5759f9341d9f7fbccde20463c90b61bedb53d93b4664bd
  • MD5: dbd97907d853b38536311ab97f86c68d
  • BLAKE2b-256: 25970dd30f581a9943d853776372087875edaa028d167a41375f8e74de41edbf

See more details on using hashes here.

File details

Details for the file frigga-1.1.1-py3-none-any.whl.

File metadata

  • Download URL: frigga-1.1.1-py3-none-any.whl
  • Upload date:
  • Size: 17.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.7.0 requests/2.25.1 setuptools/56.0.0 requests-toolbelt/0.9.1 tqdm/4.61.0 CPython/3.9.5

File hashes

Hashes for frigga-1.1.1-py3-none-any.whl
  • SHA256: 0786e5e918b8a0549aaf3b0c090a2baad30e168d8d09f881a8f1fb17f40e1af5
  • MD5: 83a2b160403084e16afb749289bc5abe
  • BLAKE2b-256: 3737a171ce40a2fb5fa48bfbe9beb945acb830a4dfd727913bee1b99e14737c8

See more details on using hashes here.
