InfluxGraph

An InfluxDB storage plugin for Graphite-API. Provides the Graphite API on top of an InfluxDB data store, from any kind of schema(s) used in the DB.

This project started as a re-write of graphite-influxdb; it is now a separate project.

Installation

Docker Compose

The compose directory contains a docker-compose configuration that spawns all the services necessary for a complete monitoring solution:

  • InfluxDB

  • Telegraf

  • Graphite API with InfluxGraph

  • Grafana dashboard

To use it, run the following from within the compose directory:

docker-compose up

Grafana will be running on http://localhost:3000, with a Graphite datasource for the InfluxDB data available at http://localhost. Add a new Graphite datasource to Grafana - the default Grafana user/pass is admin/admin - and use it to create dashboards.

See compose configuration readme for more details.
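
For orientation, a rough sketch of such a stack as a docker-compose file is shown below. This is purely illustrative, with hypothetical service wiring - the compose directory in the repository contains the actual, complete configuration:

# Illustrative sketch only - not the repository's compose file
version: "2"
services:
  influxdb:
    image: influxdb
    ports:
      - "8086:8086"
  telegraf:
    # assumes a telegraf.conf with its InfluxDB output pointed at influxdb:8086
    image: telegraf
    depends_on:
      - influxdb
  influxgraph:
    # the real compose configuration supplies a graphite-api.yaml
    # pointing the finder at the influxdb service
    image: influxgraph/influxgraph
    ports:
      - "80:80"
    depends_on:
      - influxdb
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    depends_on:
      - influxgraph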

Docker Image

docker pull influxgraph/influxgraph
docker create --name=influxgraph -p 8000:80 influxgraph/influxgraph
docker start influxgraph

There will now be a Graphite-API running on localhost:8000 from the container, with a default InfluxDB configuration and memcache enabled. The finder expects InfluxDB to be running on localhost:8086 by default.

The image will use a supplied graphite-api.yaml at build time, when docker build is called on an InfluxGraph image.

The Dockerfile used to build the container can be found under the docker directory of the repository.

Manual Installation

pip install influxgraph

Use of a local memcached service is highly recommended - see the Configuration section for how to enable it.

Minimal configuration for Graphite-API is shown below. See the Full Configuration Example for all possible configuration options.

/etc/graphite-api.yaml

finders:
  - influxgraph.InfluxDBFinder

See the Wiki and Configuration section for details.

Main features

  • InfluxDB Graphite template support - expose InfluxDB tagged data as Graphite metrics with configurable metric paths

  • Dynamically calculated group by intervals based on query date/time range - fast queries regardless of the date/time they span

  • Configurable per-query aggregation functions by regular expression pattern

  • Configurable per-query retention policies by query date/time range. Automatically use pre-calculated downsampled data in a retention policy for historical data

  • Fast in-memory index for Graphite metric path queries as a Python native code extension

  • Multi-fetch enabled - fetch data for multiple metrics with one query to InfluxDB

  • Memcached integration

  • Python 3 and PyPy compatibility

  • Good performance even with an extremely large number of metrics in the DB - generated queries are guaranteed to have O(1) performance characteristics

Google User’s Group

There is a Google user’s group for discussion which is open to the public.

Goals

  • InfluxDB as a drop-in replacement data store to the Graphite query API

  • Backwards compatibility with existing Graphite API clients like Grafana, and with Graphite installations migrated to InfluxDB data stores via the Graphite input service, with or without Graphite template configuration

  • Expose native InfluxDB line protocol ingested data via the Graphite API

  • Clean, readable code with complete documentation for public endpoints

  • Complete code coverage with both unit and integration testing. Code has >90% test coverage and is integration tested against a real InfluxDB service

  • Good performance at large scale. InfluxGraph is used in production on InfluxDB nodes with cardinality exceeding 5M and a write rate of over 5M metrics/minute, roughly 83K/second.

The first three goals provide both

  • A backwards compatible migration path for existing Graphite installations to use InfluxDB as a drop-in storage back-end replacement with no API client side changes required, meaning existing Grafana or other dashboards continue to work as-is.

  • A way for native InfluxDB collection agents to expose their data via the Graphite API, which allows use of any tool that speaks the Graphite API, the plethora of Graphite API functions, custom functions, functions across series, multi-series plotting and functions via Graphite glob expressions et al.

At the time of writing, no alternatives exist with similar functionality, performance and compatibility.

Non-Goals

  • Graphite-Web support from the official Graphite project

Dependencies

With the exception of InfluxDB itself, all other dependencies are installed automatically by pip.

  • influxdb Python module

  • Graphite-API

  • python-memcached Python module

  • InfluxDB service, version 1.0 or higher

InfluxDB Graphite metric templates

InfluxGraph can make use of any InfluxDB data and expose it as Graphite API metrics, as well as use Graphite metrics written to InfluxDB as-is, sans tags.

Even data written to InfluxDB by native InfluxDB API clients can be exposed as Graphite metrics, allowing clients transparent use of the Graphite API with InfluxDB acting as its storage back-end.

To make use of tagged InfluxDB data, the finder needs to know how to generate a Graphite metric path from the tags used by InfluxDB.

The easiest way to do this is to use the Graphite service in InfluxDB with configured templates which can be used as-is in InfluxGraph configuration - see Full Configuration Example section for details. This presumes existing collection agents are using the Graphite line protocol to write to InfluxDB via its Graphite input service.

If, on the other hand, native InfluxDB metrics collection agents like Telegraf are used, that data too can be exposed as Graphite metrics by writing appropriate template(s) in the Graphite-API configuration alone.

See Telegraf default configuration template for an example of this.

By default, the storage plugin assumes data is not tagged, per the default InfluxDB Graphite service template configuration shown below:

[[graphite]]
  <..>
  # templates = []
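
When data is tagged, a hedged sketch of the corresponding InfluxGraph configuration - assuming, per the Full Configuration Example, that templates are listed under the influxdb section and use the same syntax as the InfluxDB Graphite input service; the template string itself is purely illustrative - would be:

influxdb:
   db: graphite
   # same template syntax as the InfluxDB Graphite input service -
   # the template below is a hypothetical example only
   templates:
     - host.measurement.field*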

Retention policy configuration

Pending implementation of a feature request that will allow InfluxDB to select and/or merge results from down-sampled data as appropriate, retention policy configuration is needed to support the use case of down-sampled data being present in non-default retention policies:

retention_policies:
    <time interval of query>: <retention policy name>

For example, to have queries with a group by interval of one minute or less, above one minute and below thirty minutes, and thirty minutes or above use the retention policies named default, 10min and 30min respectively:

retention_policies:
    60: default
    600: 10min
    1800: 30min

While not required, retention policy intervals are best kept close to or identical to the deltas intervals for best InfluxDB query performance.
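
As an illustration of keeping the two aligned - assuming, per the Full Configuration Example, that deltas maps a query time range in seconds to the group by interval in seconds to use - a configuration along these lines pairs each interval with a matching retention policy:

deltas:
    # query time range in seconds : group by interval in seconds
    3600: 60
    86400: 600
    604800: 1800
retention_policies:
    # group by interval in seconds : retention policy name
    60: default
    600: 10min
    1800: 30min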

See Full Configuration Example file for additional details.

Configuration

Minimal Configuration

In graphite-api config file at /etc/graphite-api.yaml:

finders:
  - influxgraph.InfluxDBFinder

The following default Graphite-API configuration is used if none is provided:

influxdb:
   db: graphite
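
A slightly fuller minimal configuration, with illustrative connection values - key names assumed from the Full Configuration Example - might look like:

finders:
  - influxgraph.InfluxDBFinder
influxdb:
   # InfluxDB connection settings - values shown are illustrative
   host: localhost
   port: 8086
   user: root
   pass: root
   db: graphite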

Full Configuration Example

See Graphite-API example configuration file for a complete configuration example.

Aggregation function configuration

The finder supports configurable aggregation and selector functions to use per metric path regular expression pattern. This is the equivalent of storage-aggregation.conf in Graphite’s carbon-cache.

The default aggregation function is mean, used when no configuration is provided or no configured pattern matches.

InfluxGraph has pre-defined aggregation configuration matching carbon-cache defaults, namely:

aggregation_functions:
    \.min$ : min
    \.max$ : max
    \.last$ : last
    \.sum$ : sum

Defaults are overridden if aggregation_functions is configured in graphite-api.yaml, as shown in the configuration example.
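
As a sketch - assuming, per the Full Configuration Example, that aggregation_functions sits under the influxdb section of graphite-api.yaml; the final pattern is purely hypothetical - an override might look like this:

influxdb:
   db: graphite
   aggregation_functions:
       # carbon-cache like defaults
       \.min$ : min
       \.max$ : max
       \.last$ : last
       \.sum$ : sum
       # hypothetical custom pattern - use max for metrics under load.
       ^load\. : max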

An error will be printed to stderr if a configured aggregation function is not a known valid InfluxDB aggregation or selector method per InfluxDB function list.

Transformation functions, for example derivative, may _not_ be used as they require a separate aggregation to be performed. Transformations are performed by Graphite-API instead, which also supports pluggable functions.

Known InfluxDB aggregation and selector functions are defined at influxgraph.constants.INFLUXDB_AGGREGATIONS and can be overridden if necessary.

Memcached InfluxDB

Memcached can be used to cache InfluxDB data so the Graphite-API can avoid querying the DB if it does not have to.

The memcache TTL configuration shown in the Full Configuration Example applies only to the InfluxDB series list; the TTL for data queries is set to the group by interval used.

For example, for a query spanning twenty-four hours, a group by interval of one minute is used by default. TTL for memcache is set to one minute for that query.

For a query spanning one month, a fifteen minute group by interval is used by default. TTL is also set to fifteen minutes for that query.
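
To enable memcached, a minimal sketch - assuming, per the Full Configuration Example, that memcache settings nest under the influxdb section; exact key names may differ - is:

influxdb:
   db: graphite
   memcache:
     # local memcached service to cache InfluxDB responses in
     host: localhost
     # TTL in seconds for the cached series list only - data query
     # TTL follows the calculated group by interval, as described above
     ttl: 900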

Calculated intervals

A data group by interval is automatically calculated depending on the date/time range of the query. This keeps data size in check regardless of query range and speeds up graph generation for large ranges.

Default configuration mirrors what Grafana uses with the native InfluxDB API.

Overriding the automatically calculated intervals can be done via the optional deltas configuration. See Full Configuration Example file for all supported configuration options.

Unlike other Graphite compatible data stores, InfluxDB performs aggregation on data query, not on ingestion. Queries made by InfluxGraph are therefore always aggregation queries with a group by clause.

Users that wish to retrieve all, non-aggregated, data points regardless of date/time range are advised to query InfluxDB directly.

Varnish caching InfluxDB API

The following is a sample configuration of Varnish as an HTTP cache in front of InfluxDB’s HTTP API. It uses Varnish’s default TTL of 60 sec for all InfluxDB queries.

The intention is for a local (to InfluxDB) Varnish service to cache frequently accessed data and protect the database from multiple identical requests, for example multiple users viewing the same dashboard.

InfluxGraph configuration should use Varnish port to connect to InfluxDB.

Unfortunately, given that clients like Grafana use POST requests to query the Graphite API, which cannot be cached, using Varnish in front of a Graphite-API webapp would have no effect. Multiple requests for the same dashboard/graph will therefore still hit Graphite-API, but with Varnish in front of InfluxDB the more sensitive DB is spared the duplicated queries.

Substitute the default 8086 backend port with the InfluxDB API port for your installation if needed:

backend default {
  .host = "127.0.0.1";
  .port = "8086";
}

sub vcl_recv {
  unset req.http.cookie;
}

Graphite API example configuration:

finders:
  - influxgraph.InfluxDBFinder
influxdb:
  port: <varnish port>

Where <varnish port> is Varnish’s listening port.

Any other HTTP caching service will work just as well.

Optional C Extensions

The supported interpreters are listed below, in order of fastest to slowest, with and without C extensions. How much faster depends largely on the hardware and compiler used - expect at least 15x and 4x performance increases for CPython with extensions and PyPy respectively, compared to standard CPython without extensions.

CPython with extensions will also use about 20x less memory for the index than either PyPy or CPython without extensions.

  1. CPython with C extensions

  2. PyPy

  3. CPython

There are two performance tests in the repository that can be used to see relative performance with and without extensions, for index and template functionality respectively. On PyPy extensions are purposefully disabled.

Known Limitations

Data fill parameter and counter values

Changed in version 1.3.6

As of version 1.3.6, the default fill parameter is null, so as not to add values that do not exist in the data - the default was previous in prior versions.

This default will break derivative calculated counter values when the data sampling interval exceeds the configured group by interval for the query - see Calculated intervals.

For example, with a data sampling rate of sixty (60) seconds and the default deltas configuration, queries of thirty (30) minutes and below will use a thirty (30) second interval and will contain null datapoints. This in turn causes Graphite functions like derivative and non_negative_derivative to return only null datapoints.

The fill parameter is configurable - see Full Configuration Example - but currently applies to all metric paths.

For derivative and related functions to work, either set the deltas configuration so intervals do not go below the data sampling interval, or set the fill configuration to previous.
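
A minimal sketch of the latter option, assuming the fill setting is configured under the influxdb section as in the Full Configuration Example:

influxdb:
   db: graphite
   # restore pre-1.3.6 behaviour so derivative based counters keep working
   # when the group by interval is below the data sampling interval
   fill: previous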

Index for Graphite metric paths

The index implementation via the native code extension releases Python's GIL as much as possible; however, there will still be a response time increase while the index is being re-built.

Without extensions the response time increase will be much higher - building with extensions is highly recommended.

That said, building extensions can be disabled by running setup.py with the DISABLE_INFLUXGRAPH_CEXT=1 environment variable set. A notice will be displayed by setup.py that extensions have been disabled.

Note that without the native extension, performance is much lower and memory use of the index much higher.
