Netatmo2InfluxDB

Send Netatmo Thermostat data to InfluxDB.

Netatmo2InfluxDB supports the import of multiple Netatmo Thermostats across multiple houses into InfluxDB.

How it works

Using the Netatmo Developer API, we fetch data from the api/homesdata and api/getroommeasure endpoints (see netatmo2influxdb/data.py). The api/getroommeasure endpoint offers a granularity of 30 minutes, which is fine for our application.
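
For illustration only, a minimal call to api/getroommeasure might look like the sketch below. The endpoint and its parameters follow the public Netatmo API documentation; the function name and response handling are assumptions, not the actual code in data.py.

import requests

API_BASE = "https://api.netatmo.com/api"

def get_room_measure(access_token, home_id, room_id, start_ts, end_ts):
    """Fetch 30-minute temperature readings for one room (hypothetical helper)."""
    response = requests.get(
        API_BASE + "/getroommeasure",
        headers={"Authorization": "Bearer " + access_token},
        params={
            "home_id": home_id,
            "room_id": room_id,
            "scale": "30min",        # finest granularity the endpoint offers
            "type": "temperature",
            "date_begin": start_ts,  # epoch seconds
            "date_end": end_ts,
        },
    )
    response.raise_for_status()
    return response.json()["body"]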

To actually retrieve data, we first have to obtain access and refresh tokens. We get those for a given username via the oauth2/token endpoint (see netatmo2influxdb/tokens.py); this requires a one-time use of the user's password. The tokens are then stored in a small SQLite database (netatmo.db).
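
A rough sketch of that flow, assuming the standard OAuth2 password grant and a hypothetical single-table schema (the actual layout of netatmo.db may differ):

import sqlite3

import requests

TOKEN_URL = "https://api.netatmo.com/oauth2/token"

def fetch_tokens(client_id, client_secret, username, password):
    """One-time password grant; afterwards only the refresh token is needed."""
    response = requests.post(TOKEN_URL, data={
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        "password": password,
    })
    response.raise_for_status()
    payload = response.json()
    return payload["access_token"], payload["refresh_token"]

def store_tokens(username, access_token, refresh_token, db_path="netatmo.db"):
    """Persist the tokens so later runs never need the password again."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS tokens "
            "(username TEXT PRIMARY KEY, access_token TEXT, refresh_token TEXT)"
        )
        conn.execute(
            "INSERT OR REPLACE INTO tokens VALUES (?, ?, ?)",
            (username, access_token, refresh_token),
        )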

In the normal use case we want to retrieve all the data there is, using the --all command-line argument. This retrieves all home IDs and their room IDs (see netatmo2influxdb/data.py:get_home and netatmo2influxdb/parser.py) and collects the data. Because we don't want to re-fetch existing data on every run, all import records (username, home_id, room_id, start_ts, end_ts, count) are stored.
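
A hypothetical sketch of that bookkeeping, with an assumed imports table whose columns match the record fields above:

def last_import_end(conn, username, home_id, room_id):
    """Timestamp where the previous import stopped, or None on the first run."""
    row = conn.execute(
        "SELECT MAX(end_ts) FROM imports"
        " WHERE username = ? AND home_id = ? AND room_id = ?",
        (username, home_id, room_id),
    ).fetchone()
    return row[0]

def record_import(conn, username, home_id, room_id, start_ts, end_ts, count):
    """Remember what was fetched so the next run resumes from end_ts."""
    conn.execute(
        "INSERT INTO imports VALUES (?, ?, ?, ?, ?, ?)",
        (username, home_id, room_id, start_ts, end_ts, count),
    )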

To store the data in InfluxDB, some small changes have to be made: we parse the epoch timestamp into an ISO 8601 string (datetime.isoformat()) and unpack the temperature. By using the InfluxDB client's SeriesHelper, we make sure we only send batches of 512 records at a time. The following tags are included by default: user, home_name, home_id, room_name, room_id.
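
Putting that together with the influxdb Python client's SeriesHelper could look roughly like this. The measurement name and sample values are assumptions, the tag names are the defaults listed above, and the time keyword is only available in recent influxdb-python releases:

from datetime import datetime, timezone

from influxdb import InfluxDBClient, SeriesHelper

client = InfluxDBClient(host="localhost", port=8086, database="netatmo")

class ThermostatSeriesHelper(SeriesHelper):
    class Meta:
        client = client
        series_name = "temperature"  # measurement name (assumed)
        fields = ["temperature"]
        tags = ["user", "home_name", "home_id", "room_name", "room_id"]
        bulk_size = 512              # flush to InfluxDB every 512 points
        autocommit = True

# Epoch timestamp -> ISO 8601, as described above.
iso_time = datetime.fromtimestamp(1546300800, tz=timezone.utc).isoformat()

ThermostatSeriesHelper(
    user="alice", home_name="My home", home_id="abc123",
    room_name="Living room", room_id="42",
    temperature=20.5,
    time=iso_time,  # time kwarg: supported by recent influxdb-python (assumption)
)
ThermostatSeriesHelper.commit()  # flush any remaining points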

Because some of us like to have extra tags on our InfluxDB measurements, this capability is available through the --custom-tags argument. Just pass space-separated tag:value pairs (see netatmo2influxdb/store.py:**custom_tags).
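
Parsing those pairs into a tag dictionary is straightforward; a hypothetical helper:

def parse_custom_tags(pairs):
    """["site:attic", "floor:2"] -> {"site": "attic", "floor": "2"}"""
    return dict(pair.split(":", 1) for pair in pairs)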

If you want to play around, there is the --interactive argument. It ensures all your CLI arguments are parsed, but nothing is actually run. Use it like this: pipenv run python -it app {username} {args} --interactive. You can also do a dry run with --dry, which ensures nothing is stored locally or in the InfluxDB instance.

Install

Install the dependencies with pipenv by running pipenv install.

Make sure to copy .env.tpl to .env and fill in the appropriate values.
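
The authoritative variable names live in .env.tpl; purely as an illustration, the values you will need are the Netatmo app credentials and the InfluxDB connection details, along these lines:

# Hypothetical names -- use the ones from .env.tpl
NETATMO_CLIENT_ID=<your Netatmo app client id>
NETATMO_CLIENT_SECRET=<your Netatmo app client secret>
INFLUXDB_HOST=localhost
INFLUXDB_PORT=8086
INFLUXDB_DATABASE=netatmo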

CLI

Run the application with pipenv run app {netatmo username} {--all or --home ...} {optional arguments}

To see which homes and thermostats are available, run pipenv run app {netatmo username} --get-home.

usage: app [-h] [--home [home_id [['room_id'] ...]]]
           [--custom-tags [tag:value [tag:value ...]]] [--get-home] [--all]
           [--dry] [--clear-db] [--interactive]
           user

Gather thermostat data from Netatmo

positional arguments:
  user                  User to parse

optional arguments:
  -h, --help            show this help message and exit
  --home [home_id [['room_id'] ...]]
                        Homes and rooms to parse. Use format --home {home_id_1} {room_id_1} {room_id_2} ... --home {home_id_2} ...
  --custom-tags [tag:value [tag:value ...]]
                        Provide custom tags for InfluxDB. Format: --custom-tags tag:value tag:value
  --get-home            Get home and room information
  --all                 Parse all homes and rooms
  --dry                 Do a dry-run (don't store in InfluxDB)
  --clear-db            Wipe users and import history from the database
  --interactive         Allows interactive use (ignores all other args)

Use with Docker

Because I run all my internal tools in Docker, here is a brief description of how to get up and running.

Build the Docker Image

docker build -t netatmo2influxdb .

Use with Docker Crontab

Using Docker Crontab, we can start the container every 30 minutes and shut it down once the data has been collected.

Use and adjust the following config.json file for Crontab:

[{
    "schedule": "@every 30m",
    "image": "netatmo2influxdb",
    "dockerargs": "-d --env-file /location/of/.env -v /location/of/netatmo.db:/netatmo2influxdb/netatmo.db",
    "command": "python app [USERNAME] --all && shutdown -h now"
}]

If you don't have Crontab running yet, use the following command to run the container:

docker run -d \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    -v ./env:/opt/env:ro \
    -v /path/to/config/dir:/opt/crontab:rw \
    -v /path/to/logs:/var/log/crontab:rw \
    willfarrell/crontab:latest
