
Netatmo2InfluxDB

Send Netatmo Thermostat data to InfluxDB.

Netatmo2InfluxDB supports importing data from multiple Netatmo Thermostats, across multiple homes, into InfluxDB.

How it works

By using the Netatmo Developer API, we can fetch data from the api/homesdata and api/getroommeasure endpoints (see: netatmo2influxdb/data.py). The api/getroommeasure endpoint offers a granularity of 30 minutes, which is fine for our purposes.
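For illustration, here is a minimal sketch of such a request using the requests library; the helper name and parameter handling are assumptions for illustration, not the project's actual code (that lives in netatmo2influxdb/data.py):

import requests

# Sketch: fetch 30-minute temperature samples for one room.
def get_room_measure(access_token, home_id, room_id, date_begin, date_end):
    response = requests.get(
        "https://api.netatmo.com/api/getroommeasure",
        headers={"Authorization": "Bearer " + access_token},
        params={
            "home_id": home_id,
            "room_id": room_id,
            "scale": "30min",          # the finest granularity the endpoint offers
            "type": "temperature",
            "date_begin": date_begin,  # epoch seconds
            "date_end": date_end,
        },
    )
    response.raise_for_status()
    return response.json()["body"]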

Before we can retrieve any data, we first have to obtain access and refresh tokens. We get those for a given username via the oauth2/token endpoint (see: netatmo2influxdb/tokens.py). This requires a one-time use of the user's password; the resulting tokens are then stored in a small SQLite database (netatmo.db).
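A minimal sketch of that token exchange, assuming a Netatmo app's client_id and client_secret are at hand (the project's actual implementation is in netatmo2influxdb/tokens.py):

import requests

TOKEN_URL = "https://api.netatmo.com/oauth2/token"

# Sketch of the one-time password grant; afterwards only the stored
# refresh token is needed to obtain new access tokens.
def fetch_tokens(client_id, client_secret, username, password):
    response = requests.post(TOKEN_URL, data={
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        "password": password,
        "scope": "read_thermostat",
    })
    response.raise_for_status()
    body = response.json()
    return body["access_token"], body["refresh_token"]

# Sketch: exchange a stored refresh token for a fresh token pair.
def refresh_tokens(client_id, client_secret, refresh_token):
    response = requests.post(TOKEN_URL, data={
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    })
    response.raise_for_status()
    body = response.json()
    return body["access_token"], body["refresh_token"]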

In the normal use case we want to retrieve all the data there is. For this we use the command-line argument --all. This retrieves all home IDs and their related room IDs (see: netatmo2influxdb/data.py:get_home and netatmo2influxdb/parser.py) and collects the data for each. To avoid re-fetching the same data on every run, each import record (username, home_id, room_id, start_ts, end_ts, count) is stored.
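A sketch of what that import history could look like in SQLite; the table name and exact schema are assumptions, only the stored columns come from the project:

import sqlite3

conn = sqlite3.connect("netatmo.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS imports (
        username TEXT,
        home_id  TEXT,
        room_id  TEXT,
        start_ts INTEGER,
        end_ts   INTEGER,
        count    INTEGER
    )
""")

# Record a completed fetch so the same window is not requested again.
def record_import(username, home_id, room_id, start_ts, end_ts, count):
    with conn:  # commits on success
        conn.execute(
            "INSERT INTO imports VALUES (?, ?, ?, ?, ?, ?)",
            (username, home_id, room_id, start_ts, end_ts, count),
        )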

To store the data in InfluxDB, a few small transformations are made first: we convert the epoch timestamp to an ISO 8601 string (datetime.isoformat()) and unpack the temperature value. By using the InfluxDB client's SeriesHelper, we make sure records are sent in batches of 512 at a time. The following tags are included by default: user, home_name, home_id, room_name, room_id.
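A sketch of such a SeriesHelper subclass using the influxdb Python client; the measurement name and connection settings here are assumptions:

from influxdb import InfluxDBClient, SeriesHelper

influx_client = InfluxDBClient(host="localhost", port=8086, database="netatmo")

class ThermostatSeriesHelper(SeriesHelper):
    class Meta:
        client = influx_client
        series_name = "temperature"  # assumed measurement name
        fields = ["temperature"]
        tags = ["user", "home_name", "home_id", "room_name", "room_id"]
        bulk_size = 512              # flush once 512 points are buffered
        autocommit = True

# One call per sample; points accumulate and are written in batches of 512.
ThermostatSeriesHelper(
    user="jdoe", home_name="Home", home_id="1234abcd",
    room_name="Living room", room_id="5678efgh", temperature=20.5,
)
ThermostatSeriesHelper.commit()  # flush whatever is left in the buffer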

Because some of us like to have extra tags on our InfluxDB measurements, this capability is available through the --custom-tags argument. Just add space-separated tag:value pairs (see netatmo2influxdb/store.py:**custom_tags).
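The idea is roughly this (a sketch; the real parsing lives in netatmo2influxdb/store.py):

# Sketch: turn CLI "tag:value" strings into a dict of InfluxDB tags.
def parse_custom_tags(pairs):
    return dict(pair.split(":", 1) for pair in pairs)

# e.g. --custom-tags location:attic device:thermostat
parse_custom_tags(["location:attic", "device:thermostat"])
# -> {'location': 'attic', 'device': 'thermostat'}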

If you want to play around, there is the --interactive argument. It makes sure all your CLI arguments are parsed, but nothing is actually run. Use it like this: pipenv run python -it app {username} {args} --interactive. You can also do a dry run with --dry, which makes sure nothing is stored locally or in the InfluxDB instance.

Install

Install with pipenv: run pipenv install

Make sure to copy .env.tpl to .env and fill in the appropriate values.
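For orientation, an example .env; the variable names below are hypothetical placeholders, the authoritative list is in .env.tpl:

# Hypothetical keys -- check .env.tpl for the real ones
NETATMO_CLIENT_ID=<your Netatmo app client id>
NETATMO_CLIENT_SECRET=<your Netatmo app client secret>
INFLUXDB_HOST=localhost
INFLUXDB_DATABASE=netatmo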

CLI

Run the application with pipenv run app {netatmo username} {--all or --home ...} {optional arguments}

To see which homes and thermostats are available, run pipenv run app {netatmo username} --get-home.

usage: app [-h] [--home [home_id [['room_id'] ...]]]
           [--custom-tags [tag:value [tag:value ...]]] [--get-home] [--all]
           [--dry] [--clear-db] [--interactive]
           user

Gather thermostat data from Netatmo

positional arguments:
  user                  User to parse

optional arguments:
  -h, --help            show this help message and exit
  --home [home_id [['room_id'] ...]]
                        Homes and rooms to parse. Use format --home {home_id_1} {room_id_1} {room_id2} ... --home {home_id_2} ...
  --custom-tags [tag:value [tag:value ...]]
                        Provide custom tags for InfluxDB. Format: --custom-tags tag:value tag:value
  --get-home            Get home and room information
  --all                 Parse all homes and rooms
  --dry                 Do a dry-run (don't store in InfluxDB)
  --clear-db            Wipes database from users and import history
  --interactive         Allows interactive use (ignores all other args)

Use with Docker

Because I run all my internal tools in Docker, here is a brief description of how to get up and running.

Build the Docker Image

docker build -t netatmo2influxdb .

Use with Docker Crontab

Using Docker Crontab, we can start the container every 30 minutes and shut it down once the data has been collected.

Use and adjust the following config.json file for Crontab:

[{
    "schedule":"@every 30m",
    "image":"netatmo2influxdb",
    "dockerargs": "-d \
    --env-file /location/of/.env \
    -v /location/of/netatmo.db:/netatmo2influxdb/netatmo.db",
    "command":"python app [USERNAME] --all & shutdown -h now"
}]

If you don't have Crontab running yet, use the following command to run the container:

docker run -d \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    -v ./env:/opt/env:ro \
    -v /path/to/config/dir:/opt/crontab:rw \
    -v /path/to/logs:/var/log/crontab:rw \
    willfarrell/crontab:latest

