
A TimescaleDB client library with environment-based config and time-series data handling.

Project description

ProgressivePostgres

ProgressivePostgres is a Python client library for interacting with a TimescaleDB (or PostgreSQL) instance, configured via environment variables. It integrates seamlessly with the Zeitgleich library for improved time-series handling and includes optional features for bridging MQTT data ingestion through SwampClient.

ProgressivePostgres provides:

  • Time-Series Data Models — Integration with TimeSeriesData and MultiOriginTimeSeries.
  • Environment-Based Configuration — Minimizes boilerplate; simply use a .env file.
  • Automatic Table Creation — Optionally create hypertables for your data if they do not already exist.
  • Extra Columns Handling — Decide how to manage columns beyond the expected set: ignore, error, or append.
  • Asynchronous MQTT Integration — If combined with an MQTT client, seamlessly push sensor data into TimescaleDB.

Features

  • Simple DB Client: Quickly execute queries, insert data, and retrieve time-series rows.
  • Timestamp Normalization: TimeSeriesData can parse and convert timestamps among multiple formats (ISO, RFC3339, UNIX, etc.).
  • Automatic Hypertable Creation: Create TimescaleDB hypertables on-the-fly if desired.
  • Multi-Origin Data Model: Use MultiOriginTimeSeries for simultaneously managing data from multiple sensors or devices.
  • Optional MQTT Bridge: Combine with an MQTT client for real-time sensor data ingestion.
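The timestamp-normalization feature can be sketched in plain Python: accept ISO 8601 / RFC 3339 strings or UNIX epoch numbers and convert everything to a single timezone-aware UTC datetime. This is an illustrative sketch of the idea using only the standard library, not Zeitgleich's actual implementation; the helper name `normalize_timestamp` is hypothetical.

```python
from datetime import datetime, timezone

def normalize_timestamp(ts):
    """Convert an ISO 8601/RFC 3339 string or a UNIX epoch number
    (seconds, possibly fractional) to a timezone-aware UTC datetime."""
    if isinstance(ts, (int, float)):
        return datetime.fromtimestamp(ts, tz=timezone.utc)
    # fromisoformat handles most ISO 8601 forms; map a trailing 'Z'
    # (common in RFC 3339) to an explicit UTC offset first.
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    if dt.tzinfo is None:          # naive timestamps are assumed to be UTC
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

print(normalize_timestamp(1700000000))              # 2023-11-14 22:13:20+00:00
print(normalize_timestamp("2023-11-14T22:13:20Z"))  # same instant
```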

Installation

Install ProgressivePostgres (and any needed dependencies) via:

pip install ProgressivePostgres

or, after cloning the repository, install it locally with the provided Makefile:

make install

Configuration

ProgressivePostgres reads its configuration from environment variables that share a common prefix. The prefix is the name you pass to the constructor: Client(name="TS") reads TS_DB_HOST, TS_DB_PORT, and so on.
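The prefix-resolution mechanism can be illustrated with plain os.environ lookups. This is a sketch of the pattern, not the Client's actual code; `load_prefixed_config` and the defaults shown are hypothetical, chosen to match the table below.

```python
import os

DEFAULTS = {
    "DB_HOST": "localhost",
    "DB_PORT": "5432",
    "DB_NAME": "timescale",
    "DB_USER": "postgres",
}

def load_prefixed_config(prefix, defaults=DEFAULTS):
    """Resolve {PREFIX}_<KEY> environment variables, falling back to defaults."""
    return {key: os.environ.get(f"{prefix}_{key}", default)
            for key, default in defaults.items()}

os.environ["TS_DB_PORT"] = "5444"      # simulate an entry loaded from .env
cfg = load_prefixed_config("TS")
print(cfg["DB_PORT"])   # "5444" (from the environment)
print(cfg["DB_HOST"])   # "localhost" (default)
```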

Environment Variables

Variable | Description | Default | Options
---------|-------------|---------|--------
{PREFIX}_DB_HOST | Hostname for TimescaleDB/PostgreSQL. | localhost |
{PREFIX}_DB_PORT | Port for TimescaleDB. | 5432 |
{PREFIX}_DB_NAME | Database name. | timescale |
{PREFIX}_DB_USER | Username for DB authentication. | postgres |
{PREFIX}_DB_PASS | Password for DB authentication. | None |
{PREFIX}_DB_AUTOCOMMIT | Whether to auto-commit each statement. | true | true, false
{PREFIX}_LOG_LEVEL | Log level. | DEBUG | DEBUG, INFO, WARNING, ERROR
{PREFIX}_ORIGIN_SPLIT_CHAR | Char used to split origins (e.g. machine1/sensorA). | / |
{PREFIX}_ORIGIN_JOIN_CHAR | Char used to join origin parts for table naming. | / |
{PREFIX}_TIMESTAMP_COLUMN | Name of the time column in DB. | timestamp |
{PREFIX}_VALUE_COLUMN | Name of the primary value column. | value |
{PREFIX}_CREATE_TABLES_IF_NOT_EXIST | Automatically create hypertables if missing. | true | true, false
{PREFIX}_EXTRA_COLUMNS_HANDLING | Handling of extra columns. | append | ignore, error, append
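The origin handling described above (split an origin like machine1/sensorA on the split char, join the parts with the join char to form a table name) and the optional hypertable creation can be sketched as follows. The `SELECT create_hypertable(...)` statement is TimescaleDB's standard API; the Python helpers themselves are illustrative, not part of this library.

```python
def origin_to_table(origin, split_char="/", join_char="_"):
    """Map an origin such as 'machine1/sensorA' to a table name 'machine1_sensorA'."""
    return join_char.join(origin.split(split_char))

def hypertable_ddl(table, ts_col="timestamp", value_col="value"):
    """DDL a client could run when CREATE_TABLES_IF_NOT_EXIST is enabled."""
    return (
        f'CREATE TABLE IF NOT EXISTS "{table}" '
        f'("{ts_col}" TIMESTAMPTZ NOT NULL, "{value_col}" DOUBLE PRECISION);\n'
        f"SELECT create_hypertable('{table}', '{ts_col}', if_not_exists => TRUE);"
    )

table = origin_to_table("machine1/sensorA")
print(table)                  # machine1_sensorA
print(hypertable_ddl(table))
```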

Usage & Examples

To see working code samples, please refer to the examples directory in this repository. Highlights include:

  • Basic Example: Demonstrates how to connect to a local TimescaleDB instance, run simple queries, and handle .env environment variables.
  • MQTT Logger Example: Combines ProgressivePostgres with an MQTT client, pushing messages from topics into TimescaleDB.
  • Zeitgleich Example: Showcases using TimeSeriesData and MultiOriginTimeSeries for multi-sensor data insertion and retrieval.
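The MQTT bridge idea, where each topic maps to an origin and each payload to a row, can be sketched without a broker. `message_to_row` is a hypothetical helper showing the shape of the topic-to-table mapping, not SwampClient's actual API, and the JSON payload layout is an assumption.

```python
import json

def message_to_row(topic, payload, split_char="/", join_char="_"):
    """Turn an MQTT message into a (table, row) pair ready for insertion.
    The payload is assumed to be JSON with 'timestamp' and 'value' keys."""
    table = join_char.join(topic.split(split_char))
    data = json.loads(payload)
    return table, {"timestamp": data["timestamp"], "value": data["value"]}

table, row = message_to_row(
    "machine1/sensorA",
    '{"timestamp": "2024-01-01T00:00:00Z", "value": 21.5}',
)
print(table, row)  # machine1_sensorA {'timestamp': '2024-01-01T00:00:00Z', 'value': 21.5}
```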

Example .env

A typical .env file might look like:

TS_DB_HOST="localhost"
TS_DB_PORT="5444"
TS_DB_NAME="timeseries"
TS_DB_USER="postgres"
TS_DB_PASS="pwd"
TS_LOG_LEVEL="DEBUG"
TS_ORIGIN_SPLIT_CHAR="/"
TS_ORIGIN_JOIN_CHAR="_"
TS_TIMESTAMP_COLUMN="timestamp"
TS_VALUE_COLUMN="value"
TS_EXTRA_COLUMNS_HANDLING="append"
TS_CREATE_TABLES_IF_NOT_EXIST="true"

Load these environment variables in your Python script using python-dotenv:

from dotenv import load_dotenv

load_dotenv()

Then instantiate a client:

from ProgressivePostgres import Client

client = Client(name="TS")  # "TS" will be the prefix for env variables
# ...

TODOs

  • Automatic Migrations: Provide tools to manage schema migrations automatically.
  • Advanced Query Builder: Add an optional query builder for more complex queries (joins, filters, etc.).
  • Transaction Handling: More robust transaction management (automatic rollback on certain errors).
  • Comprehensive Testing: Add unit and integration tests across various DB versions.
  • Enhanced MQTT Integration: Provide additional examples.

License

Licensed under the MIT License.

Download files

Download the file for your platform.

Source Distribution

progressivepostgres-0.0.7.tar.gz (162.1 kB, source distribution)

Built Distribution

progressivepostgres-0.0.7-py3-none-any.whl (11.2 kB, Python 3 wheel)

File details

Details for the file progressivepostgres-0.0.7.tar.gz.

File metadata

  • Download URL: progressivepostgres-0.0.7.tar.gz
  • Upload date:
  • Size: 162.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.12

File hashes

Hashes for progressivepostgres-0.0.7.tar.gz:

  • SHA256: 4da351d5bd8d51c8b217029b4568ebafb605fb097e7fb02d3970317c2222ace2
  • MD5: 40fe74a1e93829598531f54929895753
  • BLAKE2b-256: cc01086875911a37a18dcfcd1f5fb44decfe7f68cc1861a9a9774b5e7dbfb983


File details

Details for the file progressivepostgres-0.0.7-py3-none-any.whl.

File metadata

File hashes

Hashes for progressivepostgres-0.0.7-py3-none-any.whl:

  • SHA256: f5df00e2d44bf0d68accc56445d7075ca7b2ed4c54781dc77916837af1f90574
  • MD5: f2d6eb95a1bec85935c09cc061718aa4
  • BLAKE2b-256: a32daf3788eaddf9ba14d052a87375d17205474e47c50a5dd1044ad5caec2847

