
Database incremental exports, transfers, imports, ETL, and creation/management

Project description

A Python/CLI tool for:

  1. Exporting database tables to compressed CSV files.
  2. Transferring tables from one database server to another.
  3. Loading data into database tables (from both files and Python objects).
  4. Creating/managing PostgreSQL/TimescaleDB tables, views, materialized views, functions, procedures, continuous aggregates, and scheduled tasks.
  5. Checking for mismatched attributes between SQLAlchemy tables/models and actual tables in a database.

Currently, only PostgreSQL and PostgreSQL-based databases (e.g. TimescaleDB) are supported.

Install

pip install dbflows

If using the export functionality (exporting database tables to compressed CSV files), you will additionally need the psql executable available on your system. To install psql on Debian/Ubuntu:

# enable PostgreSQL package repository
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget -qO- https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo tee /etc/apt/trusted.gpg.d/pgdg.asc &>/dev/null
# replace `16` with the major version of your database
sudo apt update && sudo apt install -y postgresql-client-16
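To verify that the client is installed and on your PATH:

psql --version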

Export

Features:

  • File splitting. Create separate export files based on a 'slice column' (an orderable column, e.g. datetime or integer) and/or a 'partition column' (a categorical column, e.g. a name string).
  • Incremental exports (export only data not yet exported). This works for both single-file and multiple/split-file output.

Examples

from dbflows import export
import sqlalchemy as sa
# the table to export data from
my_table = sa.Table(
    "my_table", 
    sa.MetaData(schema="my_schema"), 
    sa.Column("inserted", sa.DateTime),
    sa.Column("category", sa.String),
    sa.Column("value", sa.Float),
)
# one or more save locations (2 in this case)
save_locs = ["s3://my-bucket/my_table_exports", "/path/to/local_dir/my_table_exports"]
# database URL
url = "postgresql://user:password@hostname:port/database-name"

Export the entire table to a single file.

export(
    table=my_table,
    engine=url, # URL string or SQLAlchemy engine
    save_locs=save_locs
)

CLI equivalent:

db export table \
my_table.my_schema \
postgresql://user:password@hostname:port/database-name \
s3://my-bucket/my_table_exports \
/path/to/local_dir/my_table_exports

Export CSVs of at most 500 MB each, sorted and sliced on the inserted datetime column.

export(
    table=my_table,
    engine=url, # URL string or SQLAlchemy engine
    save_locs=save_locs,
    slice_column=my_table.c.inserted,
    file_max_size="500 MB"
)
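Because exports are incremental, re-running the same call later should export only rows added since the previous export, picking up from the last exported slice.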

Create a separate CSV export for each unique value of the category column.

export(
    table=my_table,
    engine=url, # URL string or SQLAlchemy engine
    save_locs=save_locs,
    partition_column=my_table.c.category
)

CLI equivalent:

# save to one or more locations (s3 paths or local)
db export table \
my_table.my_schema \
postgresql://user:password@hostname:port/database-name \
s3://my-bucket/my_table_exports \
/path/to/local_dir/my_table_exports \
--partition-column category # or "-p category"

Export CSVs of at most 500 MB each for each unique category, sorted and sliced on the inserted datetime column.

export(
    table=my_table,
    engine=url, # URL string or SQLAlchemy engine
    save_locs=save_locs,
    slice_column=my_table.c.inserted,
    file_max_size="500 MB",
    partition_column=my_table.c.category,
)

Loading/Importing

Loading from Python objects

Create a PgLoader instance for your table and use the load method to load batches of rows.
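A minimal sketch, reusing my_table and url from the export examples above. Only the PgLoader class name and its load method come from the description here; the constructor arguments and row format shown are assumptions, so check the package for the exact signature:

from datetime import datetime
from dbflows import PgLoader

# construct a loader for the table (parameter names are assumptions)
loader = PgLoader(table=my_table, engine=url)

# load a batch of rows (dicts keyed by column name; assumed format)
loader.load([
    {"inserted": datetime.now(), "category": "a", "value": 1.0},
    {"inserted": datetime.now(), "category": "b", "value": 2.0},
])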

Loading from CSV files

Use import_csvs to load CSV files with parallel worker threads. This internally uses timescaledb-parallel-copy, which can be installed with:

go install github.com/timescale/timescaledb-parallel-copy/cmd/timescaledb-parallel-copy@latest
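A minimal sketch of a call, again reusing names from the examples above. Only the import_csvs function name comes from the description; the parameter names below (engine, table, csv_dir, workers) are assumptions for illustration, so verify them against the actual function signature:

from dbflows import import_csvs

# hypothetical parameters -- verify against the actual signature
import_csvs(
    engine=url,  # database URL or SQLAlchemy engine (assumed)
    table=my_table,  # target table (assumed)
    csv_dir="/path/to/local_dir/my_table_exports",  # directory of CSV files (assumed)
    workers=4,  # number of parallel worker threads (assumed)
)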
