Postgres insert/update with pandas DataFrames.

Project description

pangres

Thanks to freesvg.org for the logo assets

Upsert with pandas DataFrames (ON CONFLICT DO NOTHING or ON CONFLICT DO UPDATE) for PostgreSQL, MySQL, SQLite and potentially other databases behaving like SQLite (untested), with some additional optional features (see Features). Also handles the creation of non-existing SQL tables and schemas.

Features

  1. (optional) Automatic column creation (when a column exists in the DataFrame but not in the SQL table).
  2. (optional) Automatic column type alteration for columns that are empty in the SQL table (except for SQLite, where alteration is limited).
  3. Creates the table if it is missing.
  4. Creates missing schemas in Postgres (and potentially other databases that have a schema system).
  5. JSON is supported (unlike with pd.to_sql), with some exceptions (see Caveats).
  6. Fast (except for SQLite, where some help is needed).
  7. Works even if not all columns defined in the SQL table are present in the DataFrame.
  8. SQL injection safe (schema, table and column names are escaped and values are given as parameters).
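The upsert mechanics behind features like these can be illustrated with plain SQL using the standard library's sqlite3 module. This is a minimal sketch of what ON CONFLICT DO UPDATE/DO NOTHING do, not pangres' own code (the table and column names are made up):

```python
import sqlite3

# In-memory database; requires SQLite >= 3.24 for the UPSERT syntax.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profile (id INTEGER PRIMARY KEY, name TEXT, likes INTEGER)")
conn.execute("INSERT INTO profile VALUES (1, 'abc', 10)")

# ON CONFLICT DO UPDATE: the existing row is overwritten with the new values.
conn.execute(
    "INSERT INTO profile VALUES (1, 'abc', 99) "
    "ON CONFLICT (id) DO UPDATE SET name=excluded.name, likes=excluded.likes"
)
# ON CONFLICT DO NOTHING: the existing row is left untouched.
conn.execute("INSERT INTO profile VALUES (1, 'xyz', 0) ON CONFLICT (id) DO NOTHING")

print(conn.execute("SELECT likes FROM profile WHERE id=1").fetchone()[0])  # 99
```

pangres generates the equivalent statements (per database flavor) for every row of the DataFrame, using the DataFrame's index as the primary key.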

Tested with

  • Python 3.7.3 and Python 3.8.0
  • MySQL 5.7.29 using pymysql 0.9.3
  • PostgreSQL 9.6.17 using psycopg2 2.8.4
  • SQLite 3.28.0 using sqlite3 2.6.0

Gotchas and caveats

All flavors

  1. We can't create JSON columns automatically, but we can insert JSON-like objects (list, dict) into existing JSON columns.
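Inserting a JSON-like object into an existing column boils down to serializing it before binding it as a parameter. A minimal stdlib sketch of the idea (not pangres' implementation; SQLite's TEXT stands in here for an existing JSON column):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# Pretend "payload" is a pre-existing JSON column.
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, payload TEXT)")

obj = {"tags": ["a", "b"], "nested": {"ok": True}}
# Serialize the dict and bind it as a parameter (SQL injection safe).
conn.execute("INSERT INTO docs VALUES (?, ?)", (1, json.dumps(obj)))

restored = json.loads(conn.execute("SELECT payload FROM docs WHERE id=1").fetchone()[0])
print(restored == obj)  # True
```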

Postgres

  1. "%", ")" and "(" in column names will most likely cause errors with PostgreSQL (this is due to psycopg2 and also affect pd.to_sql). Use the function pangres.fix_psycopg2_bad_cols to "clean" the columns in the DataFrame. You'll also have to rename columns in the SQL table accordingly (if the table already exists).
  2. Even though we only do data type alteration on empty columns, since we don't want to lose column information (e.g. constraints) we use true column alteration (instead of drop+create) so the old data type must be castable to the new data type. Postgres seems a bit restrictive in this regard even when the columns are empty (e.g. BOOLEAN to TIMESTAMP is impossible).
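The gist of such a column cleanup can be sketched in a few lines. This is a hypothetical illustration only, not pangres' actual implementation; use pangres.fix_psycopg2_bad_cols on real DataFrames:

```python
def clean_column(name: str) -> str:
    """Strip characters that trip up psycopg2 in column names.

    Hypothetical sketch -- pangres.fix_psycopg2_bad_cols is the real helper.
    """
    for bad in ("%", "(", ")"):
        name = name.replace(bad, "")
    return name.strip()

print(clean_column("growth (%)"))  # "growth"
```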

SQLite

  1. SQLite must be version 3.24.4 or higher: the UPSERT syntax does not exist in earlier versions.
  2. Column type alteration is not possible with SQLite.
  3. SQLite inserts can be up to 5 times slower than pd.to_sql for reasons that are not yet clear. If you can help, please contact me!
  4. Inserts with 1000 columns or more are not supported due to a limit of 999 parameters per query. One way to fix this would be inserting the columns progressively, but this seems quite tricky. If you know a better way, please contact me.
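Both SQLite caveats can be checked up front from Python. A minimal sketch using the stdlib sqlite3 module (the 999-parameter limit is SQLite's historical default for SQLITE_MAX_VARIABLE_NUMBER):

```python
import sqlite3

# 1. Check whether the runtime SQLite library meets the version requirement above.
version = tuple(int(x) for x in sqlite3.sqlite_version.split("."))
upsert_supported = version >= (3, 24, 4)
print(upsert_supported)

# 2. With at most 999 bound parameters per query, the number of rows that fit
#    into a single multi-row INSERT is bounded by the column count.
SQLITE_MAX_PARAMS = 999

def max_rows_per_insert(n_columns: int) -> int:
    return SQLITE_MAX_PARAMS // n_columns

print(max_rows_per_insert(10))  # 99
```

With 1000 or more columns, max_rows_per_insert returns 0, which is why such inserts are unsupported.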

MySQL

  1. MySQL will often change the order of the primary keys in the SQL table when using INSERT ... ON CONFLICT ... DO NOTHING/UPDATE. This seems to be expected behavior, so there is nothing we can do about it, but please keep it in mind!
  2. You may need to provide SQL dtypes: e.g. for a primary key on a text column you will need to provide a character length (e.g. VARCHAR(50)), because MySQL does not support indices/primary keys with flexible text length. pd.to_sql has the same issue.

Notes

This is a library I had been using privately in production with very good results and decided to publish.

Ideally such features will be integrated into pandas (there is already a PR on the way) and I would like to add the option of adding columns via another PR.

There is also pandabase, which does almost the same thing (plus lots of extra features), but my implementation is different. By the way, big thanks to pandabase and the SQL part of pandas, which helped a lot.

Installation

pip install pangres

Additionally, depending on which database you want to work with, you will need to install the corresponding driver library (note that SQLite support is included in the Python standard library):

  • Postgres
pip install psycopg2
  • MySQL
pip install pymysql

Usage

Head over to pangres' wiki!

Contributing

Pull requests/issues are welcome.

Testing

You will need a SQLite, MySQL and Postgres database available for testing.

Clone pangres, then set your current working directory to the root of the cloned repository. Then use the commands below, replacing the following variables:

  • SQLITE_CONNECTION_STRING: replace with a SQLite sqlalchemy connection string (e.g. "sqlite:///test.db")
  • POSTGRES_CONNECTION_STRING: replace with a Postgres sqlalchemy connection string (e.g. "postgres://user:password@localhost:5432/database"). Specifying a schema is optional for Postgres (it defaults to public).
  • MYSQL_CONNECTION_STRING: replace with a MySQL sqlalchemy connection string (e.g. "mysql+pymysql://user:password@localhost:3306/database")
# 1. Create and activate the build environment
conda env create -f environment.yml
conda activate pangres-dev
# 2. Install pangres in editable mode (changes are reflected upon reimporting)
pip install -e .
# 3. Run pytest
# -s prints stdout
# -v prints test parameters
# --cov=./pangres shows coverage only for pangres
pytest -s -v pangres --cov=./pangres --conn_string=$SQLITE_CONNECTION_STRING
pytest -s -v pangres --cov=./pangres --conn_string=$POSTGRES_CONNECTION_STRING --schema=tests
pytest -s -v pangres --cov=./pangres --conn_string=$MYSQL_CONNECTION_STRING

Download files

Source Distribution

pangres-2.tar.gz (23.4 kB)

File metadata

  • Download URL: pangres-2.tar.gz
  • Size: 23.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.4.0 requests-toolbelt/0.8.0 tqdm/4.36.1 CPython/3.7.4

File hashes

Hashes for pangres-2.tar.gz:

  • SHA256: c637be46cb35a863c7ff42473e9605c4c5c0f0a8b1e2feea463ced561452a5cd
  • MD5: 8341b1d5d5e5c0619df23c9dc176b225
  • BLAKE2b-256: 4111911563f4d4c374abc8ee5f72aab6eb2b39b54eeed8ca181d011ab2fa132f
