

Project description

Databank


Databank is an easy-to-use Python library for making raw SQL queries in a multi-threaded environment.

No ORM, no frills. Only raw SQL queries and parameter binding. Thread-safe. Built on top of SQLAlchemy.

IBM System/360 Model 91

(The photo was taken by Matthew Ratzloff and is licensed under CC BY-NC-ND 2.0.)

Installation

You can install the latest stable version from PyPI:

$ pip install databank

Database adapters are not included. Install the one you need, e.g. psycopg2 for PostgreSQL:

$ pip install psycopg2

Usage

Connect to the database of your choice:

>>> from databank import Database
>>> db = Database("postgresql://user:password@localhost/db", pool_size=2)

The keyword arguments are passed directly to SQLAlchemy's create_engine() function, so the available options depend on the database you connect to, e.g. the size of the connection pool.

If you are using databank in a multi-threaded environment (e.g. in a web application), make sure the pool size is at least the number of worker threads.
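To see why the pool must cover your worker threads, here is a toy, hypothetical pool (not databank's or SQLAlchemy's implementation) built on queue.Queue: with only two slots, any worker beyond the pool size blocks until another worker returns a connection.

```python
import queue
import threading

class TinyPool:
    """Toy bounded 'connection' pool shared by worker threads."""

    def __init__(self, size: int):
        self._slots = queue.Queue()
        for i in range(size):
            self._slots.put(f"conn-{i}")

    def acquire(self, timeout: float = 5.0) -> str:
        # Blocks if all connections are checked out.
        return self._slots.get(timeout=timeout)

    def release(self, conn: str) -> None:
        self._slots.put(conn)

pool = TinyPool(size=2)
results = []

def worker(n: int) -> None:
    conn = pool.acquire()
    try:
        results.append((n, conn))
    finally:
        pool.release(conn)

# Four workers compete for two connections; all eventually get one,
# but two of them have to wait. With long-running queries and a pool
# smaller than the thread count, that wait becomes real contention.
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 4
```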

Let's create a simple table:

>>> db.execute("CREATE TABLE beatles (id SERIAL PRIMARY KEY, member TEXT NOT NULL);")

You can insert multiple rows at once:

>>> params = [
...     {"id": 0, "member": "John"},
...     {"id": 1, "member": "Paul"},
...     {"id": 2, "member": "George"},
...     {"id": 3, "member": "Ringo"}
... ]
>>> db.execute_many("INSERT INTO beatles (id, member) VALUES (:id, :member);", params)

Fetch a single row:

>>> db.fetch_one("SELECT * FROM beatles;")
{'id': 0, 'member': 'John'}

You can also fetch just n rows:

>>> db.fetch_many("SELECT * FROM beatles;", n=2)
[{'id': 0, 'member': 'John'}, {'id': 1, 'member': 'Paul'}]

Or all rows:

>>> db.fetch_all("SELECT * FROM beatles;")
[{'id': 0, 'member': 'John'},
 {'id': 1, 'member': 'Paul'},
 {'id': 2, 'member': 'George'},
 {'id': 3, 'member': 'Ringo'}]
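The same insert-and-fetch pattern can be tried with nothing but the standard library: sqlite3 also supports :name parameter binding, and a row factory makes rows come back as mappings rather than tuples. This is a stdlib illustration of the pattern, not databank's API.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows behave like mappings
conn.execute("CREATE TABLE beatles (id INTEGER PRIMARY KEY, member TEXT NOT NULL);")

params = [
    {"id": 0, "member": "John"},
    {"id": 1, "member": "Paul"},
    {"id": 2, "member": "George"},
    {"id": 3, "member": "Ringo"},
]
# Named placeholders bound from a list of dicts, one execution per dict.
conn.executemany("INSERT INTO beatles (id, member) VALUES (:id, :member);", params)

rows = [dict(row) for row in conn.execute("SELECT * FROM beatles ORDER BY id;")]
print(rows[0])    # {'id': 0, 'member': 'John'}
print(len(rows))  # 4
```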

If you are using PostgreSQL with jsonb columns, you can use a helper function to serialize the parameter values:

>>> from databank.utils import serialize_params
>>> serialize_params({"member": "Ringo", "song": ["Don't Pass Me By", "Octopus's Garden"]})
{'member': 'Ringo', 'song': '["Don\'t Pass Me By", "Octopus\'s Garden"]'}
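Judging from the output above, the helper presumably JSON-encodes non-scalar values while leaving scalars untouched. A minimal reimplementation sketch (hypothetical, not the library's code) might look like this:

```python
import json

def serialize_params_sketch(params: dict) -> dict:
    """JSON-encode list/dict values so they can be bound to jsonb columns."""
    return {
        key: json.dumps(value) if isinstance(value, (list, dict)) else value
        for key, value in params.items()
    }

out = serialize_params_sketch(
    {"member": "Ringo", "song": ["Don't Pass Me By", "Octopus's Garden"]}
)
print(out)  # {'member': 'Ringo', 'song': '["Don\'t Pass Me By", "Octopus\'s Garden"]'}
```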

Executing Queries in the Background

Both execute() and execute_many() accept an in_background keyword argument (False by default). If set to True, the query is executed in another thread and the method immediately returns the Thread object, i.e. the call is non-blocking. You can call join() on that object to wait for the query to finish, or just move on:

>>> db.execute("INSERT INTO beatles (id, member) VALUES (:id, :member);", {"id": 4, "member": "Klaus"}, in_background=True)
<Thread(Thread-1 (_execute), started 140067398776512)>

Beware: if you use in_background=True, make sure the connection pool is large enough to handle the number of concurrent queries, and that your program keeps running long enough if you do not explicitly wait for the thread to finish. Background execution can also lead to a range of other issues such as locking, reduced performance, or even deadlocks. You may also want to set an explicit query timeout, e.g. by passing {"options": "-c statement_timeout=60000"} for PostgreSQL when initializing the Database object, to cancel all queries taking longer than 60 seconds.
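The pattern behind in_background=True is, roughly, running the statement on a separate thread and handing the Thread object back to the caller. A stdlib sketch of that idea (a stand-in function instead of real SQL, not databank's implementation):

```python
import threading
import time

log = []

def slow_insert(params: dict) -> None:
    # Stand-in for a slow INSERT; databank would run the SQL here instead.
    time.sleep(0.1)
    log.append(params)

# Start the "query" without blocking the caller, as in_background=True does.
thread = threading.Thread(target=slow_insert, args=({"id": 4, "member": "Klaus"},))
thread.start()

# ... the caller is free to do other work here ...

thread.join()  # wait for the query when you need it to have finished
print(log)     # [{'id': 4, 'member': 'Klaus'}]
```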

Query Collection

You can also organize SQL queries in an SQL file and load them into a QueryCollection:

/* @name insert_data */
INSERT INTO beatles (id, member) VALUES (:id, :member);

/* @name select_all_data */
SELECT * FROM beatles;

This idea is borrowed from PgTyped.

A query must have a header comment with the name of the query. If a query name is not unique, the last query with the same name will be used. You can parse that file and load the queries into a QueryCollection:

>>> from databank import QueryCollection
>>> queries = QueryCollection.from_file("queries.sql")

and access the queries like entries in a dictionary:

>>> queries["insert_data"]
'INSERT INTO beatles (id, member) VALUES (:id, :member);'
>>> queries["select_all_data"]
'SELECT * FROM beatles;'
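A minimal sketch of what parsing those `@name` headers could look like (hypothetical, not the library's parser): split the file on the header comments and keep the last body seen for each name.

```python
import re

SQL = """\
/* @name insert_data */
INSERT INTO beatles (id, member) VALUES (:id, :member);

/* @name select_all_data */
SELECT * FROM beatles;
"""

def parse_queries(text: str) -> dict:
    """Split an SQL file on `/* @name ... */` headers; later names win."""
    pattern = re.compile(
        r"/\*\s*@name\s+(\w+)\s*\*/\s*(.*?)(?=/\*\s*@name|\Z)", re.S
    )
    queries = {}
    for name, body in pattern.findall(text):
        queries[name] = body.strip()  # overwrites duplicates, keeping the last
    return queries

queries = parse_queries(SQL)
print(queries["select_all_data"])  # SELECT * FROM beatles;
```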

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

databank-0.8.0.tar.gz (6.4 kB)

Uploaded Source

Built Distribution

databank-0.8.0-py3-none-any.whl (7.7 kB)

Uploaded Python 3

File details

Details for the file databank-0.8.0.tar.gz.

File metadata

  • Download URL: databank-0.8.0.tar.gz
  • Upload date:
  • Size: 6.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.4 Linux/6.5.0-1023-azure

File hashes

Hashes for databank-0.8.0.tar.gz:

  • SHA256: 67931a1a38902a838523a97fa44d29d9f2061bf23b5f6e4ed4e21b664e902e4a
  • MD5: 42bc68df242e2223c6495ec8ff11d561
  • BLAKE2b-256: 0c129b7e29846841fc602ca6d246bf179dc6e9844bf89a6e329e07e4b1cd403f


File details

Details for the file databank-0.8.0-py3-none-any.whl.

File metadata

  • Download URL: databank-0.8.0-py3-none-any.whl
  • Upload date:
  • Size: 7.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.4 Linux/6.5.0-1023-azure

File hashes

Hashes for databank-0.8.0-py3-none-any.whl:

  • SHA256: 31c998d532b462e43e5d7b2866f89fc3128dcfe671489c978d3d377ef2082c08
  • MD5: 2295a3ead98bf3825f7bec83bcbceaf0
  • BLAKE2b-256: ee02ded0581690c40c80deb5518558309ab3b6dc0bce8a49a674ca7bd877abaf

