# dd-db

Unified Relational DB abstraction layer for Python: clean adapters for 9+ databases.

Connect to any relational database and get a pandas DataFrame back, with a consistent API for schema inspection, connection management, and query timing.
## Supported Databases

| Adapter | Class | Extra |
|---|---|---|
| SQLite (stdlib) | `SQLiteDB` | (none) |
| DuckDB | `DuckDB` | `duckdb` |
| PostgreSQL | `PostgresDB` | `postgres` |
| MySQL / MariaDB | `MySQLDB` | `mysql` |
| Snowflake | `SnowflakeDB` | `snowflake` |
| Google BigQuery | `BigQueryDB` | `bigquery` |
| ClickHouse | `ClickHouseDB` | `clickhouse` |
| SQL Server | `MSSQLDB` | `mssql` |
| Oracle | `OracleDB` | `oracle` |
## Install

```bash
pip install dd-db              # SQLite only (stdlib, zero extra deps)
pip install "dd-db[duckdb]"    # + DuckDB
pip install "dd-db[postgres]"  # + PostgreSQL
pip install "dd-db[all]"       # all adapters
pip install "dd-db[dev]"       # dev tools + DuckDB
```
## Quick Start

```python
from dd_db import SQLiteDB

with SQLiteDB(":memory:") as db:
    db.run_query("CREATE TABLE t (id INT, name TEXT)")
    db.run_query("INSERT INTO t VALUES (1, 'Alice')")
    db.run_query("INSERT INTO t VALUES (2, 'Bob')")

    # SELECT returns a pandas DataFrame
    df = db.run_query("SELECT * FROM t")
    print(df)
    #    id   name
    # 0   1  Alice
    # 1   2    Bob

    # Parameterised queries
    row = db.run_query("SELECT * FROM t WHERE id = :id", params={"id": 1})

    # Schema inspection
    print(db.list_tables())      # ['t']
    print(db.describe("t"))      # DataFrame with column/type/pk columns
    schema = db.get_schema("t")  # TableSchema Pydantic model
    print(schema.row_count)      # 2

    # Timed query
    df, meta = db.timed_query("SELECT * FROM t")
    print(meta.execution_time_ms)  # e.g. 0.42
```
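Under the hood, `run_query` has to dispatch on whether the statement produced a result set. A rough, stdlib-only sketch of that dispatch (a hypothetical helper, not the library's actual implementation; it returns plain dicts where the real adapters build a pandas DataFrame):

```python
import sqlite3

def run_query(conn, sql, params=None):
    # Execute, then branch: SELECT-like statements have a cursor
    # description; DML/DDL statements do not.
    cur = conn.execute(sql, params or {})
    if cur.description is None:  # DML/DDL: no result set
        conn.commit()
        return [{"rows_affected": cur.rowcount}]
    cols = [d[0] for d in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

conn = sqlite3.connect(":memory:")
run_query(conn, "CREATE TABLE t (id INT, name TEXT)")
run_query(conn, "INSERT INTO t VALUES (:id, :name)", {"id": 1, "name": "Alice"})
rows = run_query(conn, "SELECT * FROM t")
print(rows)  # [{'id': 1, 'name': 'Alice'}]
```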
## API Reference

### Core methods (all adapters)

| Method | Returns | Description |
|---|---|---|
| `run_query(sql, params?)` | `DataFrame` | Execute SQL; SELECT → rows, DML → `{rows_affected}` |
| `list_tables(schema?)` | `list[str]` | Table names |
| `tables(schema?)` | `DataFrame` | Table names as DataFrame |
| `get_schema(table, schema?)` | `TableSchema` | Typed column metadata |
| `describe(table, schema?)` | `DataFrame` | Human-readable column info |
| `test_connection()` | `bool` | Health check |
| `connection_info()` | `ConnectionInfo` | Loggable summary (no passwords) |
| `timed_query(sql, params?)` | `(DataFrame, QueryResult)` | Query + execution metadata |
| `connect()` | `None` | Open connection |
| `disconnect()` | `None` | Close connection |
### Context manager

```python
with SQLiteDB("mydb.sqlite") as db:
    ...
# connection closed automatically
```
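The connect-on-entry, disconnect-on-exit behaviour is the standard context-manager protocol. A minimal sketch of how an adapter could wire `connect()`/`disconnect()` into it (a hypothetical class using the stdlib `sqlite3` driver, not the library's actual code):

```python
import sqlite3

class SQLiteDBSketch:
    def __init__(self, path):
        self.path = path
        self.conn = None

    def connect(self):
        self.conn = sqlite3.connect(self.path)

    def disconnect(self):
        if self.conn is not None:
            self.conn.close()
            self.conn = None

    def __enter__(self):
        self.connect()        # open on entry
        return self

    def __exit__(self, exc_type, exc, tb):
        self.disconnect()     # always close, even on error
        return False          # don't swallow exceptions

with SQLiteDBSketch(":memory:") as db:
    assert db.conn is not None  # open inside the block

# closed automatically on exit
assert db.conn is None
```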
### Pydantic models

```python
from dd_db import TableSchema, ColumnInfo, QueryResult, ConnectionInfo
```

- `ColumnInfo`: `name`, `data_type`, `nullable`, `default`, `primary_key`
- `TableSchema`: `table_name`, `schema_name`, `columns`, `row_count`, `full_name`
- `QueryResult`: `sql`, `rows_returned`, `columns`, `execution_time_ms`, `success`, `error`
- `ConnectionInfo`: `adapter`, `host`, `port`, `database`, `username`
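For illustration, two of those model shapes can be sketched with stdlib dataclasses (the real library defines them as Pydantic models with validation; field names follow the list above, and the `full_name` logic shown is an assumption):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ColumnInfo:
    name: str
    data_type: str
    nullable: bool = True
    default: Optional[str] = None
    primary_key: bool = False

@dataclass
class TableSchema:
    table_name: str
    schema_name: Optional[str] = None
    columns: list = field(default_factory=list)
    row_count: Optional[int] = None

    @property
    def full_name(self) -> str:
        # "schema.table" when a schema is set, else just the table name
        if self.schema_name:
            return f"{self.schema_name}.{self.table_name}"
        return self.table_name

schema = TableSchema("t", columns=[ColumnInfo("id", "INT", primary_key=True)], row_count=2)
print(schema.full_name)                           # t
print(TableSchema("orders", "public").full_name)  # public.orders
```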
## Examples

### DuckDB analytics

DuckDB uses `$name` for named parameters (not `:name`):

```python
from dd_db import DuckDB

with DuckDB() as db:
    df = db.run_query("""
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region
        ORDER BY total DESC
    """)
    print(df)

    # Parameterised: DuckDB uses $name style
    row = db.run_query(
        "SELECT * FROM sales WHERE region = $region AND amount > $min",
        params={"region": "North", "min": 1000.0},
    )
```
### PostgreSQL

```python
from dd_db import PostgresDB

with PostgresDB(host="localhost", database="mydb", user="me", password="pw") as db:
    df = db.run_query("SELECT * FROM orders WHERE status = :status",
                      params={"status": "pending"})
```
### Snowflake

```python
from dd_db import SnowflakeDB

with SnowflakeDB(account="xy12345", user="me", password="pw",
                 database="ANALYTICS", schema="PUBLIC",
                 warehouse="COMPUTE_WH") as db:
    df = db.run_query("SELECT TOP 100 * FROM my_table")
```
### Parameter style by adapter

Each adapter translates the `params` dict into its driver's native placeholder style:

| Adapter | SQL placeholder | Example |
|---|---|---|
| `SQLiteDB` | `:name` | `WHERE id = :id` |
| `PostgresDB` | `:name` | `WHERE id = :id` |
| `DuckDB` | `$name` | `WHERE id = $id` |
| `MySQLDB` | `%(name)s` | `WHERE id = %(id)s` |
| `SnowflakeDB` | `%(name)s` | `WHERE id = %(id)s` |
| `BigQueryDB` | `@name` | `WHERE id = @id` |
| `MSSQLDB` | `?` (positional) | `WHERE id = ?` |
| `OracleDB` | `:name` | `WHERE id = :id` |
| `ClickHouseDB` | `{name:type}` | `WHERE id = {id:Int32}` |
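SQLite's stdlib driver happens to accept both the named and the positional styles from the table, which makes the difference easy to see in isolation (plain `sqlite3`, no dd-db required):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, name TEXT)")
conn.execute("INSERT INTO t VALUES (?, ?)", (1, "Alice"))

# Named (:name) style takes a dict; positional (?) style takes a tuple.
named = conn.execute("SELECT name FROM t WHERE id = :id", {"id": 1}).fetchone()
positional = conn.execute("SELECT name FROM t WHERE id = ?", (1,)).fetchone()
assert named == positional == ("Alice",)
```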
## Cookbooks

See `cookbook/` for runnable examples:

- `01_sqlite_basics.py`: full walkthrough with SQLite (no install needed)
- `02_duckdb_basics.py`: analytics with DuckDB, including DataFrame joins
## Running Tests

```bash
pip install -e ".[dev]"
python -m pytest
```

Tests use `:memory:` SQLite; no external server required.
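The same `:memory:` pattern works for your own tests; each one gets a fresh throwaway database with nothing to clean up. A minimal stand-alone sketch against the stdlib driver:

```python
import sqlite3

def test_memory_roundtrip():
    # Fresh in-memory database per test: isolated, repeatable, fast.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INT)")
    conn.execute("INSERT INTO t VALUES (1)")
    assert conn.execute("SELECT COUNT(*) FROM t").fetchone() == (1,)

test_memory_roundtrip()  # also runnable without pytest
```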
## Design

See docs/DESIGN.md for:

- Why synchronous-first?
- Why DataFrame return type everywhere?
- Relationship to vanna.ai
- How to add a new adapter
## License

MIT
## File details

Details for the file `dd_db-0.1.2.tar.gz` (source distribution).

- Size: 24.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.5

| Algorithm | Hash digest |
|---|---|
| SHA256 | `337c7b9cc72587cc6b104ee31ffd14eb6c623d26dd9459bdfe8bb22de7703811` |
| MD5 | `f94ec411510cfb8b50f1117809870394` |
| BLAKE2b-256 | `736584036fa42c177eaa22eec3f35e836aabb99b25e499f711cbbd6469f2908f` |
Details for the file `dd_db-0.1.2-py3-none-any.whl` (built distribution).

- Size: 26.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.5

| Algorithm | Hash digest |
|---|---|
| SHA256 | `8d7821d6c237d18c6b548f4279c731a96a2067d302f21ff8158cbf09bdf74cbc` |
| MD5 | `b9ffc7314bebc195a8e5ffdbb0c5b37d` |
| BLAKE2b-256 | `2ae7a0da206fa211dd0795382154fad347edaad6cd02f0a68b0410b9ad81f267` |