
Synchronize database schema to models — document-driven project.

Project description

dbconform

Your database schema has drifted. dbconform fixes it.

Over time, databases can diverge from your SQLAlchemy models — columns get added manually, constraints go missing, a hotfix gets applied directly to the DB and never captured in code. This is database drift, and it's a real-world, compounding problem.

SQLAlchemy's create_all() only creates new tables. Alembic works well for disciplined linear migrations, but it has no answer for drift: when your database diverges from your migration history, you're on your own.
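For a concrete picture of what drift looks like, here is a stdlib-only toy that compares a live SQLite table's columns (via PRAGMA table_info) against the columns the code expects. The table and column names are invented for the example, and this is an illustration of the problem, not dbconform's implementation:

```python
import sqlite3

# Toy illustration of schema drift: the live table carries a column
# ("discount") the model never declared, and is missing one ("price")
# that the model does declare.
expected_columns = {"id", "name", "price"}  # what the code expects

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT, discount REAL)")

# PRAGMA table_info returns one row per column; row[1] is the column name.
live_columns = {row[1] for row in conn.execute("PRAGMA table_info(product)")}

missing = expected_columns - live_columns  # declared in code, absent in the DB
extra = live_columns - expected_columns    # present in the DB, unknown to the code
print(f"missing={sorted(missing)} extra={sorted(extra)}")
```

Detecting the difference is the easy half; generating and applying the correct DDL to close it is what the library does.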

dbconform inspects your live database, compares it against your SQLAlchemy (or SQLModel) models, and either tells you exactly what's wrong — or fixes it.

from dbconform import DbConform
from my_app.my_alchemy_schemas import Product, Cart # your own models

conform = DbConform(credentials={"url": "sqlite:///./mydb.sqlite"})
result = conform.apply_changes([Product, Cart])

print(f"Applied {len(result.steps)} change(s). Target database schema is conformant.")

That's it. No migration files, history table, CLI, or additional infrastructure.

✅   Supports both sync and async Python
✅   SQLite
✅   PostgreSQL
🏗️   MariaDB (planned for future development)


Why not Alembic?

Alembic is excellent when you start clean and stay disciplined. But that isn't always the situation we find ourselves in. I wanted a tool that simply fixes the problems and lets me get on with my work:

Capability                           create_all   Alembic   Atlas   dbconform
Create new tables                        ✅          ✅        ✅        ✅
Alter existing columns                   ❌          ✅        ✅        ✅
Correct schema drift (stateless)         ❌          ❌        ⚠️        ✅
Works without migration history          ✅          ❌        ✅        ✅
Pure Python, pip install                 ✅          ✅        ❌        ✅
SQLite constraint rebuild                ❌          ⚠️        ❌        ✅
Safe defaults (no accidental drops)      ✅          ⚠️        ⚠️        ✅
In-process, programmatic                 ✅          ✅        ❌        ✅

Atlas is a powerful schema platform — excellent for CI/CD pipelines and cloud drift monitoring. It's a Go CLI tool with its own infrastructure. dbconform is a Python library you call from application code.


When to use dbconform

  • You inherited a database and models, but the migrations have gone sideways.
  • You're running SQLite in development and Postgres in production, and the two have structurally diverged.
  • You want to programmatically enforce schema conformance at application startup (one of my personal favorites).
  • You don't want to manage migration history at all, as tools like Alembic require.
  • Someone ran a hotfix directly on the database and now you need to reconcile.

Installation

pip install dbconform

Optional extras:

pip install dbconform[postgres]        # PostgreSQL support (psycopg)
pip install dbconform[async]           # Async drivers (aiosqlite, asyncpg)
pip install dbconform[async,postgres]  # Both

Requirements: Python 3.11+


Quick Start

1. Define your models (SQLAlchemy or SQLModel)

from sqlalchemy import Column, Float, ForeignKey, Integer, String
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    pass

class Product(Base):
    __tablename__ = "product"
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(255), nullable=False)
    price = Column(Float, nullable=False)

class Cart(Base):
    __tablename__ = "cart"
    id = Column(Integer, primary_key=True, autoincrement=True)
    product_id = Column(Integer, ForeignKey("product.id"), nullable=False)
    quantity = Column(Integer, nullable=False)

2. Compare (dry run)

from dbconform import DbConform, ConformError

conform = DbConform(credentials={"url": "sqlite:///./mydb.sqlite"})
result = conform.compare([Product, Cart])

if isinstance(result, ConformError):
    print("Compare failed:", result.messages)
elif not result.steps:
    print("Database is up to date.")
else:
    for step in result.steps:
        print(step)
    print(result.sql())  # Full DDL script

3. Apply changes

result = conform.apply_changes([Product, Cart])

if isinstance(result, ConformError):
    print("Apply failed:", result.messages)
else:
    print(f"Applied {len(result.steps)} change(s). Schema is conformant.")

Using your own connection

from sqlalchemy import create_engine

engine = create_engine("sqlite:///./mydb.sqlite")
with engine.connect() as conn:
    conform = DbConform(connection=conn)
    result = conform.compare([Product, Cart])
engine.dispose()

PostgreSQL

conform = DbConform(
    credentials={"url": "postgresql+psycopg://user:pass@host/db"},
    target_schema="public"
)
result = conform.apply_changes([Product, Cart])

Async

import asyncio
from sqlalchemy.ext.asyncio import create_async_engine
from dbconform import AsyncDbConform, ConformError

async def main():
    engine = create_async_engine("sqlite+aiosqlite:///./mydb.sqlite")
    async with engine.connect() as conn:
        conform = AsyncDbConform(async_connection=conn)
        result = await conform.apply_changes([Product, Cart])
    await engine.dispose()

asyncio.run(main())

Safe by Default

dbconform will not drop tables or columns unless you explicitly opt in. The defaults are designed to be safe in production.

Flag Default What it controls
allow_drop_extra_tables False DROP TABLE for tables not in your models
allow_drop_extra_columns False DROP COLUMN for columns not in your models
allow_drop_extra_constraints True DROP CONSTRAINT / DROP INDEX for removed constraints
allow_shrink_column False ALTER COLUMN that reduces size (may truncate data)
allow_sqlite_table_rebuild True SQLite table rebuild for CHECK/UNIQUE/FK changes
report_extra_tables True Populate plan.extra_tables with unrecognized tables

apply_changes() additional flags:

Flag             Default   What it controls
commit_per_step  False     Commit after each step (partial progress on failure)
emit_log         True      JSON-line logs for applied steps to stdout
log_file         None      Path to append logs to a file

All flags are passed as keyword arguments:

result = conform.apply_changes(
    [Product, Cart],
    allow_drop_extra_columns=True,
    allow_shrink_column=True
)

SQLite and PostgreSQL

SQLite imposes strict limits on ALTER TABLE. Adding constraints (CHECK, UNIQUE, foreign keys) to an existing table requires rebuilding it entirely. dbconform handles this automatically — including data preservation, index recreation, and foreign key integrity — so you don't have to think about it.
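SQLite's documentation describes this as a multi-step rebuild recipe. A minimal stdlib sketch of the core move (create the new table, copy the rows, drop the old one, rename) looks like this; the table and CHECK constraint are invented for illustration, and dbconform's actual rebuild additionally recreates indexes and preserves foreign key integrity:

```python
import sqlite3

# Sketch of the rebuild SQLite requires to add a constraint in place:
# CREATE a replacement table with the desired CHECK, copy the data,
# DROP the original, and RENAME the replacement into its place.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO product VALUES (?, ?)", [(1, 9.99), (2, 19.99)])

conn.execute(
    "CREATE TABLE product_new ("
    "  id INTEGER PRIMARY KEY,"
    "  price REAL CHECK (price >= 0))"
)
conn.execute("INSERT INTO product_new SELECT id, price FROM product")
conn.execute("DROP TABLE product")
conn.execute("ALTER TABLE product_new RENAME TO product")
conn.commit()

rows = conn.execute("SELECT id, price FROM product ORDER BY id").fetchall()
print(rows)  # the data survives the rebuild; the CHECK is now enforced
```

The point of the library is that you never write this dance by hand: a model-level constraint change triggers the equivalent rebuild automatically.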

PostgreSQL uses a different DDL dialect. dbconform abstracts both behind the same API.


Contributing

Issues and pull requests are welcome. For local development:

python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev,async,postgres]"

Running tests (Docker or Podman required for PostgreSQL tests):

dbconform test run

To see the installed dbconform version:

dbconform version

See tests/TESTS_README.md for the full test organization.


License

MIT

Project details


Download files

Download the file for your platform.

Source Distribution

dbconform-0.2.3.tar.gz (36.3 kB, source)

Built Distribution


dbconform-0.2.3-py3-none-any.whl (43.7 kB, Python 3 wheel)

File details

Details for the file dbconform-0.2.3.tar.gz.

File metadata

  • Download URL: dbconform-0.2.3.tar.gz
  • Upload date:
  • Size: 36.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for dbconform-0.2.3.tar.gz:

  SHA256       40c38d3837a705560104102e0cd2886a872b7675556897384b450c009fd460b2
  MD5          36f863e9134ecc277ca7f50004155918
  BLAKE2b-256  37d00be678e04ef37bce0b81687c047d9d5b10ffe651f6bb24a20f4d90f5fb9a


File details

Details for the file dbconform-0.2.3-py3-none-any.whl.

File metadata

  • Download URL: dbconform-0.2.3-py3-none-any.whl
  • Upload date:
  • Size: 43.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for dbconform-0.2.3-py3-none-any.whl:

  SHA256       76ed7e7981b731957a9e011ce63a2f9b7f5088e74a7385ce7abc3d21694949f2
  MD5          88f9cd6bf17bd4a9a25763598ca8e827
  BLAKE2b-256  8c5112545535e9066de35055896dc9901ec183c3cf24cfeff6d1a41518f9f8fe

