
Generic async query helpers for SQLModel: filtering, eager loading, pagination

Project description

sqlmodel-object-helpers

PyPI version Python 3.14+ License: PolyForm Noncommercial

Generic async query helpers for SQLModel: filtering, eager loading, pagination, and mutations with security limits and full type safety.

Features

  • Async-First - All query and mutation functions are async, designed for AsyncSession
  • Flexible Filtering - LogicalFilter with recursive AND/OR/condition trees, plus flat dict filters with dot-notation for relationship traversal
  • 15 Operators - eq, ne, gt, lt, ge, le, in_, not_in, like, ilike, between, is, isnot, match, exists
  • Smart Eager Loading - Automatic selectinload for one-to-many and joinedload for many-to-one
  • Pagination - Page/per_page with total count and configurable max_per_page limit
  • Projections - Select specific columns across joins with dot-notation and SQL aliases
  • CRUD Mutations - add_object, update_object, delete_object plus bulk update_objects, delete_objects with flush-only semantics for composable transactions
  • Count & Exists - count_objects (single SELECT count(*)) and exists_object (SELECT EXISTS(...)) without loading data
  • Row Locking - for_update parameter on get_object for SELECT ... FOR UPDATE
  • Time Filtering - TimeFilter for created_at/updated_at range filtering with half-open interval semantics
  • Relationship Safety Check - check_for_related_records pre-deletion inspection of ONETOMANY dependencies
  • Security Limits - Configurable depth, list size, and pagination caps to prevent abuse
  • Type Safety - Full type annotations with PEP 695 generics and py.typed marker (PEP 561)
  • Dynamic Table Metadata - build_dynamic_meta derives TableMeta + list[ColumnMeta] from a SQLModel class by reading PostgreSQL pg_description with fallback to Column(comment=...). Supports per-role label/row_link overrides via || comment format, TTL-cached
  • Standalone Mode - configure() + import sqlmodel_object_helpers.standalone for auto-session usage without DI
  • Session Lifecycle Logging - Transparent session open/commit/rollback logging with hex session IDs and timing

Installation

pip install sqlmodel-object-helpers

Two usage modes:

# DI mode — caller provides session (FastAPI Depends, etc.)
import sqlmodel_object_helpers as soh

soh.get_object(session, ...)
soh.add_object(session, ...)

# Standalone mode — auto-creates session per call
import sqlmodel_object_helpers.standalone as soh_sa

soh_sa.get_object(User, pk={"id": 1})
soh_sa.add_object(User(name="Alice"))

Quick Start

from datetime import datetime

from sqlalchemy.ext.asyncio import AsyncSession
from sqlmodel import SQLModel, Field

import sqlmodel_object_helpers as soh


class User(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    name: str
    is_active: bool = True


async def example(session: AsyncSession):
    # Create
    user = await soh.add_object(session, User(name="Alice"))

    # Get one by PK
    found = await soh.get_object(session, User, pk={"id": user.id})

    # Get many with filtering
    active_users = await soh.get_objects(
        session, User,
        filters={"is_active": {soh.Operator.EQ: True}},
    )

    # Paginated
    page = await soh.get_objects(
        session, User,
        pagination=soh.Pagination(page=1, per_page=25),
    )
    # page.data -> list[User], page.pagination.total -> int

Standalone Mode

For projects that don't use FastAPI DI, or that want simple one-call-one-transaction semantics.

Important: The session factory must use expire_on_commit=False. After each standalone call the session commits and closes — with the default True, all attributes on returned objects would be expired and inaccessible (DetachedInstanceError).

from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker
import sqlmodel_object_helpers as soh
import sqlmodel_object_helpers.standalone as soh_sa

engine = create_async_engine("postgresql+asyncpg://...")
async_session_factory = async_sessionmaker(engine, expire_on_commit=False)

# 1. Configure the session factory once at startup
soh.configure(async_session_factory)

# 2. Use standalone wrappers — each call creates its own session and commits
user = await soh_sa.get_object(User, pk={"id": 1})
new_user = await soh_sa.add_object(User(name="Alice"))
await soh_sa.delete_object(User, pk={"id": 5})

# All 12 functions are available:
# Queries:  get_object, get_objects, count_objects, exists_object, get_projection
# Mutations: add_object, add_objects, update_object, update_objects,
#            delete_object, delete_objects, check_for_related_records

Each standalone call creates a session, executes the operation, commits on success, and rolls back on error. No session parameter needed.

auto_session - multi-operation transactions

When you need multiple operations in a single atomic transaction:

import sqlmodel_object_helpers as soh

async with soh.auto_session() as session:
    billing = await soh.add_object(session, Billing(...))
    payment = await soh.add_object(session, Payment(...))
    # commit happens automatically on exit
    # if any operation fails — ALL are rolled back

Session Management

DI mode — create_session_dependency()

For FastAPI projects with dependency injection:

import sqlmodel_object_helpers as soh
from typing import Annotated
from fastapi import Depends
from sqlalchemy.ext.asyncio import AsyncSession

get_session = soh.create_session_dependency(async_session_factory)
DbSession = Annotated[AsyncSession, Depends(get_session)]

@router.get("/users/{user_id}")
async def get_user(user_id: int, session: DbSession):
    return await soh.get_object(session, User, pk={"id": user_id})

One session per HTTP request. The dependency commits on success, rolls back on error.

Standalone mode — configure()

For projects without DI or for scripts/CLI:

import sqlmodel_object_helpers as soh

soh.configure(async_session_factory)
# Now standalone functions and auto_session() are available

Session Lifecycle Logging

All session operations are logged via logging.getLogger("sqlmodel_object_helpers"):

DEBUG  auto_session[1a2b3c4d] opened
DEBUG  auto_session[1a2b3c4d] committed (0.015s)
WARNING auto_session[1a2b3c4d] rollback (0.003s) — MutationError: ...
  • Hex session ID (1a2b3c4d) correlates all log lines for the same session
  • Prefix distinguishes mode: auto_session (standalone/auto_session) vs di_session (DI dependency)
  • Timing shows elapsed time from session open to commit/rollback
  • Level: DEBUG for normal flow, WARNING for rollbacks and commit failures

Enable with:

import logging
logging.getLogger("sqlmodel_object_helpers").setLevel(logging.DEBUG)

Configuration

Override security limits at application startup:

import sqlmodel_object_helpers as soh

soh.settings.max_per_page = 300
soh.settings.max_filter_depth = 50

Setting            Default  Description
max_filter_depth   100      Maximum recursion depth for AND/OR nesting
max_and_or_items   100      Maximum sub-filters in a single AND/OR list
max_load_depth     10       Maximum depth of eager-loading chains
max_in_list_size   1000     Maximum elements in an IN(...) list
max_per_page       500      Maximum value for per_page in pagination
session_factory    None     async_sessionmaker instance for standalone mode (set via soh.configure())

Query Operations

get_object

Fetch a single object by primary key or filters. Supports ORM mode (model instances with eager loading) and SQL mode (flat dicts from specific columns).

import sqlmodel_object_helpers as soh

# By primary key
user = await soh.get_object(session, User, pk={"id": 5})

# By LogicalFilter
user = await soh.get_object(
    session, User,
    filters=soh.LogicalFilter(condition={"name": {soh.Operator.EQ: "Alice"}}),
)

# With eager loading
user = await soh.get_object(
    session, User,
    pk={"id": 5},
    load_paths=["posts", "posts.comments"],
)

# Graceful mode (returns None instead of raising)
user = await soh.get_object(session, User, pk={"id": 999}, suspend_error=True)

# Row-level locking (SELECT ... FOR UPDATE)
user = await soh.get_object(session, User, pk={"id": 5}, for_update=True)

get_objects

Fetch multiple objects with filtering, pagination, sorting, and eager loading.

import sqlmodel_object_helpers as soh

# Simple filter
users = await soh.get_objects(session, User, filters={"is_active": {soh.Operator.EQ: True}})

# With pagination + sorting
result = await soh.get_objects(
    session, User,
    pagination=soh.Pagination(page=1, per_page=25),
    order_by=soh.OrderBy(sorts=[soh.OrderAsc(asc="name")]),
)
# result.data -> list[User]
# result.pagination -> PaginationR(page=1, per_page=25, total=100)

# OR logic between conditions
users = await soh.get_objects(
    session, User,
    filters={"is_active": {soh.Operator.EQ: True}, "role": {soh.Operator.EQ: "admin"}},
    logical_operator="OR",
)

# Relationship filters (dot-notation)
users = await soh.get_objects(
    session, Attempt,
    filters={"application.applicant.last_name": {soh.Operator.EQ: "Smith"}},
    load_paths=["application.applicant"],
)

# Time filtering
from datetime import datetime, timezone

recent = await soh.get_objects(
    session, User,
    time_filter=soh.TimeFilter(
        created_after=datetime(2026, 1, 1, tzinfo=timezone.utc),
        created_before=datetime(2026, 2, 1, tzinfo=timezone.utc),
    ),
)

count_objects

Return the count of records matching filters without loading any data.

import sqlmodel_object_helpers as soh

# Total count
total = await soh.count_objects(session, User)

# Filtered count
active = await soh.count_objects(session, User, filters={"is_active": {soh.Operator.EQ: True}})

# With time filter
recent = await soh.count_objects(
    session, User,
    time_filter=soh.TimeFilter(created_after=datetime(2026, 1, 1, tzinfo=timezone.utc)),
)

exists_object

Check whether at least one record matches the criteria. Uses EXISTS (SELECT ... LIMIT 1) -- the database stops at the first match.

import sqlmodel_object_helpers as soh

# By PK
found = await soh.exists_object(session, User, pk={"id": 5})

# By filter
has_admin = await soh.exists_object(session, User, filters={"role": {soh.Operator.EQ: "admin"}})

get_projection

Fetch specific columns from related tables. Returns flat dicts instead of ORM instances.

import sqlmodel_object_helpers as soh

rows = await soh.get_projection(
    session,
    Attempt,
    columns=[
        "id",
        ("application.applicant.last_name", "applicant_name"),
        ("schedule.unit.address", "unit_address"),
    ],
    outer_joins=["schedule"],
    limit=100,
)
# [{"id": 1, "applicant_name": "Smith", "unit_address": "123 Main St"}, ...]

Mutations

All mutation functions use session.flush() instead of session.commit() -- the caller manages transaction boundaries. This allows composing multiple mutations into a single atomic transaction:

import sqlmodel_object_helpers as soh

async with session.begin():
    await soh.add_object(session, billing)
    await soh.add_object(session, payment)
    # commit happens automatically when the block exits

add_object / add_objects

import sqlmodel_object_helpers as soh

# Single
user = await soh.add_object(session, User(name="Alice"))
# user.id is now populated (server-generated)

# Bulk
users = await soh.add_objects(session, [User(name="Alice"), User(name="Bob")])

update_object

import sqlmodel_object_helpers as soh

user = await soh.get_object(session, User, pk={"id": 1})
updated = await soh.update_object(session, user, {"name": "Alice Updated", "is_active": False})

Field names are validated against the model before touching the database. Raises MutationError if a key does not exist on the model.
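The validation step itself is simple set arithmetic. An illustrative sketch, not the library's implementation (the real code raises MutationError; ValueError stands in here):

```python
def validate_fields(model_fields: set[str], data: dict) -> None:
    """Reject update keys that don't exist on the model (illustrative sketch)."""
    unknown = set(data) - model_fields
    if unknown:
        # the library raises MutationError here; ValueError is a stand-in
        raise ValueError(f"unknown fields: {sorted(unknown)}")

validate_fields({"id", "name", "is_active"}, {"name": "Alice Updated"})  # passes
```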

delete_object

import sqlmodel_object_helpers as soh

# By instance
await soh.delete_object(session, User, instance=user)

# By PK
await soh.delete_object(session, User, pk={"id": 5})

update_objects / delete_objects

Bulk operations that issue a single SQL statement without loading objects:

import sqlmodel_object_helpers as soh

# Bulk update: UPDATE users SET is_active=False WHERE role='guest'
count = await soh.update_objects(
    session, User,
    data={"is_active": False},
    filters={"role": {soh.Operator.EQ: "guest"}},
)
# count -> number of rows updated

# Bulk delete: DELETE FROM users WHERE is_active=False
count = await soh.delete_objects(
    session, User,
    filters={"is_active": {soh.Operator.EQ: False}},
)
# count -> number of rows deleted

Filters are required for safety -- empty filters are rejected. Dot-notation (relationship) filters are not supported in bulk operations.

check_for_related_records

Pre-deletion check that inspects all ONETOMANY relationships:

import sqlmodel_object_helpers as soh

deps = await soh.check_for_related_records(session, Organization, pk={"id": 1})
if deps:
    print(deps)
    # ["Related record found in 'Unit (units_lkp)' (id=1)", ...]

Returns None if no related records exist, or a list of human-readable dependency descriptions.

Filtering

LogicalFilter (AND/OR/condition)

Recursive filter structure used by get_object. Exactly one of AND, OR, or condition must be set:

import sqlmodel_object_helpers as soh

# Simple condition
f = soh.LogicalFilter(condition={"status_id": {soh.Operator.EQ: 10}})

# OR
f = soh.LogicalFilter(OR=[
    soh.LogicalFilter(condition={"status_id": {soh.Operator.EQ: 10}}),
    soh.LogicalFilter(condition={"is_blocked": {soh.Operator.EQ: True}}),
])

# Nested AND + OR
f = soh.LogicalFilter(AND=[
    soh.LogicalFilter(condition={"is_active": {soh.Operator.EQ: True}}),
    soh.LogicalFilter(OR=[
        soh.LogicalFilter(condition={"role": {soh.Operator.EQ: "admin"}}),
        soh.LogicalFilter(condition={"role": {soh.Operator.EQ: "manager"}}),
    ]),
])

Flat dict filters

Used by get_objects. Nested dicts are auto-flattened via flatten_filters:

# Nested form (auto-flattened)
{"application": {"applicant": {"last_name": {soh.Operator.EQ: "test"}}}}

# Equivalent flat form (dot-notation)
{"application.applicant.last_name": {soh.Operator.EQ: "test"}}
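The transformation flatten_filters performs can be sketched with a small recursive helper. This is an illustration under the assumption that a nested dict counts as a relationship step unless its keys are operator names; it is not the library's code:

```python
OPERATOR_NAMES = {
    "eq", "ne", "gt", "lt", "ge", "le", "in_", "not_in",
    "like", "ilike", "between", "is", "isnot", "match", "exists",
}

def flatten(filters: dict, prefix: str = "") -> dict:
    """Illustrative sketch of dot-notation flattening (not the library's code)."""
    flat = {}
    for key, value in filters.items():
        path = f"{prefix}.{key}" if prefix else key
        # a dict whose keys are not operator names is a relationship step
        if isinstance(value, dict) and not (value.keys() & OPERATOR_NAMES):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat

nested = {"application": {"applicant": {"last_name": {"eq": "test"}}}}
assert flatten(nested) == {"application.applicant.last_name": {"eq": "test"}}
```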

Operators

Enum member       SQL                   Example
Operator.EQ       =                     {soh.Operator.EQ: 10}
Operator.NE       !=                    {soh.Operator.NE: 0}
Operator.GT       >                     {soh.Operator.GT: 5}
Operator.LT       <                     {soh.Operator.LT: 100}
Operator.GE       >=                    {soh.Operator.GE: 1}
Operator.LE       <=                    {soh.Operator.LE: 50}
Operator.IN       IN (...)              {soh.Operator.IN: [1, 2, 3]}
Operator.NOT_IN   NOT IN (...)          {soh.Operator.NOT_IN: [4, 5]}
Operator.LIKE     LIKE                  {soh.Operator.LIKE: "%test%"}
Operator.ILIKE    ILIKE                 {soh.Operator.ILIKE: "%test%"}
Operator.BETWEEN  BETWEEN               {soh.Operator.BETWEEN: [1, 10]}
Operator.IS       IS                    {soh.Operator.IS: None}
Operator.IS_NOT   IS NOT                {soh.Operator.IS_NOT: None}
Operator.MATCH    MATCH                 {soh.Operator.MATCH: "query"}
"exists"          IS NOT NULL / .any()  {"exists": True}

Operator is a StrEnum — each member equals its string value (Operator.EQ == "eq"), so string keys still work but enum members provide type safety and autocompletion.

Multiple operators can be combined on a single field: {"age": {soh.Operator.GE: 18, soh.Operator.LE: 65}}.

Typed Filter Models

Pydantic models for type-safe filter schemas in API endpoints:

import sqlmodel_object_helpers as soh
from pydantic import BaseModel

class UserFilter(BaseModel):
    age: soh.FilterInt | None = None       # eq, ne, gt, lt, ge, le, in_
    name: soh.FilterStr | None = None      # eq, ne, like, ilike
    is_active: soh.FilterBool | None = None  # eq, ne
    posts: soh.FilterExists | None = None  # exists: bool

Available: FilterInt, FilterStr, FilterDate, FilterDatetime, FilterNaiveDatetime, FilterTimedelta, FilterBool, FilterExists.

Range Filters

FilterDatetimeRange and FilterNaiveDatetimeRange parse a comma-separated date string into gt/lt operators:

import sqlmodel_object_helpers as soh

# Full range: "FROM,TO" → gt + lt
f = soh.FilterDatetimeRange.model_validate("2026-04-01,2026-05-06")
f.model_dump(exclude_none=True)
# {"gt": datetime(2026, 4, 1, tzinfo=UTC), "lt": datetime(2026, 5, 6, tzinfo=UTC)}
# SQL: WHERE field > '2026-04-01' AND field < '2026-05-06'

# Open end: "FROM," → gt only
f = soh.FilterDatetimeRange.model_validate("2026-04-01,")
# SQL: WHERE field > '2026-04-01'

# Open start: ",TO" → lt only
f = soh.FilterDatetimeRange.model_validate(",2026-05-06")
# SQL: WHERE field < '2026-05-06'

Each date part accepts ISO (2026-04-01, 2026-04-01T00:00:00Z) and display format (01.04.2026 00:00).

  • FilterDatetimeRange — produces UTC-aware datetimes
  • FilterNaiveDatetimeRange — produces naive (timezone-unaware) datetimes
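Assuming the display format corresponds to the strptime pattern "%d.%m.%Y %H:%M" (an assumption; the library documents the format only by example), both part formats name the same instant:

```python
from datetime import datetime

# ISO part vs display part (pattern assumed to be "%d.%m.%Y %H:%M")
iso = datetime.fromisoformat("2026-04-01T00:00:00")
display = datetime.strptime("01.04.2026 00:00", "%d.%m.%Y %H:%M")
assert iso == display
```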

Eager Loading

The library automatically selects the optimal loading strategy:

  • selectinload for one-to-many (uselist=True) -- avoids cartesian products
  • joinedload for many-to-one (uselist=False) -- single-query efficiency

Dot-notation chaining resolves each level independently:

# Each segment gets the optimal strategy
load_paths=["application.applicant"]
# application -> joinedload (many-to-one)
# applicant   -> joinedload (many-to-one)

load_paths=["comments"]
# comments -> selectinload (one-to-many)

Maximum chain depth is controlled by settings.max_load_depth (default: 10).

Pagination & Sorting

Pagination

import sqlmodel_object_helpers as soh

result = await soh.get_objects(
    session, User,
    pagination=soh.Pagination(page=2, per_page=25),
)
result.data         # list[User]
result.pagination   # PaginationR(page=2, per_page=25, total=150)

per_page is validated against settings.max_per_page (default: 500).
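PaginationR carries total but not a page count; deriving the last page is one line:

```python
import math

# From a PaginationR-style response: total=150 rows, per_page=25
total, per_page = 150, 25
last_page = math.ceil(total / per_page)
assert last_page == 6
```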

Sorting

import sqlmodel_object_helpers as soh

order = soh.OrderBy(sorts=[
    soh.OrderAsc(asc="last_name"),
    soh.OrderDesc(desc="created_at"),
])

result = await soh.get_objects(session, User, order_by=order)

Dynamic Table Metadata

build_dynamic_meta builds TableMeta + list[ColumnMeta] from a SQLModel class at runtime, reading PostgreSQL pg_description (TTL-cached) with fallback to Column(comment=...) model defaults.

import sqlmodel_object_helpers as soh

# Physical columns — label, type, lookup derived automatically
table_meta, columns = await soh.build_dynamic_meta(
    session,
    EmailMessage,
    name="email_messages",
    columns=[
        "id",                        # physical column — everything derived
        "email_type_id",             # FK on *_lkp → lookup_dict auto-derived
        "updated_at",
        soh.ColumnMeta(              # virtual column — fully specified
            json_path="last_editor.user_name",
            label="Редактор",
            type=soh.ColumnType.STRING,
        ),
    ],
    role_type="operator",            # per-role label/row_link overrides
)

# Returns (TableMeta, list[ColumnMeta]) ready for GetAllPagination
return soh.GetAllPagination(
    table=table_meta,
    columns=columns,
    data=items,
    pagination=pagination_r,
)

Type derivation (SA type → ColumnType)

Physical column types are mapped automatically. Unknown types fall back to string.

SQLAlchemy type                            ColumnType
Boolean                                    boolean
DateTime                                   datetime
Date                                       date
Integer, BigInteger, SmallInteger          integer
Numeric, Float                             float
String, Text, Enum, Interval, ARRAY, Uuid  string
Any other type                             string (fallback)

Label precedence (per physical column)

  1. pg_description — DBA edit via COMMENT ON COLUMN (supports per-role || overrides)
  2. Column(comment=...) — Python-side default in the model
  3. ValueError — fail-loud if both are missing

Per-role overrides via || format

DBAs can set role-specific labels and row links in PostgreSQL comments:

COMMENT ON COLUMN emails.email_type_id IS 'Email type||operator=Category||buh=Requisite';
COMMENT ON TABLE  emails               IS 'Emails||row_link=/email/$id||row_link.operator=/op/email/$id';

TTL cache

pg_description queries are cached per (schema, table) with a configurable TTL (default 60s):

soh.configure_meta_cache_ttl(120)       # change TTL to 120 seconds
soh.invalidate_meta_cache()             # clear entire cache
soh.invalidate_meta_cache(schema="lead")  # clear all entries for a schema
soh.invalidate_meta_cache("lead", "emails")  # clear one entry

Standalone mode

import sqlmodel_object_helpers.standalone as soh_sa

table_meta, columns = await soh_sa.build_dynamic_meta(
    EmailMessage,
    name="email_messages",
    columns=["id", "email_type_id"],
)

Backward compatibility and migration

GetAllPagination is fully backward compatible — table and columns default to None, so existing endpoints continue to work without changes:

# Before (still works as-is)
return soh.GetAllPagination(data=items, pagination=pagination_r)

# After (meta added when ready)
return soh.GetAllPagination(
    table=table_meta,
    columns=columns,
    data=items,
    pagination=pagination_r,
)

Endpoints can be migrated one at a time. Three strategies per endpoint:

Strategy            table/columns                       Use case
No meta             Always None                         Endpoint not yet migrated; frontend uses a hardcoded table
Always meta         Sent on every request               Simple; no frontend caching logic needed
Meta on first page  Sent when page=1, None on pages 2+  Saves traffic; frontend caches meta from the first response
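The "meta on first page" strategy reduces to a page check. A minimal sketch, where build stands in for the (awaited) soh.build_dynamic_meta call a real endpoint would make:

```python
def meta_for_page(page: int, build):
    """Send (table_meta, columns) only on page 1; (None, None) otherwise (sketch)."""
    return build() if page == 1 else (None, None)

# In a real endpoint, build would wrap soh.build_dynamic_meta(...)
table_meta, columns = meta_for_page(1, lambda: ({"name": "users"}, ["id"]))
assert meta_for_page(2, lambda: ({"name": "users"}, ["id"])) == (None, None)
```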

API Reference

Session Management

  • soh.configure(factory) -- Register async_sessionmaker for standalone mode
  • soh.auto_session() -- Async context manager: creates session, commits on success, rolls back on error
  • soh.create_session_dependency(factory) -- Create async generator for FastAPI Depends

Standalone Wrappers

  • import sqlmodel_object_helpers.standalone as soh_sa -- All 12 query/mutation functions plus build_dynamic_meta without session parameter

Settings

  • soh.settings -- Module-level QueryHelperSettings instance (mutable at runtime)
  • soh.QueryHelperSettings -- Pydantic model for security/performance limits

Query Functions

  • soh.get_object(session, model, ...) -- Single object by PK or filters (supports for_update row locking)
  • soh.get_objects(session, model, ...) -- Multiple objects with filtering, pagination, sorting, time_filter
  • soh.count_objects(session, model, ...) -- Count matching records (SELECT count(*))
  • soh.exists_object(session, model, ...) -- Check existence (SELECT EXISTS(...))
  • soh.get_projection(session, model, columns, ...) -- Column projection across joins

Mutation Functions

  • soh.add_object(session, instance) -- Add single, flush + refresh
  • soh.add_objects(session, instances) -- Add multiple, flush + refresh each
  • soh.update_object(session, instance, data) -- Update fields from dict
  • soh.update_objects(session, model, data, filters) -- Bulk update via single UPDATE ... WHERE
  • soh.delete_object(session, model, *, instance, pk) -- Delete by reference or PK
  • soh.delete_objects(session, model, filters) -- Bulk delete via single DELETE ... WHERE
  • soh.check_for_related_records(session, model, pk) -- Pre-deletion dependency check

Dynamic Meta

  • soh.build_dynamic_meta(session, model, *, name, columns, header, row_link, role_type) -- Build TableMeta + list[ColumnMeta] from model + pg_description
  • soh.load_pg_comments(session, schema, table) -- Read (table_comment, {col: comment}) from pg_description (TTL-cached)
  • soh.configure_meta_cache_ttl(seconds) -- Override default pg_description cache TTL (default: 60s)
  • soh.invalidate_meta_cache(schema, table) -- Manually invalidate the pg_description cache (full, by-schema, or by-table)
  • soh.ColumnEntry -- Type alias: str | ColumnMeta (element of columns list in build_dynamic_meta)

Filter Builders

  • soh.build_filter(model, filters, ...) -- Build SQLAlchemy expression from LogicalFilter dict
  • soh.build_flat_filter(model, filters, ...) -- Build from flat dot-notation dict (aliased joins)
  • soh.flatten_filters(filters) -- Convert nested dicts to flat dot-notation

Operators

  • soh.Operator -- StrEnum of operator names
  • soh.SUPPORTED_OPERATORS -- Dict mapping operator names to SQLAlchemy lambdas
  • soh.SPECIAL_OPERATORS -- Frozenset of operators with extended handling ({"exists"})

Loaders

  • soh.build_load_chain(model, path, ...) -- Build loader option from string/list path
  • soh.build_load_options(model, attrs, *, suspend_error=False) -- Build single loader option chain

Exceptions

  • soh.QueryError -- Base exception (has status_code attribute for HTTP mapping)
  • soh.ObjectNotFoundError -- 404 Not Found
  • soh.InvalidFilterError -- 400 Bad Request
  • soh.InvalidLoadPathError -- 400 Bad Request
  • soh.DatabaseError -- 500 Internal Server Error
  • soh.MutationError -- 400 Bad Request
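Because every exception carries status_code, mapping to HTTP responses is mechanical. A sketch using stand-in classes (the real ones live in the library; a FastAPI exception handler would do the same lookup):

```python
class QueryError(Exception):            # stand-in for soh.QueryError
    status_code = 500

class ObjectNotFoundError(QueryError):  # stand-in for soh.ObjectNotFoundError
    status_code = 404

def to_http(exc: QueryError) -> tuple[int, str]:
    """Turn a QueryError into a (status, detail) pair for an HTTP response."""
    return exc.status_code, str(exc)

assert to_http(ObjectNotFoundError("User id=999 not found")) == (404, "User id=999 not found")
```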

Filter Types

  • soh.FilterInt, soh.FilterStr, soh.FilterDate, soh.FilterDatetime, soh.FilterNaiveDatetime, soh.FilterTimedelta, soh.FilterBool, soh.FilterExists
  • soh.FilterDatetimeRange, soh.FilterNaiveDatetimeRange -- Range filters that parse "FROM,TO" strings into gt/lt operators
  • soh.OrderAsc, soh.OrderDesc, soh.OrderBy
  • soh.LogicalFilter
  • soh.TimeFilter -- created_after, created_before, updated_after, updated_before with half-open interval [after, before)
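The half-open [after, before) convention means the after boundary matches and the before boundary does not; assuming those documented semantics:

```python
from datetime import datetime, timezone

after = datetime(2026, 1, 1, tzinfo=timezone.utc)
before = datetime(2026, 2, 1, tzinfo=timezone.utc)

# A row created exactly at `after` is included; exactly at `before` is excluded
assert after <= after < before          # boundary at `after`: in
assert not (after <= before < before)   # boundary at `before`: out
```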

Datetime Types

  • soh.UTCDatetime -- Annotated[datetime, AfterValidator] that rejects naive datetimes and converts aware datetimes to UTC

Pagination Types

  • soh.Pagination -- Request model (page, per_page)
  • soh.PaginationR -- Response model (page, per_page, total)
  • soh.GetAllPagination[T] -- Generic wrapper (table: TableMeta | None, columns: list[ColumnMeta] | None, data: list[T], pagination: PaginationR | None)

Table Metadata Types

  • soh.TableMeta -- Table-level metadata (name, header, row_link)
  • soh.ColumnMeta -- Column metadata (json_path, label, type, lookup_dict, lookup_path, bool_labels)
  • soh.ColumnType -- StrEnum of column data types (string, integer, float, boolean, date, datetime)
  • soh.BoolLabels -- Display labels for boolean columns (true_label, false_label)

Lookup Types

  • soh.LookupMeta -- Lookup metadata (name used as cache key, matches ColumnMeta.lookup_dict)
  • soh.LookupResponse[T] -- Generic wrapper for lookup endpoints (meta: LookupMeta, data: list[T])

Projection Types

  • soh.ColumnSpec -- Type alias: str | tuple[str, str]

Development

pip install -e ".[dev]"
pytest tests/

Syncing with Remote

# Fetch commits and tags
git pull origin main --tags

# If local branch is behind remote
git reset --hard origin/main

Verify sync status

git log --oneline origin/main -5
git diff origin/main

CI/CD

The pipeline consists of two stages:

  1. auto_tag — on push to main, reads __version__ from __init__.py and automatically creates a tag (if it doesn't exist)

  2. mirror_to_github — on tag creation, mirrors the repository to GitHub (removing .gitlab-ci.yml)

Release flow

  1. Update __version__ in src/sqlmodel_object_helpers/__init__.py
  2. Push to main
  3. CI creates tag → mirrors to GitHub → publishes to PyPI

Versioning

This project follows Semantic Versioning (MAJOR.MINOR.PATCH):

  • PATCH — bug fixes, documentation, metadata
  • MINOR — new features (backwards compatible)
  • MAJOR — breaking changes

Changelog

0.0.6

  • build_dynamic_meta / load_pg_comments / configure_meta_cache_ttl / invalidate_meta_cache — dynamic UI metadata reader. Builds TableMeta and list[ColumnMeta] from a SQLModel class by reading PostgreSQL pg_description (TTL-cached, default 60s), falling back to Column(comment=...) defaults from the model. Supports per-role label/row_link overrides via the extended || comment format. For physical columns the label precedence is pg_description > Column(comment=...). For virtual columns (paths traversing relationships with .) and for full overrides, pass a fully specified ColumnMeta directly in the columns list; it is used as-is with no derivation. lookup_dict/lookup_path are derived from FKs on *_lkp tables via the {schema}_{base} convention, and types from the SA column type. DBAs can edit labels with COMMENT ON COLUMN/TABLE SQL; the frontend reflects changes within the cache TTL without a redeploy. Introduces no sync block and no schema migrations; the application's db.py is not touched.

0.0.5

  • FilterDatetimeRange / FilterNaiveDatetimeRange — range filters that parse comma-separated date strings ("2026-04-01,2026-05-06") into gt/lt operators for SQL filtering
  • ColumnMeta / TableMeta / ColumnType / BoolLabels — dynamic table metadata: backend describes columns, types, labels, lookups, and row navigation so the frontend renders any table without hardcoding
  • GetAllPagination — now includes optional table and columns fields for delivering table metadata alongside paginated data
  • LookupMeta / LookupResponse — standard {meta, data} wrapper for lookup (_lkp) endpoints, enabling unified frontend caching of dictionaries with meta.name as cache key

0.0.4

  • Initial public release

License

This project is licensed under the PolyForm Noncommercial License 1.0.0.

You may use this software for noncommercial purposes only.

See LICENSE for the full license text, or visit polyformproject.org.

Download files


Source Distribution

sqlmodel_object_helpers-0.0.7.tar.gz (99.4 kB)

Uploaded Source

Built Distribution


sqlmodel_object_helpers-0.0.7-py3-none-any.whl (47.6 kB)

Uploaded Python 3

File details

Details for the file sqlmodel_object_helpers-0.0.7.tar.gz.

File metadata

  • Download URL: sqlmodel_object_helpers-0.0.7.tar.gz
  • Size: 99.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for sqlmodel_object_helpers-0.0.7.tar.gz
Algorithm Hash digest
SHA256 813a72940a4fbd23c603e819037e4d9e7cfe385414738cad38fa4cb0f3a2ee57
MD5 18f3f0cf0bcc0f2cd3f81931576a785a
BLAKE2b-256 c8ce2bc8d67a11e9b01835c74022945c7e7b19338190fb39198dd99574b8f0ea


Provenance

The following attestation bundles were made for sqlmodel_object_helpers-0.0.7.tar.gz:

Publisher: publish.yml on itstandart/sqlmodel-object-helpers

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file sqlmodel_object_helpers-0.0.7-py3-none-any.whl.

File metadata

File hashes

Hashes for sqlmodel_object_helpers-0.0.7-py3-none-any.whl
Algorithm Hash digest
SHA256 ad82227dd25294da954b0c33dfb8639eb7da77da806390706fc2debbc4e83da3
MD5 c7f533ba0876a34b56ef06843b5b8ac4
BLAKE2b-256 f3c6306ec4daa68cb04ed4dc7a00f2cb28cd618b3bd7a20128c0a3487a812dcc


Provenance

The following attestation bundles were made for sqlmodel_object_helpers-0.0.7-py3-none-any.whl:

Publisher: publish.yml on itstandart/sqlmodel-object-helpers

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
