sqlmodel-object-helpers

Generic async query helpers for SQLModel: filtering, eager loading, pagination, and mutations with security limits and full type safety.
Features
- Async-First - All query and mutation functions are async, designed for `AsyncSession`
- Flexible Filtering - `LogicalFilter` with recursive AND/OR/condition trees, plus flat dict filters with dot-notation for relationship traversal
- 15 Operators - `eq`, `ne`, `gt`, `lt`, `ge`, `le`, `in_`, `not_in`, `like`, `ilike`, `between`, `is`, `isnot`, `match`, `exists`
- Smart Eager Loading - Automatic `selectinload` for one-to-many and `joinedload` for many-to-one
- Pagination - Page/per_page with total count and configurable `max_per_page` limit
- Projections - Select specific columns across joins with dot-notation and SQL aliases
- CRUD Mutations - `add_object`, `update_object`, `delete_object` plus bulk `update_objects`, `delete_objects` with flush-only semantics for composable transactions
- Count & Exists - `count_objects` (single `SELECT count(*)`) and `exists_object` (`SELECT EXISTS(...)`) without loading data
- Row Locking - `for_update` parameter on `get_object` for `SELECT ... FOR UPDATE`
- Time Filtering - `TimeFilter` for `created_at`/`updated_at` range filtering with half-open interval semantics
- Relationship Safety Check - `check_for_related_records` pre-deletion inspection of ONETOMANY dependencies
- Security Limits - Configurable depth, list size, and pagination caps to prevent abuse
- Type Safety - Full type annotations with PEP 695 generics and `py.typed` marker (PEP 561)
- Dynamic Table Metadata - `build_dynamic_meta` derives `TableMeta` + `list[ColumnMeta]` from a SQLModel class by reading PostgreSQL `pg_description` with fallback to `Column(comment=...)`. Supports per-role label/row_link overrides via `||` comment format, TTL-cached
- Standalone Mode - `configure()` + `import sqlmodel_object_helpers.standalone` for auto-session usage without DI
- Session Lifecycle Logging - Transparent session open/commit/rollback logging with hex session IDs and timing
Installation
```shell
pip install sqlmodel-object-helpers
```

Two usage modes:

```python
# DI mode — caller provides session (FastAPI Depends, etc.)
import sqlmodel_object_helpers as soh

await soh.get_object(session, ...)
await soh.add_object(session, ...)

# Standalone mode — auto-creates session per call
import sqlmodel_object_helpers.standalone as soh_sa

await soh_sa.get_object(User, pk={"id": 1})
await soh_sa.add_object(User(name="Alice"))
```
Quick Start
```python
from sqlalchemy.ext.asyncio import AsyncSession
from sqlmodel import Field, SQLModel

import sqlmodel_object_helpers as soh


class User(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    name: str
    is_active: bool = True


async def example(session: AsyncSession):
    # Create
    user = await soh.add_object(session, User(name="Alice"))

    # Get one by PK
    found = await soh.get_object(session, User, pk={"id": user.id})

    # Get many with filtering
    active_users = await soh.get_objects(
        session, User,
        filters={"is_active": {soh.Operator.EQ: True}},
    )

    # Paginated
    page = await soh.get_objects(
        session, User,
        pagination=soh.Pagination(page=1, per_page=25),
    )
    # page.data -> list[User], page.pagination.total -> int
```
Standalone Mode
For projects that don't use FastAPI DI or need simple one-call-one-transaction semantics.
Important: the session factory must use `expire_on_commit=False`. After each standalone call the session commits and closes — with the default `True`, all attributes on returned objects would be expired and inaccessible (`DetachedInstanceError`).
```python
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine

import sqlmodel_object_helpers as soh
import sqlmodel_object_helpers.standalone as soh_sa

engine = create_async_engine("postgresql+asyncpg://...")
async_session_factory = async_sessionmaker(engine, expire_on_commit=False)

# 1. Configure the session factory once at startup
soh.configure(async_session_factory)

# 2. Use standalone wrappers — each call creates its own session and commits
user = await soh_sa.get_object(User, pk={"id": 1})
new_user = await soh_sa.add_object(User(name="Alice"))
await soh_sa.delete_object(User, pk={"id": 5})

# All 12 functions are available:
# Queries:   get_object, get_objects, count_objects, exists_object, get_projection
# Mutations: add_object, add_objects, update_object, update_objects,
#            delete_object, delete_objects, check_for_related_records
```

Each standalone call creates a session, executes the operation, commits on success, and rolls back on error. No session parameter needed.
auto_session - multi-operation transactions
When you need multiple operations in a single atomic transaction:
```python
import sqlmodel_object_helpers as soh

async with soh.auto_session() as session:
    billing = await soh.add_object(session, Billing(...))
    payment = await soh.add_object(session, Payment(...))
    # commit happens automatically on exit
    # if any operation fails — ALL are rolled back
```
Session Management
DI mode — create_session_dependency()
For FastAPI projects with dependency injection:
```python
from typing import Annotated

from fastapi import Depends
from sqlalchemy.ext.asyncio import AsyncSession

import sqlmodel_object_helpers as soh

get_session = soh.create_session_dependency(async_session_factory)
DbSession = Annotated[AsyncSession, Depends(get_session)]


@router.get("/users/{user_id}")
async def get_user(user_id: int, session: DbSession):
    return await soh.get_object(session, User, pk={"id": user_id})
```
One session per HTTP request. The dependency commits on success, rolls back on error.
Standalone mode — configure()
For projects without DI or for scripts/CLI:
```python
import sqlmodel_object_helpers as soh

soh.configure(async_session_factory)
# Now standalone functions and auto_session() are available
```
Session Lifecycle Logging
All session operations are logged via `logging.getLogger("sqlmodel_object_helpers")`:

```
DEBUG   auto_session[1a2b3c4d] opened
DEBUG   auto_session[1a2b3c4d] committed (0.015s)
WARNING auto_session[1a2b3c4d] rollback (0.003s) — MutationError: ...
```

- Hex session ID (`1a2b3c4d`) correlates all log lines for the same session
- Prefix distinguishes mode: `auto_session` (standalone/`auto_session`) vs `di_session` (DI dependency)
- Timing shows elapsed time from session open to commit/rollback
- Level: `DEBUG` for normal flow, `WARNING` for rollbacks and commit failures

Enable with:

```python
import logging

logging.getLogger("sqlmodel_object_helpers").setLevel(logging.DEBUG)
```
Configuration
Override security limits at application startup:

```python
import sqlmodel_object_helpers as soh

soh.settings.max_per_page = 300
soh.settings.max_filter_depth = 50
```

| Setting | Default | Description |
|---|---|---|
| `max_filter_depth` | 100 | Maximum recursion depth for AND/OR nesting |
| `max_and_or_items` | 100 | Maximum sub-filters in a single AND/OR list |
| `max_load_depth` | 10 | Maximum depth of eager-loading chains |
| `max_in_list_size` | 1000 | Maximum elements in an `IN (...)` list |
| `max_per_page` | 500 | Maximum value for `per_page` in pagination |
| `session_factory` | None | `async_sessionmaker` instance for standalone mode (set via `soh.configure()`) |
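To give a feel for what `max_filter_depth` guards against, here is a sketch of a recursive depth check over nested AND/OR filter trees. The function name and node shape are illustrative, not the library's internals:

```python
def check_filter_depth(node: dict, max_depth: int = 100, _depth: int = 1) -> None:
    """Sketch: walk AND/OR branches and fail loudly past the configured depth."""
    if _depth > max_depth:
        raise ValueError(f"filter nesting exceeds max_filter_depth={max_depth}")
    for key in ("AND", "OR"):
        for child in node.get(key, []) or []:
            check_filter_depth(child, max_depth, _depth + 1)

check_filter_depth({"AND": [{"OR": [{"condition": {"id": {"eq": 1}}}]}]})  # OK
try:
    check_filter_depth({"AND": [{"AND": [{}]}]}, max_depth=2)
except ValueError as e:
    print(e)  # filter nesting exceeds max_filter_depth=2
```

Rejecting (rather than silently truncating) over-deep trees keeps malformed or hostile payloads from turning into pathological SQL.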
Query Operations
get_object
Fetch a single object by primary key or filters. Supports ORM mode (model instances with eager loading) and SQL mode (flat dicts from specific columns).
```python
import sqlmodel_object_helpers as soh

# By primary key
user = await soh.get_object(session, User, pk={"id": 5})

# By LogicalFilter
user = await soh.get_object(
    session, User,
    filters=soh.LogicalFilter(condition={"name": {soh.Operator.EQ: "Alice"}}),
)

# With eager loading
user = await soh.get_object(
    session, User,
    pk={"id": 5},
    load_paths=["posts", "posts.comments"],
)

# Graceful mode (returns None instead of raising)
user = await soh.get_object(session, User, pk={"id": 999}, suspend_error=True)

# Row-level locking (SELECT ... FOR UPDATE)
user = await soh.get_object(session, User, pk={"id": 5}, for_update=True)
```
get_objects
Fetch multiple objects with filtering, pagination, sorting, and eager loading.
```python
from datetime import datetime, timezone

import sqlmodel_object_helpers as soh

# Simple filter
users = await soh.get_objects(session, User, filters={"is_active": {soh.Operator.EQ: True}})

# With pagination + sorting
result = await soh.get_objects(
    session, User,
    pagination=soh.Pagination(page=1, per_page=25),
    order_by=soh.OrderBy(sorts=[soh.OrderAsc(asc="name")]),
)
# result.data -> list[User]
# result.pagination -> PaginationR(page=1, per_page=25, total=100)

# OR logic between conditions
users = await soh.get_objects(
    session, User,
    filters={"is_active": {soh.Operator.EQ: True}, "role": {soh.Operator.EQ: "admin"}},
    logical_operator="OR",
)

# Relationship filters (dot-notation)
attempts = await soh.get_objects(
    session, Attempt,
    filters={"application.applicant.last_name": {soh.Operator.EQ: "Smith"}},
    load_paths=["application.applicant"],
)

# Time filtering
recent = await soh.get_objects(
    session, User,
    time_filter=soh.TimeFilter(
        created_after=datetime(2026, 1, 1, tzinfo=timezone.utc),
        created_before=datetime(2026, 2, 1, tzinfo=timezone.utc),
    ),
)
```
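`TimeFilter` uses half-open intervals `[after, before)`. In plain Python that boundary behavior corresponds to the following sketch (the helper name is illustrative):

```python
from datetime import datetime, timezone

def in_half_open(t, after=None, before=None):
    """Sketch of [after, before): lower bound inclusive, upper bound exclusive."""
    if after is not None and t < after:
        return False
    if before is not None and t >= before:
        return False
    return True

jan1 = datetime(2026, 1, 1, tzinfo=timezone.utc)
feb1 = datetime(2026, 2, 1, tzinfo=timezone.utc)

print(in_half_open(jan1, after=jan1, before=feb1))  # True: lower bound included
print(in_half_open(feb1, after=jan1, before=feb1))  # False: upper bound excluded
```

Half-open intervals make adjacent ranges (January, then February) partition time with no gap and no double-counting.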
count_objects
Return the count of records matching filters without loading any data.
```python
from datetime import datetime, timezone

import sqlmodel_object_helpers as soh

# Total count
total = await soh.count_objects(session, User)

# Filtered count
active = await soh.count_objects(session, User, filters={"is_active": {soh.Operator.EQ: True}})

# With time filter
recent = await soh.count_objects(
    session, User,
    time_filter=soh.TimeFilter(created_after=datetime(2026, 1, 1, tzinfo=timezone.utc)),
)
```
exists_object
Check whether at least one record matches the criteria. Uses `EXISTS (SELECT ... LIMIT 1)` -- the database stops at the first match.

```python
import sqlmodel_object_helpers as soh

# By PK
found = await soh.exists_object(session, User, pk={"id": 5})

# By filter
has_admin = await soh.exists_object(session, User, filters={"role": {soh.Operator.EQ: "admin"}})
```
get_projection
Fetch specific columns from related tables. Returns flat dicts instead of ORM instances.
```python
import sqlmodel_object_helpers as soh

rows = await soh.get_projection(
    session,
    Attempt,
    columns=[
        "id",
        ("application.applicant.last_name", "applicant_name"),
        ("schedule.unit.address", "unit_address"),
    ],
    outer_joins=["schedule"],
    limit=100,
)
# [{"id": 1, "applicant_name": "Smith", "unit_address": "123 Main St"}, ...]
```
Mutations
All mutation functions use `session.flush()` instead of `session.commit()` -- the caller manages transaction boundaries. This allows composing multiple mutations into a single atomic transaction:

```python
import sqlmodel_object_helpers as soh

async with session.begin():
    await soh.add_object(session, billing)
    await soh.add_object(session, payment)
# commit happens automatically when the block exits
```
add_object / add_objects
```python
import sqlmodel_object_helpers as soh

# Single
user = await soh.add_object(session, User(name="Alice"))
# user.id is now populated (server-generated)

# Bulk
users = await soh.add_objects(session, [User(name="Alice"), User(name="Bob")])
```
update_object
```python
import sqlmodel_object_helpers as soh

user = await soh.get_object(session, User, pk={"id": 1})
updated = await soh.update_object(session, user, {"name": "Alice Updated", "is_active": False})
```
Field names are validated against the model before touching the database. Raises MutationError if a key does not exist on the model.
delete_object
```python
import sqlmodel_object_helpers as soh

# By instance
await soh.delete_object(session, User, instance=user)

# By PK
await soh.delete_object(session, User, pk={"id": 5})
```
update_objects / delete_objects
Bulk operations that issue a single SQL statement without loading objects:
```python
import sqlmodel_object_helpers as soh

# Bulk update: UPDATE users SET is_active=False WHERE role='guest'
count = await soh.update_objects(
    session, User,
    data={"is_active": False},
    filters={"role": {soh.Operator.EQ: "guest"}},
)
# count -> number of rows updated

# Bulk delete: DELETE FROM users WHERE is_active=False
count = await soh.delete_objects(
    session, User,
    filters={"is_active": {soh.Operator.EQ: False}},
)
# count -> number of rows deleted
```
Filters are required for safety -- empty filters are rejected. Dot-notation (relationship) filters are not supported in bulk operations.
check_for_related_records
Pre-deletion check that inspects all ONETOMANY relationships:
```python
import sqlmodel_object_helpers as soh

deps = await soh.check_for_related_records(session, Organization, pk={"id": 1})
if deps:
    print(deps)
    # ["Related record found in 'Unit (units_lkp)' (id=1)", ...]
```
Returns None if no related records exist, or a list of human-readable dependency descriptions.
Filtering
LogicalFilter (AND/OR/condition)
Recursive filter structure used by `get_object`. Exactly one of `AND`, `OR`, or `condition` must be set:

```python
import sqlmodel_object_helpers as soh

# Simple condition
f = soh.LogicalFilter(condition={"status_id": {soh.Operator.EQ: 10}})

# OR
f = soh.LogicalFilter(OR=[
    soh.LogicalFilter(condition={"status_id": {soh.Operator.EQ: 10}}),
    soh.LogicalFilter(condition={"is_blocked": {soh.Operator.EQ: True}}),
])

# Nested AND + OR
f = soh.LogicalFilter(AND=[
    soh.LogicalFilter(condition={"is_active": {soh.Operator.EQ: True}}),
    soh.LogicalFilter(OR=[
        soh.LogicalFilter(condition={"role": {soh.Operator.EQ: "admin"}}),
        soh.LogicalFilter(condition={"role": {soh.Operator.EQ: "manager"}}),
    ]),
])
```
Flat dict filters
Used by `get_objects`. Nested dicts are auto-flattened via `flatten_filters`:

```python
# Nested form (auto-flattened)
{"application": {"applicant": {"last_name": {soh.Operator.EQ: "test"}}}}

# Equivalent flat form (dot-notation)
{"application.applicant.last_name": {soh.Operator.EQ: "test"}}
```
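The flattening idea reduces to a small recursion. A sketch of how it could work (this is an illustration, not the library's implementation; real `Operator` members hash like their string values, so the string set below stands in for them):

```python
def flatten_filters(filters: dict, prefix: str = "") -> dict:
    """Sketch: collapse nested field dicts into dot-notation keys.

    A dict whose keys are operator names ("eq", "ne", ...) is a leaf;
    anything else is recursed into with the key appended to the path.
    """
    OPERATORS = {"eq", "ne", "gt", "lt", "ge", "le", "in_", "not_in",
                 "like", "ilike", "between", "is", "isnot", "match", "exists"}
    flat = {}
    for key, value in filters.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict) and not set(value) & OPERATORS:
            flat.update(flatten_filters(value, path))
        else:
            flat[path] = value
    return flat

nested = {"application": {"applicant": {"last_name": {"eq": "test"}}}}
print(flatten_filters(nested))
# {'application.applicant.last_name': {'eq': 'test'}}
```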
Operators
| Enum member | SQL | Example |
|---|---|---|
| `Operator.EQ` | `=` | `{soh.Operator.EQ: 10}` |
| `Operator.NE` | `!=` | `{soh.Operator.NE: 0}` |
| `Operator.GT` | `>` | `{soh.Operator.GT: 5}` |
| `Operator.LT` | `<` | `{soh.Operator.LT: 100}` |
| `Operator.GE` | `>=` | `{soh.Operator.GE: 1}` |
| `Operator.LE` | `<=` | `{soh.Operator.LE: 50}` |
| `Operator.IN` | `IN (...)` | `{soh.Operator.IN: [1, 2, 3]}` |
| `Operator.NOT_IN` | `NOT IN (...)` | `{soh.Operator.NOT_IN: [4, 5]}` |
| `Operator.LIKE` | `LIKE` | `{soh.Operator.LIKE: "%test%"}` |
| `Operator.ILIKE` | `ILIKE` | `{soh.Operator.ILIKE: "%test%"}` |
| `Operator.BETWEEN` | `BETWEEN` | `{soh.Operator.BETWEEN: [1, 10]}` |
| `Operator.IS` | `IS` | `{soh.Operator.IS: None}` |
| `Operator.IS_NOT` | `IS NOT` | `{soh.Operator.IS_NOT: None}` |
| `Operator.MATCH` | `MATCH` | `{soh.Operator.MATCH: "query"}` |
| `"exists"` | `IS NOT NULL` / `.any()` | `{"exists": True}` |
`Operator` is a `StrEnum` — each member equals its string value (`Operator.EQ == "eq"`), so string keys still work, but enum members provide type safety and autocompletion.

Multiple operators can be combined on a single field: `{"age": {soh.Operator.GE: 18, soh.Operator.LE: 65}}`.
Typed Filter Models
Pydantic models for type-safe filter schemas in API endpoints:
```python
from pydantic import BaseModel

import sqlmodel_object_helpers as soh


class UserFilter(BaseModel):
    age: soh.FilterInt | None = None         # eq, ne, gt, lt, ge, le, in_
    name: soh.FilterStr | None = None        # eq, ne, like, ilike
    is_active: soh.FilterBool | None = None  # eq, ne
    posts: soh.FilterExists | None = None    # exists: bool
```
Available: FilterInt, FilterStr, FilterDate, FilterDatetime, FilterNaiveDatetime, FilterTimedelta, FilterBool, FilterExists.
Range Filters
`FilterDatetimeRange` and `FilterNaiveDatetimeRange` parse a comma-separated date string into `gt`/`lt` operators:

```python
import sqlmodel_object_helpers as soh

# Full range: "FROM,TO" → gt + lt
f = soh.FilterDatetimeRange.model_validate("2026-04-01,2026-05-06")
f.model_dump(exclude_none=True)
# {"gt": datetime(2026, 4, 1, tzinfo=UTC), "lt": datetime(2026, 5, 6, tzinfo=UTC)}
# SQL: WHERE field > '2026-04-01' AND field < '2026-05-06'

# Open end: "FROM," → gt only
f = soh.FilterDatetimeRange.model_validate("2026-04-01,")
# SQL: WHERE field > '2026-04-01'

# Open start: ",TO" → lt only
f = soh.FilterDatetimeRange.model_validate(",2026-05-06")
# SQL: WHERE field < '2026-05-06'
```

Each date part accepts ISO (`2026-04-01`, `2026-04-01T00:00:00Z`) and display format (`01.04.2026 00:00`).

- `FilterDatetimeRange` — produces UTC-aware datetimes
- `FilterNaiveDatetimeRange` — produces naive (timezone-unaware) datetimes
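The parsing rule can be sketched in plain Python. This handles only the ISO form and is illustrative (the library also accepts the display format and does Pydantic validation):

```python
from datetime import datetime, timezone

def parse_range(raw: str) -> dict:
    """Sketch: split "FROM,TO" into gt/lt bounds; an empty side is omitted."""
    start, _, end = raw.partition(",")
    bounds = {}
    if start:
        bounds["gt"] = datetime.fromisoformat(start).replace(tzinfo=timezone.utc)
    if end:
        bounds["lt"] = datetime.fromisoformat(end).replace(tzinfo=timezone.utc)
    return bounds

print(parse_range("2026-04-01,2026-05-06"))  # both bounds -> gt + lt
print(parse_range("2026-04-01,"))            # open end   -> gt only
print(parse_range(",2026-05-06"))            # open start -> lt only
```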
Eager Loading
The library automatically selects the optimal loading strategy:
- `selectinload` for one-to-many (`uselist=True`) -- avoids cartesian products
- `joinedload` for many-to-one (`uselist=False`) -- single-query efficiency

Dot-notation chaining resolves each level independently:

```python
# Each segment gets the optimal strategy
load_paths=["application.applicant"]
# application -> joinedload (many-to-one)
# applicant -> joinedload (many-to-one)

load_paths=["comments"]
# comments -> selectinload (one-to-many)
```
Maximum chain depth is controlled by settings.max_load_depth (default: 10).
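The per-segment choice boils down to inspecting each relationship's `uselist` flag. A sketch with a dict standing in for real SQLAlchemy relationship inspection (names and structure are illustrative):

```python
def plan_load_chain(model_rels: dict[str, dict], path: str) -> list[tuple[str, str]]:
    """Sketch: walk a dot-notation path and pick a strategy per segment.

    `model_rels` maps relationship name -> {"uselist": bool, "rels": {...}},
    a stand-in for inspecting real SQLAlchemy relationships.
    """
    plan = []
    rels = model_rels
    for segment in path.split("."):
        rel = rels[segment]
        # Collections -> selectinload (second SELECT ... IN (...), no cartesian
        # product); scalar references -> joinedload (single LEFT JOIN).
        strategy = "selectinload" if rel["uselist"] else "joinedload"
        plan.append((segment, strategy))
        rels = rel.get("rels", {})
    return plan

# Attempt -> application (many-to-one) -> applicant (many-to-one)
rels = {"application": {"uselist": False,
                        "rels": {"applicant": {"uselist": False}}}}
print(plan_load_chain(rels, "application.applicant"))
# [('application', 'joinedload'), ('applicant', 'joinedload')]
```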
Pagination & Sorting
Pagination
```python
import sqlmodel_object_helpers as soh

result = await soh.get_objects(
    session, User,
    pagination=soh.Pagination(page=2, per_page=25),
)
result.data        # list[User]
result.pagination  # PaginationR(page=2, per_page=25, total=150)
```

`per_page` is validated against `settings.max_per_page` (default: 500).
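Page/per_page pagination conventionally maps onto SQL `OFFSET`/`LIMIT`. A sketch of the arithmetic (the function name is illustrative, and whether an over-limit `per_page` is clamped or rejected is up to the library; this sketch clamps):

```python
def to_offset_limit(page: int, per_page: int, max_per_page: int = 500) -> tuple[int, int]:
    """Sketch: clamp per_page to the cap, then convert a 1-based page to OFFSET/LIMIT."""
    per_page = min(per_page, max_per_page)
    return (page - 1) * per_page, per_page

print(to_offset_limit(2, 25))    # (25, 25) -> OFFSET 25 LIMIT 25
print(to_offset_limit(1, 9999))  # (0, 500) -> per_page capped at 500
```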
Sorting
```python
import sqlmodel_object_helpers as soh

order = soh.OrderBy(sorts=[
    soh.OrderAsc(asc="last_name"),
    soh.OrderDesc(desc="created_at"),
])
result = await soh.get_objects(session, User, order_by=order)
```
Dynamic Table Metadata
build_dynamic_meta builds TableMeta + list[ColumnMeta] from a SQLModel class at runtime, reading PostgreSQL pg_description (TTL-cached) with fallback to Column(comment=...) model defaults.
```python
import sqlmodel_object_helpers as soh

# Physical columns — label, type, lookup derived automatically
table_meta, columns = await soh.build_dynamic_meta(
    session,
    EmailMessage,
    name="email_messages",
    columns=[
        "id",              # physical column — everything derived
        "email_type_id",   # FK on *_lkp → lookup_dict auto-derived
        "updated_at",
        soh.ColumnMeta(    # virtual column — fully specified
            json_path="last_editor.user_name",
            label="Editor",
            type=soh.ColumnType.STRING,
        ),
    ],
    role_type="operator",  # per-role label/row_link overrides
)

# Returns (TableMeta, list[ColumnMeta]) ready for GetAllPagination
return soh.GetAllPagination(
    table=table_meta,
    columns=columns,
    data=items,
    pagination=pagination_r,
)
```
Type derivation (SA type → ColumnType)
Physical column types are mapped automatically. Unknown types fall back to `string`.

| SQLAlchemy type | ColumnType |
|---|---|
| `Boolean` | `boolean` |
| `DateTime` | `datetime` |
| `Date` | `date` |
| `Integer`, `BigInteger`, `SmallInteger` | `integer` |
| `Numeric`, `Float` | `float` |
| `String`, `Text`, `Enum`, `Interval`, `ARRAY`, `Uuid` | `string` |
| Any other type | `string` (fallback) |
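The table above amounts to a lookup with a default. A sketch keyed on SQLAlchemy type names (illustrative; the library presumably inspects type objects rather than strings):

```python
TYPE_MAP = {
    "Boolean": "boolean",
    "DateTime": "datetime",
    "Date": "date",
    "Integer": "integer", "BigInteger": "integer", "SmallInteger": "integer",
    "Numeric": "float", "Float": "float",
    "String": "string", "Text": "string", "Enum": "string",
    "Interval": "string", "ARRAY": "string", "Uuid": "string",
}

def derive_column_type(sa_type_name: str) -> str:
    # Any type not in the table falls back to "string"
    return TYPE_MAP.get(sa_type_name, "string")

print(derive_column_type("BigInteger"))  # integer
print(derive_column_type("JSONB"))       # string (fallback)
```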
Label precedence (per physical column)
1. `pg_description` — DBA edit via `COMMENT ON COLUMN` (supports per-role `||` overrides)
2. `Column(comment=...)` — Python-side default in the model
3. `ValueError` — fail-loud if both are missing
Per-role overrides via || format
DBAs can set role-specific labels and row links in PostgreSQL comments:
```sql
COMMENT ON COLUMN emails.email_type_id IS 'Email type||operator=Category||buh=Attribute';
COMMENT ON TABLE emails IS 'Emails||row_link=/email/$id||row_link.operator=/op/email/$id';
```
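The `||` format splits cleanly on two delimiters. A sketch of a parser for it (illustrative, not the library's implementation):

```python
def parse_comment(comment: str) -> tuple[str, dict[str, str]]:
    """Sketch: split 'Default||role=Label||key.role=value' into
    (default_label, {override_key: value})."""
    default, *parts = comment.split("||")
    overrides = {}
    for part in parts:
        key, _, value = part.partition("=")
        overrides[key] = value
    return default, overrides

label, overrides = parse_comment("Email type||operator=Category||buh=Attribute")
print(label)                  # Email type
print(overrides["operator"])  # Category
```

Keys like `row_link.operator` parse the same way, since only the first `=` is significant.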
TTL cache
pg_description queries are cached per (schema, table) with a configurable TTL (default 60s):
```python
soh.configure_meta_cache_ttl(120)            # change TTL to 120 seconds
soh.invalidate_meta_cache()                  # clear entire cache
soh.invalidate_meta_cache(schema="lead")     # clear all entries for a schema
soh.invalidate_meta_cache("lead", "emails")  # clear one entry
```
Standalone mode
```python
import sqlmodel_object_helpers.standalone as soh_sa

table_meta, columns = await soh_sa.build_dynamic_meta(
    EmailMessage,
    name="email_messages",
    columns=["id", "email_type_id"],
)
```
Backward compatibility and migration
`GetAllPagination` is fully backward compatible — `table` and `columns` default to `None`, so existing endpoints continue to work without changes:

```python
# Before (still works as-is)
return soh.GetAllPagination(data=items, pagination=pagination_r)

# After (meta added when ready)
return soh.GetAllPagination(
    table=table_meta,
    columns=columns,
    data=items,
    pagination=pagination_r,
)
```
Endpoints can be migrated one at a time. Three strategies per endpoint:
| Strategy | `table`/`columns` | Use case |
|---|---|---|
| No meta | Always `None` | Endpoint not yet migrated, frontend uses hardcoded table |
| Always meta | Sent on every request | Simple, no frontend caching logic needed |
| Meta on first page | Sent when `page=1`, `None` on pages 2+ | Saves traffic, frontend caches meta from first response |
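The "meta on first page" strategy is a one-line condition in the endpoint. A hypothetical sketch (`build_response` and its inputs are illustrative, returning a plain dict instead of `GetAllPagination`):

```python
def build_response(page: int, table_meta, columns, items, pagination_r) -> dict:
    """Sketch: attach table metadata only on page 1; pages 2+ send None and
    the frontend reuses the meta it cached from the first response."""
    first_page = page == 1
    return {
        "table": table_meta if first_page else None,
        "columns": columns if first_page else None,
        "data": items,
        "pagination": pagination_r,
    }

r1 = build_response(1, {"name": "emails"}, ["id"], [], {"page": 1})
r2 = build_response(2, {"name": "emails"}, ["id"], [], {"page": 2})
print(r1["table"], r2["table"])  # {'name': 'emails'} None
```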
API Reference
Session Management
- `soh.configure(factory)` -- Register `async_sessionmaker` for standalone mode
- `soh.auto_session()` -- Async context manager: creates session, commits on success, rolls back on error
- `soh.create_session_dependency(factory)` -- Create async generator for FastAPI `Depends`
Standalone Wrappers
- `import sqlmodel_object_helpers.standalone as soh_sa` -- All 12 query/mutation functions plus `build_dynamic_meta` without the `session` parameter
Settings
- `soh.settings` -- Module-level `QueryHelperSettings` instance (mutable at runtime)
- `soh.QueryHelperSettings` -- Pydantic model for security/performance limits
Query Functions
- `soh.get_object(session, model, ...)` -- Single object by PK or filters (supports `for_update` row locking)
- `soh.get_objects(session, model, ...)` -- Multiple objects with filtering, pagination, sorting, `time_filter`
- `soh.count_objects(session, model, ...)` -- Count matching records (`SELECT count(*)`)
- `soh.exists_object(session, model, ...)` -- Check existence (`SELECT EXISTS(...)`)
- `soh.get_projection(session, model, columns, ...)` -- Column projection across joins
Mutation Functions
- `soh.add_object(session, instance)` -- Add single, flush + refresh
- `soh.add_objects(session, instances)` -- Add multiple, flush + refresh each
- `soh.update_object(session, instance, data)` -- Update fields from dict
- `soh.update_objects(session, model, data, filters)` -- Bulk update via single `UPDATE ... WHERE`
- `soh.delete_object(session, model, *, instance, pk)` -- Delete by reference or PK
- `soh.delete_objects(session, model, filters)` -- Bulk delete via single `DELETE ... WHERE`
- `soh.check_for_related_records(session, model, pk)` -- Pre-deletion dependency check
Dynamic Meta
- `soh.build_dynamic_meta(session, model, *, name, columns, header, row_link, role_type)` -- Build `TableMeta` + `list[ColumnMeta]` from model + `pg_description`
- `soh.load_pg_comments(session, schema, table)` -- Read `(table_comment, {col: comment})` from `pg_description` (TTL-cached)
- `soh.configure_meta_cache_ttl(seconds)` -- Override default `pg_description` cache TTL (default: 60s)
- `soh.invalidate_meta_cache(schema, table)` -- Manually invalidate the `pg_description` cache (full, by-schema, or by-table)
- `soh.ColumnEntry` -- Type alias: `str | ColumnMeta` (element of `columns` list in `build_dynamic_meta`)
Filter Builders
- `soh.build_filter(model, filters, ...)` -- Build SQLAlchemy expression from LogicalFilter dict
- `soh.build_flat_filter(model, filters, ...)` -- Build from flat dot-notation dict (aliased joins)
- `soh.flatten_filters(filters)` -- Convert nested dicts to flat dot-notation
Operators
- `soh.Operator` -- StrEnum of operator names
- `soh.SUPPORTED_OPERATORS` -- Dict mapping operator names to SQLAlchemy lambdas
- `soh.SPECIAL_OPERATORS` -- Frozenset of operators with extended handling (`{"exists"}`)
Loaders
- `soh.build_load_chain(model, path, ...)` -- Build loader option from string/list path
- `soh.build_load_options(model, attrs, *, suspend_error=False)` -- Build single loader option chain
Exceptions
- `soh.QueryError` -- Base exception (has `status_code` attribute for HTTP mapping)
- `soh.ObjectNotFoundError` -- 404 Not Found
- `soh.InvalidFilterError` -- 400 Bad Request
- `soh.InvalidLoadPathError` -- 400 Bad Request
- `soh.DatabaseError` -- 500 Internal Server Error
- `soh.MutationError` -- 400 Bad Request
Filter Types
- `soh.FilterInt`, `soh.FilterStr`, `soh.FilterDate`, `soh.FilterDatetime`, `soh.FilterNaiveDatetime`, `soh.FilterTimedelta`, `soh.FilterBool`, `soh.FilterExists`
- `soh.FilterDatetimeRange`, `soh.FilterNaiveDatetimeRange` -- Range filters that parse `"FROM,TO"` strings into `gt`/`lt` operators
- `soh.OrderAsc`, `soh.OrderDesc`, `soh.OrderBy`
- `soh.LogicalFilter`
- `soh.TimeFilter` -- `created_after`, `created_before`, `updated_after`, `updated_before` with half-open interval `[after, before)`
Datetime Types
- `soh.UTCDatetime` -- `Annotated[datetime, AfterValidator]` that rejects naive datetimes and converts aware datetimes to UTC
Pagination Types
- `soh.Pagination` -- Request model (`page`, `per_page`)
- `soh.PaginationR` -- Response model (`page`, `per_page`, `total`)
- `soh.GetAllPagination[T]` -- Generic wrapper (`table: TableMeta | None`, `columns: list[ColumnMeta] | None`, `data: list[T]`, `pagination: PaginationR | None`)
Table Metadata Types
- `soh.TableMeta` -- Table-level metadata (`name`, `header`, `row_link`)
- `soh.ColumnMeta` -- Column metadata (`json_path`, `label`, `type`, `lookup_dict`, `lookup_path`, `bool_labels`)
- `soh.ColumnType` -- StrEnum of column data types (string, integer, float, boolean, date, datetime)
- `soh.BoolLabels` -- Display labels for boolean columns (`true_label`, `false_label`)
Lookup Types
- `soh.LookupMeta` -- Lookup metadata (`name` used as cache key, matches `ColumnMeta.lookup_dict`)
- `soh.LookupResponse[T]` -- Generic wrapper for lookup endpoints (`meta: LookupMeta`, `data: list[T]`)
Projection Types
- `soh.ColumnSpec` -- Type alias: `str | tuple[str, str]`
Development
```shell
pip install -e ".[dev]"
pytest tests/
```
Syncing with Remote
```shell
# Fetch commits and tags
git pull origin main --tags

# If local branch is behind remote
git reset --hard origin/main
```

Verify sync status:

```shell
git log --oneline origin/main -5
git diff origin/main
```
CI/CD
The pipeline consists of two stages:
1. auto_tag — on push to `main`, reads `__version__` from `__init__.py` and automatically creates a tag (if it doesn't exist)
2. mirror_to_github — on tag creation, mirrors the repository to GitHub (removing `.gitlab-ci.yml`)

Release flow

1. Update `__version__` in `src/sqlmodel_object_helpers/__init__.py`
2. Push to main
3. CI creates tag → mirrors to GitHub → publishes to PyPI
Versioning
This project follows Semantic Versioning (MAJOR.MINOR.PATCH):
- PATCH — bug fixes, documentation, metadata
- MINOR — new features (backwards compatible)
- MAJOR — breaking changes
Changelog
0.0.6

- `build_dynamic_meta` / `load_pg_comments` / `configure_meta_cache_ttl` / `invalidate_meta_cache` — dynamic UI metadata reader. Builds `TableMeta` and `list[ColumnMeta]` from a SQLModel class by reading PostgreSQL `pg_description` (TTL-cached, default 60s) with fallback to `Column(comment=...)` defaults from the model. Supports per-role label/row_link overrides via the extended `||` comment format. For physical columns the label precedence is `pg_description` > `Column(comment=...)` model default. For virtual columns (paths with `.` traversing relationships) and full overrides, pass a fully-specified `ColumnMeta` instance directly in the `columns` list (used as-is, no derivation). Lookup `lookup_dict`/`lookup_path` are derived from FKs on `*_lkp` tables by the `{schema}_{base}` convention. Type is derived from the SA column type. DBAs can edit labels via `COMMENT ON COLUMN/TABLE` SQL — the frontend reflects changes within the cache TTL without a redeploy. No sync block, no schema migrations introduced; the application's `db.py` is not touched.

0.0.5

- `FilterDatetimeRange` / `FilterNaiveDatetimeRange` — range filters that parse comma-separated date strings (`"2026-04-01,2026-05-06"`) into `gt`/`lt` operators for SQL filtering
- `ColumnMeta` / `TableMeta` / `ColumnType` / `BoolLabels` — dynamic table metadata: backend describes columns, types, labels, lookups, and row navigation so the frontend renders any table without hardcoding
- `GetAllPagination` — now includes optional `table` and `columns` fields for delivering table metadata alongside paginated data
- `LookupMeta` / `LookupResponse` — standard `{meta, data}` wrapper for lookup (`_lkp`) endpoints, enabling unified frontend caching of dictionaries with `meta.name` as cache key

0.0.4

- Initial public release
License
This project is licensed under the PolyForm Noncommercial License 1.0.0.
You may use this software for noncommercial purposes only.
See LICENSE for the full license text, or visit polyformproject.org.