PolyDB v2.2.0 - Enterprise Database Abstraction Layer
Production-ready, cloud-independent database abstraction layer with LINQ-style queries, connection pooling, retry logic, thread safety, field-level audit, caching, and overflow storage
Features
✅ LINQ-Style Queries - Core SQL clauses: WHERE, ORDER BY, SELECT, GROUP BY, DISTINCT, COUNT
✅ Multi-Cloud - Azure, AWS, GCP, Vercel, MongoDB, PostgreSQL
✅ Automatic Overflow - Large NoSQL records → Object Storage (transparent)
✅ Enterprise Audit - Cryptographic hash chain, field-level changes, strict ordering
✅ Cache Engine - In-memory with TTL, automatic invalidation
✅ Soft Delete - Optional logical deletion with audit trail
✅ Auto-Inject - Tenant ID, audit fields (created_at, updated_by, etc.)
✅ Thread-Safe - Connection pooling, distributed-safe hash chaining
✅ Retry Logic - Exponential backoff, configurable
✅ Type-Safe - Protocol-based adapters, full type hints
Quick Start
from polydb import DatabaseFactory, polydb_model, QueryBuilder, Operator, AuditContext

# 1. Define model
@polydb_model
class User:
    __polydb__ = {
        "storage": "sql",
        "table": "users",
        "cache": True,
        "cache_ttl": 600,
    }

# 2. Initialize
db = DatabaseFactory(
    enable_audit=True,
    enable_cache=True,
    soft_delete=True,
)

# 3. Set context (per-request)
AuditContext.set(
    actor_id="user_123",
    roles=["admin"],
    tenant_id="tenant_abc",
)

# 4. CRUD with auto-audit
user = db.create(User, {"name": "John", "email": "john@example.com"})
# Auto-injects: tenant_id, created_at, created_by, updated_at, updated_by

# 5. LINQ queries
query = (
    QueryBuilder()
    .where("role", Operator.EQ, "admin")
    .where("age", Operator.GTE, 18)
    .order_by("created_at", descending=True)
    .skip(10)
    .take(20)
    .select("id", "name", "email")
)
admins = db.query_linq(User, query)
LINQ Operations
Filters
builder.where("field", Operator.EQ, value) # ==
builder.where("field", Operator.NE, value) # !=
builder.where("field", Operator.GT, value) # >
builder.where("field", Operator.GTE, value) # >=
builder.where("field", Operator.LT, value) # <
builder.where("field", Operator.LTE, value) # <=
builder.where("field", Operator.IN, [1,2,3]) # IN
builder.where("field", Operator.CONTAINS, "text") # LIKE
builder.where("field", Operator.STARTS_WITH, "prefix")
builder.where("field", Operator.ENDS_WITH, "suffix")
Ordering & Pagination
builder.order_by("field", descending=True)
builder.skip(10)
builder.take(20)
Projection
builder.select("id", "name", "email")
Aggregation
builder.count()
builder.distinct_on()
builder.group_by("role", "department")
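Conceptually, a builder like this just accumulates clauses and applies them when the query executes. The in-memory sketch below illustrates that filter/order/skip/take pipeline; `MiniQuery`, `Op`, and `OPS` are invented names for illustration, not PolyDB's implementation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Callable, Optional

class Op(Enum):
    EQ = "=="
    GTE = ">="

# Map each operator to a predicate applied per row
OPS: "dict[Op, Callable[[Any, Any], bool]]" = {
    Op.EQ: lambda a, b: a == b,
    Op.GTE: lambda a, b: a >= b,
}

@dataclass
class MiniQuery:
    filters: list = field(default_factory=list)
    order_field: Optional[str] = None
    descending: bool = False
    _skip: int = 0
    _take: Optional[int] = None

    def where(self, f, op, v):
        self.filters.append((f, op, v))
        return self

    def order_by(self, f, descending=False):
        self.order_field, self.descending = f, descending
        return self

    def skip(self, n):
        self._skip = n
        return self

    def take(self, n):
        self._take = n
        return self

    def run(self, rows):
        # 1. filter, 2. sort, 3. paginate - same order a SQL engine would use
        out = [r for r in rows if all(OPS[op](r[f], v) for f, op, v in self.filters)]
        if self.order_field:
            out.sort(key=lambda r: r[self.order_field], reverse=self.descending)
        end = None if self._take is None else self._skip + self._take
        return out[self._skip:end]
```

Running `MiniQuery().where("role", Op.EQ, "admin").where("age", Op.GTE, 18)` against a list of dicts filters exactly like the `QueryBuilder` example above, just without a database behind it.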
NoSQL Overflow Storage
Records >1MB are automatically stored in object storage:
@polydb_model
class Product:
    __polydb__ = {
        "storage": "nosql",
        "collection": "products",
    }

# Large data automatically overflows
product = db.create(Product, {
    "data": {"huge": "payload" * 200_000},  # ~1.4 MB string, over the 1 MB threshold
})

# Transparent retrieval
retrieved = db.read_one(Product, {"id": product["id"]})
# The caller never knows it came from blob storage
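The overflow pattern itself is simple: if a record serializes above the threshold, write the payload to object storage and keep only a pointer in the database. A minimal sketch with an in-memory dict standing in for S3/Azure Blob/GCS; PolyDB's actual adapter interfaces may differ.

```python
import json
import uuid

OVERFLOW_THRESHOLD = 1 * 1024 * 1024  # 1 MB, matching the documented limit
blob_store: dict = {}  # stand-in for S3 / Azure Blob / GCS

def write_record(record: dict) -> dict:
    payload = json.dumps(record)
    if len(payload.encode()) <= OVERFLOW_THRESHOLD:
        return record                  # small enough: store inline
    blob_id = str(uuid.uuid4())
    blob_store[blob_id] = payload      # large: payload goes to object storage
    return {"_overflow": blob_id}      # the database keeps only the pointer

def read_record(stored: dict) -> dict:
    if "_overflow" in stored:
        # transparent rehydration: the caller gets the original record back
        return json.loads(blob_store[stored["_overflow"]])
    return stored
```

Reads go through `read_record`, so callers never see the pointer, which is what "transparent retrieval" above refers to.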
Enterprise Audit Trail
Automatic Tracking
- Who: actor_id, roles
- What: action, model, entity_id, changed_fields
- When: timestamp (microsecond precision)
- Where: tenant_id, ip_address, user_agent
- Context: trace_id, request_id
- Integrity: cryptographic hash chain
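The integrity guarantee comes from chaining: each audit entry's hash covers its payload plus the previous entry's hash, so tampering with any entry breaks every later link. A generic illustration of the idea (not PolyDB's exact scheme):

```python
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Hash covers the previous hash plus a canonical serialization of the entry
    material = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(material.encode()).hexdigest()

def append(chain: list, entry: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64  # genesis anchor
    chain.append({"entry": entry, "hash": entry_hash(entry, prev)})

def verify(chain: list) -> bool:
    # Recompute every link; any edited entry invalidates its stored hash
    prev = "0" * 64
    for link in chain:
        if link["hash"] != entry_hash(link["entry"], prev):
            return False
        prev = link["hash"]
    return True
```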
Field-Level Changes
db.update(User, user_id, {"email": "new@example.com"})
# Audit log shows: changed_fields = ["email", "updated_at", "updated_by"]
Verify Chain
from polydb.audit.storage import AuditStorage
audit = AuditStorage()
is_valid = audit.verify_chain(tenant_id="tenant_abc")
Cache Engine
# Auto-cache from model metadata
@polydb_model
class User:
    __polydb__ = {
        "storage": "sql",
        "table": "users",
        "cache": True,
        "cache_ttl": 600,  # 10 minutes
    }

# Cached read
users = db.read(User, {"role": "admin"})

# Bypass cache
users = db.read(User, {"role": "admin"}, no_cache=True)

# Manual invalidation
from polydb.cache import RedisCacheEngine

cache = RedisCacheEngine()
cache.invalidate("User")
cache.clear()
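TTL caching of reads can be sketched with a timestamped dict; production engines add locking, size limits, and shared storage. `TTLCache` here is a hypothetical helper for illustration, not PolyDB's `RedisCacheEngine`.

```python
import time

class TTLCache:
    def __init__(self):
        self._store: dict = {}  # key -> (expires_at, value)

    def set(self, key, value, ttl: float) -> None:
        self._store[key] = (time.monotonic() + ttl, value)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        expires_at, value = item
        if time.monotonic() >= expires_at:  # expired: evict lazily on read
            del self._store[key]
            return None
        return value

    def invalidate(self, prefix: str) -> None:
        # Drop every key for a model, mirroring invalidate("User") above
        for k in list(self._store):
            if k.startswith(prefix):
                del self._store[k]
```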
Soft Delete
db = DatabaseFactory(soft_delete=True)
# Soft delete (sets deleted_at, deleted_by)
db.delete(User, user_id)
# Hard delete
db.delete(User, user_id, hard=True)
# Include deleted in queries
all_users = db.read(User, {}, include_deleted=True)
Pagination
page1, next_token = db.read_page(User, {"role": "admin"}, page_size=50)
page2, token2 = db.read_page(User, {"role": "admin"}, page_size=50, continuation_token=next_token)
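The continuation-token pattern generalizes to a simple loop: keep passing back the last token until none is returned. A self-contained illustration where the fake `read_page` uses a list offset as its token; PolyDB's real tokens are opaque, but the loop shape is the same.

```python
def read_page(items, page_size, continuation_token=None):
    # Stand-in for db.read_page: here the token is just the next offset
    start = continuation_token or 0
    page = items[start:start + page_size]
    next_token = start + page_size if start + page_size < len(items) else None
    return page, next_token

def read_all(items, page_size):
    # Drain every page by feeding each token back into the next call
    results, token = [], None
    while True:
        page, token = read_page(items, page_size, token)
        results.extend(page)
        if token is None:
            return results
```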
Multi-Tenant
# Auto-inject tenant_id from context
AuditContext.set(tenant_id="tenant_123", actor_id="user_456", roles=["admin"])
user = db.create(User, {"name": "John"})
# Result: {"name": "John", "tenant_id": "tenant_123", "created_by": "user_456", ...}
# Filter by tenant (automatic)
users = db.read(User, {}) # Only returns tenant_123 records
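Per-request context like this is typically carried in `contextvars`, so concurrent requests cannot leak each other's tenant. A sketch of the auto-inject idea with invented helper names (`set_context`, `inject`); PolyDB's `AuditContext` internals may differ.

```python
import contextvars

# One ContextVar per process; each request/task sees its own value
_ctx: contextvars.ContextVar = contextvars.ContextVar("audit_ctx", default={})

def set_context(**fields) -> None:
    _ctx.set(fields)

def inject(record: dict) -> dict:
    # Stamp tenant and actor from the ambient context onto every write
    ctx = _ctx.get()
    return {**record, "tenant_id": ctx.get("tenant_id"), "created_by": ctx.get("actor_id")}
```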
Environment Variables
# Provider selection (optional, auto-detected)
CLOUD_PROVIDER=aws|azure|gcp|vercel|mongodb|postgresql
# PostgreSQL
POSTGRES_CONNECTION_STRING=postgresql://user:pass@host:5432/db
POSTGRES_MIN_CONNECTIONS=2
POSTGRES_MAX_CONNECTIONS=20
# AWS
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
DYNAMODB_TABLE_NAME=...
S3_BUCKET_NAME=...
# Azure
AZURE_STORAGE_CONNECTION_STRING=...
AZURE_TABLE_NAME=...
AZURE_CONTAINER_NAME=...
# GCP
GOOGLE_CLOUD_PROJECT=...
FIRESTORE_COLLECTION=...
GCS_BUCKET_NAME=...
# MongoDB
MONGODB_URI=mongodb://...
MONGODB_DATABASE=...
MONGODB_COLLECTION=...
# Vercel
KV_URL=...
KV_REST_API_TOKEN=...
BLOB_READ_WRITE_TOKEN=...
Model Metadata
@polydb_model
class Entity:
    __polydb__ = {
        # Required
        "storage": "sql",  # or "nosql"

        # SQL
        "table": "table_name",

        # NoSQL
        "collection": "collection_name",
        "pk_field": "partition_key_field",
        "rk_field": "row_key_field",

        # Cache
        "cache": True,
        "cache_ttl": 300,  # seconds

        # Optional
        "provider": "aws",  # Override auto-detection
    }
Thread Safety
- Connection pooling with locks
- Distributed-safe audit hash chaining
- Cache with thread-safe operations
- Retry logic with exponential backoff
Performance
- SQL: Connection pooling (min=2, max=20)
- NoSQL: Client reuse, overflow storage
- Cache: In-memory with automatic invalidation
- Retry: Configurable backoff (0.5s-6s)
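Exponential backoff in that range follows `min(cap, base * 2**attempt)`; the 0.5s-6s figures above match `base=0.5`, `cap=6`. A retry sketch under those assumptions (the library's actual parameter names may differ):

```python
import random
import time

def retry(fn, attempts=4, base=0.5, cap=6.0):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            delay = min(cap, base * (2 ** attempt))     # 0.5, 1, 2, 4... capped at 6
            time.sleep(delay + random.uniform(0, 0.1))  # small jitter avoids thundering herd
```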
Production Checklist
✅ Set CLOUD_PROVIDER explicitly
✅ Configure connection pool sizes
✅ Enable audit (enable_audit=True)
✅ Set cache TTL per model
✅ Use soft delete for compliance
✅ Set audit context per request
✅ Monitor audit chain integrity
✅ Configure retry attempts
License
MIT
File details
Details for the file altcodepro_polydb_python-2.2.0.tar.gz.
File metadata
- Download URL: altcodepro_polydb_python-2.2.0.tar.gz
- Size: 56.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 845c0ef77b2ff01108da2f700c5a7c941d8845e3718c7f6915db0fb6d6e7b061 |
| MD5 | e96df8223a3a43d497a546e292a6db29 |
| BLAKE2b-256 | 542c5c0e06956ae7980c57359b6d7fe1a6fa3cf34403a28b5b5798cd3224ad84 |
File details
Details for the file altcodepro_polydb_python-2.2.0-py3-none-any.whl.
File metadata
- Download URL: altcodepro_polydb_python-2.2.0-py3-none-any.whl
- Size: 67.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0c22428ac694b8f5c5eb4d6dc9e12dee7e6ee6fbd97c43607bb655690e259192 |
| MD5 | a0530c08bc73d727a61dc1d24ba0b765 |
| BLAKE2b-256 | 04e77b6dab16362ac8daf3c21fcde4d2a935cb89d92fe42bafd4cf8b0a647083 |