
Production-ready multi-cloud database abstraction layer with connection pooling, retry logic, and thread safety

Project description

PolyDB v2.2.2 - Enterprise Database Abstraction Layer

Production-ready, cloud-independent database abstraction with LINQ-style querying, field-level auditing, caching, and overflow storage

Features

LINQ-Style Queries - Common SQL operations: WHERE, ORDER BY, SELECT, GROUP BY, DISTINCT, COUNT
Multi-Cloud - Azure, AWS, GCP, Vercel, MongoDB, PostgreSQL
Automatic Overflow - Large NoSQL records → Object Storage (transparent)
Enterprise Audit - Cryptographic hash chain, field-level changes, strict ordering
Cache Engine - In-memory with TTL, automatic invalidation
Soft Delete - Optional logical deletion with audit trail
Auto-Inject - Tenant ID, audit fields (created_at, updated_by, etc.)
Thread-Safe - Connection pooling, distributed-safe hash chaining
Retry Logic - Exponential backoff, configurable
Type-Safe - Protocol-based adapters, full type hints

Quick Start

from polydb import DatabaseFactory, polydb_model, QueryBuilder, Operator, AuditContext

# 1. Define model
@polydb_model
class User:
    __polydb__ = {
        "storage": "sql",
        "table": "users",
        "cache": True,
        "cache_ttl": 600,
    }

# 2. Initialize
db = DatabaseFactory(
    enable_audit=True,
    enable_cache=True,
    soft_delete=True,
)

# 3. Set context (per-request)
AuditContext.set(
    actor_id="user_123",
    roles=["admin"],
    tenant_id="tenant_abc",
)

# 4. CRUD with auto-audit
user = db.create(User, {"name": "John", "email": "john@example.com"})
# Auto-injects: tenant_id, created_at, created_by, updated_at, updated_by

# 5. LINQ queries
query = (
    QueryBuilder()
    .where("role", Operator.EQ, "admin")
    .where("age", Operator.GTE, 18)
    .order_by("created_at", descending=True)
    .skip(10)
    .take(20)
    .select("id", "name", "email")
)

admins = db.query_linq(User, query)

LINQ Operations

Filters

builder.where("field", Operator.EQ, value)      # ==
builder.where("field", Operator.NE, value)      # !=
builder.where("field", Operator.GT, value)      # >
builder.where("field", Operator.GTE, value)     # >=
builder.where("field", Operator.LT, value)      # <
builder.where("field", Operator.LTE, value)     # <=
builder.where("field", Operator.IN, [1,2,3])    # IN
builder.where("field", Operator.CONTAINS, "text") # LIKE
builder.where("field", Operator.STARTS_WITH, "prefix")
builder.where("field", Operator.ENDS_WITH, "suffix")

Ordering & Pagination

builder.order_by("field", descending=True)
builder.skip(10)
builder.take(20)

Projection

builder.select("id", "name", "email")

Aggregation

builder.count()
builder.distinct_on()
builder.group_by("role", "department")
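
For example, filters and aggregation compose on one builder. A sketch counting admins per department follows; the return shape of aggregate results isn't documented above, so treat the handling as an assumption.

stats_query = (
    QueryBuilder()
    .where("role", Operator.EQ, "admin")
    .group_by("department")
    .count()
)
department_counts = db.query_linq(User, stats_query)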

NoSQL Overflow Storage

Records larger than 1 MB are automatically stored in object storage:

@polydb_model
class Product:
    __polydb__ = {
        "storage": "nosql",
        "collection": "products",
    }

# Large data automatically overflows
product = db.create(Product, {
    "data": {"payload": "x" * 2_000_000},  # ~2 MB, over the 1 MB threshold
})

# Transparent retrieval
retrieved = db.read_one(Product, {"id": product["id"]})
# User never knows it came from blob storage
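
Conceptually, the overflow decision is a size check at write time. The sketch below is illustrative only, not PolyDB's internals; the serialization and exact cutoff are assumptions beyond the 1 MB figure above.

import json

OVERFLOW_THRESHOLD = 1_000_000  # ~1 MB, per the limit above

def needs_overflow(record: dict) -> bool:
    # Records over the threshold go to object storage; a small
    # pointer record stays in the NoSQL table.
    return len(json.dumps(record).encode("utf-8")) > OVERFLOW_THRESHOLD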

Enterprise Audit Trail

Automatic Tracking

  • Who: actor_id, roles
  • What: action, model, entity_id, changed_fields
  • When: timestamp (microsecond precision)
  • Where: tenant_id, ip_address, user_agent
  • Context: trace_id, request_id
  • Integrity: cryptographic hash chain
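
The hash chain follows the standard pattern: each entry's hash covers the previous entry's hash plus its own payload, so tampering with any record breaks every later link. A minimal illustration (not PolyDB's internal record format):

import hashlib
import json

def chain_hash(prev_hash: str, entry: dict) -> str:
    # Canonical serialization keeps the hash stable across key order.
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()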

Field-Level Changes

db.update(User, user_id, {"email": "new@example.com"})
# Audit log shows: changed_fields = ["email", "updated_at", "updated_by"]

Verify Chain

from polydb.audit.storage import AuditStorage

audit = AuditStorage()
is_valid = audit.verify_chain(tenant_id="tenant_abc")

Cache Engine

# Auto-cache from model metadata
@polydb_model
class User:
    __polydb__ = {
        "storage": "sql",
        "table": "users",
        "cache": True,
        "cache_ttl": 600,  # 10 minutes
    }

# Cached read
users = db.read(User, {"role": "admin"})

# Bypass cache
users = db.read(User, {"role": "admin"}, no_cache=True)

# Manual invalidation
from polydb.cache import RedisCacheEngine
cache = RedisCacheEngine()
cache.invalidate("User")
cache.clear()
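
For intuition, a TTL cache stores an expiry next to each value and drops entries on read once they lapse. A minimal single-threaded sketch (PolyDB's engine adds thread safety and per-model invalidation):

import time

class TTLCache:
    def __init__(self):
        self._store = {}

    def get(self, key):
        hit = self._store.get(key)
        if hit and hit[1] > time.time():
            return hit[0]
        self._store.pop(key, None)  # expired or missing
        return None

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.time() + ttl_seconds)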

Soft Delete

db = DatabaseFactory(soft_delete=True)

# Soft delete (sets deleted_at, deleted_by)
db.delete(User, user_id)

# Hard delete
db.delete(User, user_id, hard=True)

# Include deleted in queries
all_users = db.read(User, {}, include_deleted=True)

Pagination

page1, next_token = db.read_page(User, {"role": "admin"}, page_size=50)
page2, token2 = db.read_page(User, {"role": "admin"}, page_size=50, continuation_token=next_token)
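
To walk all pages, loop until the token runs out. A sketch, assuming the first call accepts continuation_token=None and an exhausted result returns a falsy token:

token = None
while True:
    rows, token = db.read_page(
        User, {"role": "admin"}, page_size=50, continuation_token=token
    )
    for row in rows:
        ...  # process each record
    if not token:
        break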

Multi-Tenant

# Auto-inject tenant_id from context
AuditContext.set(tenant_id="tenant_123", actor_id="user_456", roles=["admin"])

user = db.create(User, {"name": "John"})
# Result: {"name": "John", "tenant_id": "tenant_123", "created_by": "user_456", ...}

# Filter by tenant (automatic)
users = db.read(User, {})  # Only returns tenant_123 records

Environment Variables

# Provider selection (optional, auto-detected)
CLOUD_PROVIDER=aws|azure|gcp|vercel|mongodb|postgresql

# PostgreSQL
POSTGRES_CONNECTION_STRING=postgresql://user:pass@host:5432/db
POSTGRES_MIN_CONNECTIONS=2
POSTGRES_MAX_CONNECTIONS=20

# AWS
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
DYNAMODB_TABLE_NAME=...
S3_BUCKET_NAME=...

# Azure
AZURE_STORAGE_CONNECTION_STRING=...
AZURE_TABLE_NAME=...
AZURE_CONTAINER_NAME=...

# GCP
GOOGLE_CLOUD_PROJECT=...
FIRESTORE_COLLECTION=...
GCS_BUCKET_NAME=...

# MongoDB
MONGODB_URI=mongodb://...
MONGODB_DATABASE=...
MONGODB_COLLECTION=...

# Vercel
KV_URL=...
KV_REST_API_TOKEN=...
BLOB_READ_WRITE_TOKEN=...
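
The same settings can be applied from code before creating the factory. A sketch, assuming the factory reads these variables at initialization; the connection string is a placeholder:

import os

os.environ["CLOUD_PROVIDER"] = "postgresql"
os.environ["POSTGRES_CONNECTION_STRING"] = "postgresql://user:pass@localhost:5432/app"
os.environ["POSTGRES_MAX_CONNECTIONS"] = "20"

db = DatabaseFactory(enable_audit=True, enable_cache=True)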

Model Metadata

@polydb_model
class Entity:
    __polydb__ = {
        # Required
        "storage": "sql" | "nosql",
        
        # SQL
        "table": "table_name",
        
        # NoSQL
        "collection": "collection_name",
        "pk_field": "partition_key_field",
        "rk_field": "row_key_field",
        
        # Cache
        "cache": True,
        "cache_ttl": 300,  # seconds
        
        # Optional
        "provider": "aws",  # Override auto-detection
    }
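
Putting the NoSQL keys together, a concrete model might look like this (field and provider choices are illustrative):

@polydb_model
class Order:
    __polydb__ = {
        "storage": "nosql",
        "collection": "orders",
        "pk_field": "tenant_id",  # partition key
        "rk_field": "order_id",   # row key
        "cache": True,
        "cache_ttl": 300,
        "provider": "aws",        # optional override
    }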

Thread Safety

  • Connection pooling with locks
  • Distributed-safe audit hash chaining
  • Cache with thread-safe operations
  • Retry logic with exponential backoff
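
In practice, one DatabaseFactory can be shared across worker threads. A sketch, assuming AuditContext is thread-local (which the per-request guidance above suggests) and reusing the names from Quick Start:

from concurrent.futures import ThreadPoolExecutor

def fetch_admins(worker_id: int):
    # Each thread sets its own audit context before touching the database.
    AuditContext.set(
        actor_id=f"worker_{worker_id}",
        roles=["service"],
        tenant_id="tenant_abc",
    )
    return db.read(User, {"role": "admin"})

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fetch_admins, range(8)))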

Performance

  • SQL: Connection pooling (min=2, max=20)
  • NoSQL: Client reuse, overflow storage
  • Cache: In-memory with automatic invalidation
  • Retry: Configurable backoff (0.5s-6s)
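
The retry pattern itself is plain exponential backoff with jitter. A generic sketch matching the documented 0.5 s to 6 s window (not PolyDB's internal implementation):

import random
import time

def with_retry(fn, attempts=5, base=0.5, cap=6.0):
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            # Exponential backoff capped at `cap`, with jitter to
            # spread out concurrent retries.
            time.sleep(min(cap, base * 2 ** i) * random.uniform(0.5, 1.0))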

Production Checklist

✅ Set CLOUD_PROVIDER explicitly
✅ Configure connection pool sizes
✅ Enable audit (enable_audit=True)
✅ Set cache TTL per model
✅ Use soft delete for compliance
✅ Set audit context per request
✅ Monitor audit chain integrity (see the sketch after this list)
✅ Configure retry attempts
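
For the chain-integrity item, a periodic check can be as simple as the sketch below; alert_ops is a hypothetical hook, and a real deployment would use a proper scheduler instead of a loop.

import time

from polydb.audit.storage import AuditStorage

audit = AuditStorage()
while True:
    if not audit.verify_chain(tenant_id="tenant_abc"):
        alert_ops("audit chain broken for tenant_abc")  # hypothetical alert hook
    time.sleep(3600)  # hourly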

License

MIT
