
# Velocity-Python

A rapid application development library for Python that eliminates boilerplate between your code and your database. Write business logic, not SQL plumbing.

```python
@engine.transaction
def create_order(tx, customer_email, items):
    customer = tx.table("customers").find({"email": customer_email})
    order = tx.table("orders").insert({
        "customer_id": customer["sys_id"],
        "status": "pending",
        "total": sum(i["price"] for i in items),
    })
    tx.table("order_items").insert_many([
        {"order_id": order["sys_id"], "product": i["name"], "price": i["price"]}
        for i in items
    ])
    return order
```

No connection management, no cursor juggling, no commit/rollback boilerplate. Velocity handles it all.

Python 3.9+ · MIT License


## Why Velocity?

Most Python database libraries fall into two camps:

  1. Heavy ORMs (SQLAlchemy, Django ORM) — powerful but complex. You write Python classes that map to tables, manage sessions, deal with migration frameworks, and learn a large API surface before writing your first query.

  2. Raw drivers (psycopg, sqlite3) — full control, but you're writing SQL strings, managing connections, handling cursors, serializing parameters, and building your own transaction/error-handling patterns from scratch.

Velocity occupies the middle ground: a thin, opinionated layer that gives you the convenience of an ORM with the transparency of raw SQL. Tables are just names. Rows are just dicts. Transactions are just context managers. You don't define models — Velocity discovers your schema at runtime and adapts to it.

## Design Principles

| Principle | What It Means |
|---|---|
| Convention over configuration | Sensible defaults everywhere. Override only what you need. |
| Dicts in, dicts out | No custom model classes to learn. Rows are dictionaries. |
| Transaction-scoped | Every operation runs inside an explicit transaction. No surprise autocommit. |
| Auto-schema | Tables and columns are created on the fly in development. Locked down in production. |
| Driver-agnostic | PostgreSQL (primary), MySQL, SQLite, SQL Server: same API surface. |
| Lambda-native | Connection pooling, warm-start reuse, and SQS batch handling built in. |
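The "dicts in, dicts out" principle can be illustrated with plain stdlib `sqlite3` — this is not Velocity code, just a minimal sketch of the idea; the table, column names, and the `insert` helper are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows come back as mapping-like objects
conn.execute("CREATE TABLE users (sys_id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def insert(table, data):
    """Take a plain dict in, return the stored row as a plain dict."""
    cols = ", ".join(data)
    marks = ", ".join("?" for _ in data)
    cur = conn.execute(
        f"INSERT INTO {table} ({cols}) VALUES ({marks})", tuple(data.values())
    )
    row = conn.execute(
        f"SELECT * FROM {table} WHERE sys_id = ?", (cur.lastrowid,)
    ).fetchone()
    return dict(row)

user = insert("users", {"name": "Alice", "email": "alice@example.com"})
print(user)  # {'sys_id': 1, 'name': 'Alice', 'email': 'alice@example.com'}
```

No model class is defined anywhere: the caller only ever sees dictionaries.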

## Installation

```bash
# Core (no database driver — useful for testing or SQLite)
pip install velocity-python

# PostgreSQL (recommended)
pip install velocity-python[postgres]

# With AWS Lambda support
pip install velocity-python[postgres,aws]

# Everything
pip install velocity-python[all]
```

### Available Extras

| Extra | Packages | Use Case |
|---|---|---|
| `postgres` | `psycopg[binary]>=3.2.0` | PostgreSQL connections |
| `aws` | `boto3`, `requests` | Lambda handlers, SQS, Amplify |
| `excel` | `openpyxl` | Excel export |
| `templates` | `jinja2` | Template rendering |
| `http` | `requests` | HTTP utilities |
| `payment` | `stripe`, `braintree` | Payment processing |
| `mysql` | `mysql-connector-python` | MySQL connections |
| `sqlserver` | `python-tds` | SQL Server connections |
| `all` | All of the above | Full install |

Requires Python 3.9+ and uses psycopg v3 (not psycopg2) for PostgreSQL.


## Quick Start

### 1. Connect

```python
from velocity.db.servers.postgres import initialize

# From environment variables (DBHost, DBDatabase, DBUser, DBPassword)
engine = initialize()

# Or explicit config
engine = initialize(config={
    "host": "localhost",
    "dbname": "myapp",
    "user": "postgres",
    "password": "secret",
})
```

### 2. Use Transactions

```python
# As a decorator (recommended for Lambda handlers)
@engine.transaction
def get_active_users(tx):
    return tx.table("users").select(where={"active": True}).all()

# As a context manager
with engine.transaction() as tx:
    tx.table("users").insert({"name": "Alice", "email": "alice@example.com"})
```
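The dual decorator/context-manager usage can be mimicked with stdlib pieces. This toy `Engine` is sqlite3-backed and is not Velocity's implementation — it only sketches the shape of the API, where the decorator form injects `tx` as the first argument:

```python
import sqlite3
import functools

class Engine:
    """Toy engine: `transaction` works as @decorator and as context manager."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)

    def transaction(self, func=None):
        if func is None:               # `with engine.transaction() as tx:`
            return self.conn           # sqlite3 connections are context managers
        @functools.wraps(func)         # `@engine.transaction` injects tx
        def wrapper(*args, **kwargs):
            with self.conn as tx:      # commit on success, rollback on error
                return func(tx, *args, **kwargs)
        return wrapper

engine = Engine()
engine.conn.execute("CREATE TABLE users (name TEXT)")

@engine.transaction
def add_user(tx, name):
    tx.execute("INSERT INTO users VALUES (?)", (name,))

add_user("Alice")  # caller never passes tx; the decorator supplies it
count = engine.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 1
```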

### 3. CRUD Operations

```python
@engine.transaction
def demo(tx):
    users = tx.table("users")

    # Insert
    row = users.insert({"name": "Bob", "email": "bob@example.com"})

    # Read
    user = users.row(row["sys_id"])                    # by primary key
    user = users.find({"email": "bob@example.com"})    # by lookup

    # Update
    user["name"] = "Robert"                            # immediate write-through

    # Delete
    users.delete({"sys_id": row["sys_id"]})
```
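The "immediate write-through" update (`user["name"] = "Robert"`) can be sketched with a dict subclass whose item assignment issues an `UPDATE` right away. This is illustrative stdlib code, not Velocity's `Row` class:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (sys_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Bob')")

class Row(dict):
    """Dict-like row: item assignment writes through to the database."""

    def __init__(self, table, pk, data):
        super().__init__(data)
        self._table, self._pk = table, pk

    def __setitem__(self, key, value):
        conn.execute(
            f"UPDATE {self._table} SET {key} = ? WHERE sys_id = ?",
            (value, self._pk),
        )
        super().__setitem__(key, value)

user = Row("users", 1, {"sys_id": 1, "name": "Bob"})
user["name"] = "Robert"  # UPDATE runs immediately, no explicit save() call
stored = conn.execute("SELECT name FROM users WHERE sys_id = 1").fetchone()[0]
print(stored)  # Robert
```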

### 4. Bulk Operations

```python
@engine.transaction
def import_customers(tx, records):
    tx.table("customers").insert_many(records)              # multi-row INSERT
    tx.table("customers").upsert_many(records, pk="email")  # INSERT ... ON CONFLICT UPDATE
```
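An upsert like the one above maps to the database's native `INSERT ... ON CONFLICT` statement. The equivalent raw SQL, shown here in SQLite flavor with an illustrative schema (not Velocity's generated SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT PRIMARY KEY, name TEXT)")

records = [
    ("a@example.com", "Alice"),
    ("b@example.com", "Bob"),
    ("a@example.com", "Alicia"),  # key exists, so this row updates the first
]

# Insert new rows; update rows whose conflict key already exists
conn.executemany(
    "INSERT INTO customers (email, name) VALUES (?, ?) "
    "ON CONFLICT(email) DO UPDATE SET name = excluded.name",
    records,
)
rows = conn.execute("SELECT email, name FROM customers ORDER BY email").fetchall()
print(rows)  # [('a@example.com', 'Alicia'), ('b@example.com', 'Bob')]
```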

## Documentation

Detailed guides with real-world examples:

| Guide | Description |
|---|---|
| Database ORM | Connections, transactions, tables, rows, results, queries, schema management |
| Performance & Optimization | Connection pooling, batch operations, query caching, prepared statements, N+1 prevention, observability |
| Async Support | AsyncTransaction, AsyncTable, AsyncResult, parallel queries with gather() |
| Configuration Reference | All environment variables, engine options, and connection settings |
| AWS Lambda Handlers | LambdaHandler, SqsHandler, auth modes, per-record transactions |
| Payment Processing | Stripe and Braintree adapters, payment lifecycle |
| Utilities | Excel export, data conversion, formatting, timers, email parsing |

### Additional Docs

| Doc | Description |
|---|---|
| Testing Guide | Running tests, markers, coverage |
| Security | Pre-commit hooks, credential scanning |
| AWS Auth Modes | Cognito auth, public endpoints, webhooks |

## Architecture

```text
Engine (singleton — survives Lambda warm starts)
├── ConnectionPool (thread-safe, configurable min/max)
└── Transaction (one per request, borrows from pool)
     ├── Table (CRUD, batch ops, schema management)
     │    ├── Row (dict-like, lazy-cache, write-through, batch_update)
     │    └── Result (streaming cursor iteration, transforms)
     ├── View (create, grant, ensure)
     └── Sequence (create, next, current, configure)
```

Transactions auto-commit on success, auto-rollback on exception. Connections are returned to the pool (or discarded on error). The Engine persists across Lambda invocations, so the pool stays warm.
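The commit/rollback behavior described above is the classic transaction-scope pattern; a minimal stdlib sketch of those semantics (not Velocity's code):

```python
import sqlite3
from contextlib import contextmanager

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (status TEXT)")

@contextmanager
def transaction():
    """Commit on success; roll back and re-raise on any exception."""
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise

with transaction() as tx:
    tx.execute("INSERT INTO orders VALUES ('pending')")  # committed

try:
    with transaction() as tx:
        tx.execute("INSERT INTO orders VALUES ('oops')")
        raise RuntimeError("boom")                       # triggers rollback
except RuntimeError:
    pass

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 1 — only the first insert survived
```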


## Multi-Database Support

| Database | Driver | Status |
|---|---|---|
| PostgreSQL | `psycopg[binary]>=3.2.0` | Primary, fully tested |
| MySQL | `mysql-connector-python` | Supported |
| SQLite | `sqlite3` (stdlib) | Supported |
| SQL Server | `python-tds` | Supported |

```python
# PostgreSQL
from velocity.db.servers.postgres import initialize
engine = initialize()

# MySQL
from velocity.db.servers.mysql import initialize
engine = initialize()

# SQLite
from velocity.db.servers.sqlite import initialize
engine = initialize(config={"database": "myapp.db"})

# SQL Server
from velocity.db.servers.mssql import initialize
engine = initialize()
```

## Project Structure

```text
velocity-python/
├── src/velocity/
│   ├── db/
│   │   ├── core/
│   │   │   ├── engine.py         # Engine, ConnectionPool
│   │   │   ├── transaction.py    # Transaction, query timing, caching
│   │   │   ├── table.py          # Table CRUD, batch ops, schema
│   │   │   ├── row.py            # Row (dict-like ORM object)
│   │   │   ├── result.py         # Result (cursor wrapper, transforms)
│   │   │   ├── async_support.py  # Async versions of core classes
│   │   │   ├── view.py           # View management
│   │   │   ├── sequence.py       # Sequence management
│   │   │   └── decorators.py     # @create_missing, @return_default, etc.
│   │   └── servers/
│   │       ├── postgres/         # PostgreSQL dialect + initializer
│   │       ├── mysql/            # MySQL dialect
│   │       ├── sqlite/           # SQLite dialect
│   │       └── mssql/            # SQL Server dialect
│   ├── aws/
│   │   └── handlers/
│   │       ├── lambda_handler.py # HTTP Lambda handler
│   │       └── sqs_handler.py    # SQS batch handler
│   ├── payment/
│   │   ├── base_adapter.py       # Abstract payment interface
│   │   ├── stripe_adapter.py     # Stripe implementation
│   │   └── braintree_adapter.py  # Braintree implementation
│   └── misc/                     # Utility modules
├── tests/                        # 400+ unit tests
├── docs/                         # Detailed documentation
└── pyproject.toml
```

## Development

```bash
# Install with dev dependencies
pip install -e ".[dev,test,postgres]"

# Run tests
pytest

# Run with coverage
pytest --cov=velocity --cov-report=html

# Run specific test file
pytest tests/test_connection_pool.py -v
```

## License

MIT
