
Velocity-Python

A rapid application development library for Python that eliminates boilerplate between your code and your database. Write business logic, not SQL plumbing.

@engine.transaction
def create_order(tx, customer_email, items):
    customer = tx.table("customers").find({"email": customer_email})
    order = tx.table("orders").insert({
        "customer_id": customer["sys_id"],
        "status": "pending",
        "total": sum(i["price"] for i in items),
    })
    tx.table("order_items").insert_many([
        {"order_id": order["sys_id"], "product": i["name"], "price": i["price"]}
        for i in items
    ])
    return order

No connection management, no cursor juggling, no commit/rollback boilerplate. Velocity handles it all.

Python 3.9+ · MIT License


Why Velocity?

Most Python database libraries fall into two camps:

  1. Heavy ORMs (SQLAlchemy, Django ORM) — powerful but complex. You write Python classes that map to tables, manage sessions, deal with migration frameworks, and learn a large API surface before writing your first query.

  2. Raw drivers (psycopg, sqlite3) — full control, but you're writing SQL strings, managing connections, handling cursors, serializing parameters, and building your own transaction/error-handling patterns from scratch.

Velocity occupies the middle ground: a thin, opinionated layer that gives you the convenience of an ORM with the transparency of raw SQL. Tables are just names. Rows are just dicts. Transactions are just context managers. You don't define models — Velocity discovers your schema at runtime and adapts to it.

Design Principles

| Principle | What It Means |
| --- | --- |
| Convention over configuration | Sensible defaults everywhere. Override only what you need. |
| Dicts in, dicts out | No custom model classes to learn. Rows are dictionaries. |
| Transaction-scoped | Every operation runs inside an explicit transaction. No surprise autocommit. |
| Auto-schema | Tables and columns are created on the fly in development. Locked down in production. |
| Driver-agnostic | PostgreSQL (primary), MySQL, SQLite, SQL Server — same API surface. |
| Lambda-native | Connection pooling, warm-start reuse, and SQS batch handling built in. |
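
To make the "dicts in, dicts out" principle concrete, here is a minimal sketch of the idea using only the stdlib `sqlite3` module — rows cross the API boundary as plain dictionaries, with no model classes. The `insert` and `find` helpers are illustrative, not Velocity's implementation.

```python
import sqlite3

def insert(cur, table, data):
    """Insert a plain dict; column names come from its keys."""
    cols = ", ".join(data)
    marks = ", ".join("?" for _ in data)
    cur.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})",
                list(data.values()))
    return cur.lastrowid

def find(cur, table, where):
    """Look up a single row by a dict of equality conditions; return a dict."""
    clause = " AND ".join(f"{k} = ?" for k in where)
    cur.execute(f"SELECT * FROM {table} WHERE {clause}", list(where.values()))
    row = cur.fetchone()
    return dict(row) if row else None

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # makes rows convertible to dicts
cur = conn.cursor()
cur.execute("CREATE TABLE users (sys_id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
insert(cur, "users", {"name": "Alice", "email": "alice@example.com"})
user = find(cur, "users", {"email": "alice@example.com"})
# user is a plain dict: {"sys_id": 1, "name": "Alice", "email": "alice@example.com"}
```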

Installation

# Core (no database driver — useful for testing or SQLite)
pip install velocity-python

# PostgreSQL (recommended)
pip install velocity-python[postgres]

# With AWS Lambda support
pip install velocity-python[postgres,aws]

# Everything
pip install velocity-python[all]

Available Extras

| Extra | Packages | Use Case |
| --- | --- | --- |
| postgres | psycopg[binary]>=3.2.0 | PostgreSQL connections |
| aws | boto3, requests | Lambda handlers, SQS, Amplify |
| excel | openpyxl | Excel export |
| templates | jinja2 | Template rendering |
| http | requests | HTTP utilities |
| payment | stripe, braintree | Payment processing |
| mysql | mysql-connector-python | MySQL connections |
| sqlserver | python-tds | SQL Server connections |
| all | All of the above | Full install |

Requires Python 3.9+ and uses psycopg v3 (not psycopg2) for PostgreSQL.


Quick Start

1. Connect

from velocity.db.servers.postgres import initialize

# From environment variables (DBHost, DBDatabase, DBUser, DBPassword)
engine = initialize()

# Or explicit config
engine = initialize(config={
    "host": "localhost",
    "dbname": "myapp",
    "user": "postgres",
    "password": "secret",
})
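
A sketch of how environment-based configuration could resolve, using the variable names documented above (`DBHost`, `DBDatabase`, `DBUser`, `DBPassword`). The `resolve_config` helper and its precedence rule (explicit config overrides environment) are illustrative assumptions, not Velocity's source:

```python
import os

def resolve_config(config=None):
    """Merge env-var defaults with an explicit config dict (config wins)."""
    env = {
        "host": os.environ.get("DBHost"),
        "dbname": os.environ.get("DBDatabase"),
        "user": os.environ.get("DBUser"),
        "password": os.environ.get("DBPassword"),
    }
    merged = {k: v for k, v in env.items() if v is not None}
    merged.update(config or {})
    return merged

os.environ["DBHost"] = "localhost"
cfg = resolve_config({"dbname": "myapp"})
# cfg["host"] comes from the environment, cfg["dbname"] from the explicit config
```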

2. Use Transactions

# As a decorator (recommended for Lambda handlers)
@engine.transaction
def get_active_users(tx):
    return tx.table("users").select(where={"active": True}).all()

# As a context manager
with engine.transaction() as tx:
    tx.table("users").insert({"name": "Alice", "email": "alice@example.com"})

3. CRUD Operations

@engine.transaction
def demo(tx):
    users = tx.table("users")

    # Insert
    row = users.insert({"name": "Bob", "email": "bob@example.com"})

    # Read
    user = users.row(row["sys_id"])                    # by primary key
    user = users.find({"email": "bob@example.com"})    # by lookup

    # Update
    user["name"] = "Robert"                            # immediate write-through

    # Delete
    users.delete({"sys_id": row["sys_id"]})
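
The "immediate write-through" behavior above — where assigning to a row key issues an UPDATE right away — can be sketched with a dict subclass over stdlib `sqlite3`. This mimics the documented behavior; it is not Velocity's actual `Row` class:

```python
import sqlite3

class Row(dict):
    """Dict-like row: item assignment writes through to the database."""

    def __init__(self, cur, table, data):
        super().__init__(data)
        self._cur = cur
        self._table = table

    def __setitem__(self, key, value):
        # Issue the UPDATE immediately, then update the in-memory dict.
        self._cur.execute(
            f"UPDATE {self._table} SET {key} = ? WHERE sys_id = ?",
            (value, self["sys_id"]),
        )
        super().__setitem__(key, value)

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (sys_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES ('Bob')")
user = Row(cur, "users", {"sys_id": cur.lastrowid, "name": "Bob"})
user["name"] = "Robert"  # UPDATE hits the database immediately
cur.execute("SELECT name FROM users WHERE sys_id = ?", (user["sys_id"],))
```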

4. Bulk Operations

@engine.transaction
def import_customers(tx, records):
    tx.table("customers").insert_many(records)              # multi-row INSERT
    tx.table("customers").upsert_many(records, pk="email")  # INSERT ... ON CONFLICT UPDATE
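
For a sense of the SQL an `upsert_many`-style call expands to, here is a hedged sketch using `ON CONFLICT ... DO UPDATE` (syntax shared by PostgreSQL and SQLite) with stdlib `sqlite3`. The helper name and signature mirror the example above but are illustrative, not Velocity's internals:

```python
import sqlite3

def upsert_many(cur, table, records, pk):
    """Batch INSERT ... ON CONFLICT (pk) DO UPDATE for a list of dicts."""
    cols = list(records[0])
    updates = ", ".join(f"{c} = excluded.{c}" for c in cols if c != pk)
    sql = (
        f"INSERT INTO {table} ({', '.join(cols)}) "
        f"VALUES ({', '.join('?' for _ in cols)}) "
        f"ON CONFLICT ({pk}) DO UPDATE SET {updates}"
    )
    cur.executemany(sql, [tuple(r[c] for c in cols) for r in records])

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (email TEXT PRIMARY KEY, name TEXT)")
upsert_many(cur, "customers", [{"email": "a@x.com", "name": "Ann"}], pk="email")
upsert_many(cur, "customers", [{"email": "a@x.com", "name": "Anne"}], pk="email")
# the second call updates the existing row instead of raising a conflict error
```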

Documentation

Full documentation is included in the docs/ directory of the source distribution.

| Guide | File | Description |
| --- | --- | --- |
| Database ORM | docs/database.md | Connections, transactions, tables, rows, results, queries, schema management |
| Performance & Optimization | docs/performance.md | Connection pooling, batch operations, query caching, prepared statements, N+1 prevention, observability |
| Async Support | docs/async.md | AsyncTransaction, AsyncTable, AsyncResult, parallel queries with gather() |
| Configuration Reference | docs/configuration.md | All environment variables, engine options, and connection settings |
| AWS Lambda Handlers | docs/aws-handlers.md | LambdaHandler, SqsHandler, auth modes, per-record transactions |
| Payment Processing | docs/payment.md | Stripe and Braintree adapters, payment lifecycle |
| Utilities | docs/utilities.md | Excel export, data conversion, formatting, timers, email parsing |
| Testing Guide | docs/TESTING.md | Running tests, markers, coverage |
| Security | docs/SECURITY.md | Pre-commit hooks, credential scanning |

Architecture

Engine (singleton — survives Lambda warm starts)
├── ConnectionPool (thread-safe, configurable min/max)
└── Transaction (one per request, borrows from pool)
     ├── Table (CRUD, batch ops, schema management)
     │    ├── Row (dict-like, lazy-cache, write-through, batch_update)
     │    └── Result (streaming cursor iteration, transforms)
     ├── View (create, grant, ensure)
     └── Sequence (create, next, current, configure)

Transactions auto-commit on success, auto-rollback on exception. Connections are returned to the pool (or discarded on error). The Engine persists across Lambda invocations, so the pool stays warm.
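
The commit-on-success / rollback-on-exception pattern described above can be sketched with stdlib `sqlite3` and a decorator; Velocity's actual Engine additionally handles pooling and driver differences, so treat this as a minimal illustration:

```python
import sqlite3
from functools import wraps

def transaction(conn):
    """Wrap a function so it runs in a transaction: commit on success,
    rollback on any exception (which is then re-raised)."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            cur = conn.cursor()
            try:
                result = fn(cur, *args, **kwargs)
                conn.commit()
                return result
            except Exception:
                conn.rollback()
                raise
        return wrapper
    return decorator

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.commit()

@transaction(conn)
def add_row(cur, n):
    cur.execute("INSERT INTO t (n) VALUES (?)", (n,))
    if n < 0:
        raise ValueError("negative")

add_row(1)           # committed
try:
    add_row(-1)      # insert happens, then the exception triggers rollback
except ValueError:
    pass
# only the first row survives
```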


Multi-Database Support

| Database | Driver | Status |
| --- | --- | --- |
| PostgreSQL | psycopg[binary]>=3.2.0 | Primary, fully tested |
| MySQL | mysql-connector-python | Supported |
| SQLite | sqlite3 (stdlib) | Supported |
| SQL Server | python-tds | Supported |

# PostgreSQL
from velocity.db.servers.postgres import initialize
engine = initialize()

# MySQL
from velocity.db.servers.mysql import initialize
engine = initialize()

# SQLite
from velocity.db.servers.sqlite import initialize
engine = initialize(config={"database": "myapp.db"})

# SQL Server
from velocity.db.servers.mssql import initialize
engine = initialize()

Project Structure

velocity-python/
├── src/velocity/
│   ├── db/
│   │   ├── core/
│   │   │   ├── engine.py         # Engine, ConnectionPool
│   │   │   ├── transaction.py    # Transaction, query timing, caching
│   │   │   ├── table.py          # Table CRUD, batch ops, schema
│   │   │   ├── row.py            # Row (dict-like ORM object)
│   │   │   ├── result.py         # Result (cursor wrapper, transforms)
│   │   │   ├── async_support.py  # Async versions of core classes
│   │   │   ├── view.py           # View management
│   │   │   ├── sequence.py       # Sequence management
│   │   │   └── decorators.py     # @create_missing, @return_default, etc.
│   │   └── servers/
│   │       ├── postgres/         # PostgreSQL dialect + initializer
│   │       ├── mysql/            # MySQL dialect
│   │       ├── sqlite/           # SQLite dialect
│   │       └── mssql/            # SQL Server dialect
│   ├── aws/
│   │   └── handlers/
│   │       ├── lambda_handler.py # HTTP Lambda handler
│   │       └── sqs_handler.py    # SQS batch handler
│   ├── payment/
│   │   ├── base_adapter.py       # Abstract payment interface
│   │   ├── stripe_adapter.py     # Stripe implementation
│   │   └── braintree_adapter.py  # Braintree implementation
│   └── misc/                     # Utility modules
├── tests/                        # 400+ unit tests
├── docs/                         # Detailed documentation
└── pyproject.toml

Development

# Install with dev dependencies
pip install -e ".[dev,test,postgres]"

# Run tests
pytest

# Run with coverage
pytest --cov=velocity --cov-report=html

# Run specific test file
pytest tests/test_connection_pool.py -v

License

MIT
