
A library that extends SQLAlchemy and makes CRUD easier.



Welcome to sqlalchemy_mate Documentation

A sweet syntactic sugar library that simplifies writing sqlalchemy code.

📔 Full documentation is HERE


Read Database Credential Safely

Putting your database connection credentials in your source code is always a BAD IDEA.

sqlalchemy_mate provides several options for loading credentials easily.

If you want to read db secrets from other sources, such as Bash scripts with lots of export DB_PASSWORD="xxx" lines, AWS Secrets Manager, or AWS Key Management Service (KMS), please take a look at my other project pysecret.

From json file

You can put your credentials in a json file somewhere in your $HOME directory and let sqlalchemy_mate smartly load from it.

You need to specify two things:

  1. the path to the json file.

  2. the field path to the data. If your connection info is nested deeply in the json, you can use a dot notation json path to point to it.

content of json:

    "credentials": {
        "db1": {
            "host": "",
            "port": 1234,
            "database": "test",
            "username": "admin",
            "password": "admin",
        "db2": {


from sqlalchemy_mate.api import EngineCreator

ec = EngineCreator.from_json(
    json_file="/path/to/credentials.json",  # path to the json file
    json_path="credentials.db1",  # dot notation json path
)
engine = ec.create_postgresql_pg8000()
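Under the hood, a dot notation json path is just a walk through nested dict keys. A minimal sketch of that lookup (the helper name resolve_json_path is made up here for illustration; it is not the library's actual implementation):

```python
def resolve_json_path(data: dict, json_path: str) -> dict:
    """Walk nested dict keys following a dot-separated path like 'credentials.db1'."""
    node = data
    for part in json_path.split("."):
        node = node[part]
    return node

# illustrative config, mirroring the json example above
config = {"credentials": {"db1": {"host": "localhost", "port": 1234}}}
creds = resolve_json_path(config, "credentials.db1")
# creds == {"host": "localhost", "port": 1234}
```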

Default data fields are host, port, database, username, password.

If your json schema is different, you need to add the key_mapping to specify the field name mapping:

ec = EngineCreator.from_json(
    json_file="/path/to/credentials.json",
    json_path="credentials.db1",
    key_mapping={
        "host": "your-host-field",
        "port": "your-port-field",
        "database": "your-database-field",
        "username": "your-username-field",
        "password": "your-password-field",
    },
)
From $HOME/.db.json

You can put lots of database connection info in a .db.json file in your $HOME directory.

from sqlalchemy_mate.api import EngineCreator

ec = EngineCreator.from_home_db_json(identifier="db1")
engine = ec.create_postgresql_psycopg2()

$HOME/.db.json assumes a flat json schema, but you can use a dot notation json path as the identifier to adapt to any json schema:

    "identifier1": {
        "host": "",
        "port": 1234,
        "database": "test",
        "username": "admin",
        "password": "admin",
    "identifier2": {
From json file on AWS S3

This is similar to from_json, but the json file is stored on AWS S3.

from sqlalchemy_mate.api import EngineCreator
ec = EngineCreator.from_s3_json(
    bucket_name="my-bucket", key="db.json",
)
engine = ec.create_redshift()
From Environment Variable

You can put your credentials in environment variables. For example:

export DB_DEV_HOST="..."
export DB_DEV_PORT="..."
export DB_DEV_DATABASE="..."
export DB_DEV_USERNAME="..."
export DB_DEV_PASSWORD="..."
from sqlalchemy_mate.api import EngineCreator
ec = EngineCreator.from_env(prefix="DB_DEV")
engine = ec.create_redshift()
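The naming convention maps each field to <prefix>_<FIELD>. A minimal sketch of that lookup (an illustrative helper, not the library's code; the example values are made up):

```python
import os

def load_credentials_from_env(prefix: str) -> dict:
    """Collect the standard DB fields from variables named <prefix>_<FIELD>."""
    fields = ["host", "port", "database", "username", "password"]
    return {f: os.environ[f"{prefix}_{f.upper()}"] for f in fields}

# assumed example values, as if set by the export lines above
os.environ.update({
    "DB_DEV_HOST": "localhost",
    "DB_DEV_PORT": "5432",
    "DB_DEV_DATABASE": "test",
    "DB_DEV_USERNAME": "admin",
    "DB_DEV_PASSWORD": "secret",
})
creds = load_credentials_from_env("DB_DEV")
# creds["host"] == "localhost"
```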

If you want to read database credentials safely in the cloud, for example on AWS EC2 or AWS Lambda, you can use AWS KMS to decrypt your credentials:

# leave aws_profile=None if you are on cloud
ec = EngineCreator.from_env(prefix="DB_DEV", kms_decrypt=True, aws_profile="xxx")
engine = ec.create_redshift()

Smart Insert

In bulk insert, if some rows have a primary key conflict, the classic solution is:

with engine.connect() as conn:
    for row in data:
        try:
            conn.execute(table.insert(), row)
        except sqlalchemy.exc.IntegrityError:
            pass

This amounts to one-by-one insert, which is super slow.

sqlalchemy_mate uses a smart_insert strategy: it tries bulk inserts with progressively smaller chunks, which have a higher probability of succeeding. As a result, the total number of commits is greatly reduced.
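The divide-and-conquer idea can be sketched in plain Python (an illustration of the strategy only, not sqlalchemy_mate's actual implementation; an in-memory set of primary keys stands in for the table, and each successful bulk insert counts as one commit):

```python
class IntegrityError(Exception):
    """Stands in for sqlalchemy.exc.IntegrityError."""

def smart_insert(table: set, rows: list, minimal_size: int = 1) -> int:
    """Try the whole batch in one commit; on a primary-key conflict,
    split the batch in half and retry each half. Only chunks at or below
    minimal_size give up on the conflicting row."""
    def bulk(chunk):
        # simulate an all-or-nothing bulk INSERT that raises on any duplicate
        if any(r in table for r in chunk):
            raise IntegrityError
        table.update(chunk)

    stack = [rows]
    commits = 0
    while stack:
        chunk = stack.pop()
        try:
            bulk(chunk)
            commits += 1
        except IntegrityError:
            if len(chunk) <= minimal_size:
                continue  # a duplicate row: skip it
            mid = len(chunk) // 2
            stack.append(chunk[:mid])
            stack.append(chunk[mid:])
    return commits

table = {3}  # row with primary key 3 already exists
commits = smart_insert(table, list(range(10)))
# 4 commits instead of ~10 one-by-one attempts
```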

With sql expression:

from sqlalchemy_mate.api import inserting

engine = create_engine(...)
t_users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

# lots of data
data = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}, ...]
# the magic function
inserting.smart_insert(engine, t_users, data)

With ORM:

from sqlalchemy_mate.api import ExtendedBase

Base = declarative_base()

class User(Base, ExtendedBase):  # inherit from ExtendedBase
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# lots of users
data = [User(id=1, name="Alice"), User(id=2, name="Bob"), ...]
# the magic method
User.smart_insert(engine_or_session, data)  # That's it

Smart Update / Upsert

Automatically update rows by primary key.

# in SQL expression
from sqlalchemy_mate.api import updating

data = [{"id": 1, "name": "Alice}, {"id": 2, "name": "Bob"}, ...]
updating.update_all(engine, table, data)
updating.upsert_all(engine, table, data)

# in ORM
data = [User(id=1, name="Alice"), User(id=2, name="Bob"), ...]
User.update_all(engine_or_session, data)
User.upsert_all(engine_or_session, data)
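The difference in semantics can be sketched in plain Python (a dict keyed by primary key stands in for the table; this assumes update_all, like a SQL UPDATE, simply matches nothing for unknown keys, while upsert_all inserts them; it is not the library's implementation):

```python
def update_all(table: dict, rows):
    """UPDATE by primary key: rows whose key is absent match nothing."""
    for row in rows:
        if row["id"] in table:
            table[row["id"]].update(row)

def upsert_all(table: dict, rows):
    """Update when the primary key exists, insert otherwise."""
    for row in rows:
        if row["id"] in table:
            table[row["id"]].update(row)
        else:
            table[row["id"]] = dict(row)

db = {1: {"id": 1, "name": "Alice"}}
update_all(db, [{"id": 2, "name": "Bob"}])   # no row with id=2: nothing happens
upsert_all(db, [{"id": 1, "name": "Alicia"}, {"id": 2, "name": "Bob"}])
# db now holds Alicia (updated) and Bob (inserted)
```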


sqlalchemy_mate is released on PyPI, so all you need is:

$ pip install sqlalchemy_mate

To upgrade to the latest version:

$ pip install --upgrade sqlalchemy_mate
