Amazon Aurora DSQL dialect for SQLAlchemy

Introduction

The Aurora DSQL dialect for SQLAlchemy provides integration between SQLAlchemy ORM and Aurora DSQL. This dialect enables Python applications to leverage SQLAlchemy's powerful object-relational mapping capabilities while taking advantage of Aurora DSQL's distributed architecture and high availability.

Sample Application

An included sample application in examples/pet-clinic-app shows how to use Aurora DSQL with SQLAlchemy. To run the example, refer to the sample README.

Prerequisites

  • Python 3.10 or higher
  • SQLAlchemy 2.0.0 or higher
  • One of the following drivers:
    • psycopg 3.2.0 or higher
    • psycopg2 2.9.0 or higher

Installation

Install the packages using the commands below:

pip install aurora-dsql-sqlalchemy

# driver installation (in case you opt for psycopg)
pip install "psycopg[binary]"

# driver installation (in case you opt for psycopg2)
pip install psycopg2-binary

Usage

After installation, you can connect to an Aurora DSQL cluster using SQLAlchemy's create_engine:

from sqlalchemy import create_engine
from sqlalchemy.engine.url import URL

url = URL.create(
    "auroradsql+psycopg",
    username="admin",
    host="<CLUSTER_END_POINT>",
    password="<CLUSTER_TOKEN>",
    database="postgres",
    query={
        # (optional) If sslmode is 'verify-full', use the sslrootcert
        # key to set the path to the server root certificate.
        # If no path is provided, the driver looks in the system certs.
        # NOTE: Do not combine sslrootcert with 'sslmode': 'require'
        'sslmode': 'verify-full',
        'sslrootcert': '<ROOT_CERT_PATH>'
    }
)

engine = create_engine(url)

The connection string prefix "auroradsql+psycopg" selects the auroradsql dialect with the psycopg (psycopg3) driver. To use psycopg2 instead, change the prefix to "auroradsql+psycopg2".
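For example, the psycopg2 variant of the URL can be built the same way. The endpoint below is a hypothetical placeholder, not a real cluster:

```python
from sqlalchemy.engine.url import URL

# Same URL shape as above, but targeting the psycopg2 driver.
# The host value is an illustrative placeholder.
url2 = URL.create(
    "auroradsql+psycopg2",
    username="admin",
    host="example-cluster.dsql.us-east-1.on.aws",
    password="<CLUSTER_TOKEN>",
    database="postgres",
    query={"sslmode": "verify-full"},
)

# Render the URL with the password masked.
print(url2.render_as_string(hide_password=True))
```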

Note: Each connection has a maximum duration limit. See the Maximum connection duration time limit on the Cluster quotas and database limits in Amazon Aurora DSQL page.
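Because of that limit, it can help to recycle pooled connections before they reach the maximum duration. The pool settings below are dialect-agnostic; the snippet uses an in-memory SQLite URL purely so it is self-contained, and with Aurora DSQL you would pass the URL built above instead:

```python
from sqlalchemy import create_engine

# Pool tuning sketch: substitute the Aurora DSQL URL built above for the
# illustrative in-memory SQLite URL.
engine = create_engine(
    "sqlite://",
    pool_recycle=45 * 60,   # seconds; pick a value below the DSQL connection limit
    pool_pre_ping=True,     # validate a pooled connection before handing it out
)
```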

Best Practices

Primary Key Generation

SQLAlchemy applications connecting to Aurora DSQL should use UUID primary key columns, since auto-incrementing integer keys (sequences or SERIAL) are not supported in DSQL. The following column definition can be used to define a UUID primary key column.

from sqlalchemy import Column, text
from sqlalchemy.dialects.postgresql import UUID

Column(
    "id",
    UUID(as_uuid=True),
    primary_key=True,
    default=text('gen_random_uuid()')
)

gen_random_uuid() generates a version 4 UUID as the default value.
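Putting it together, a declarative model with such a key might look like the following sketch; the Owner model and its columns are illustrative, not part of the dialect:

```python
from sqlalchemy import Column, String, text
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Owner(Base):
    """Illustrative model using a database-generated UUID primary key."""
    __tablename__ = "owner"

    id = Column(
        UUID(as_uuid=True),
        primary_key=True,
        default=text("gen_random_uuid()"),
    )
    name = Column(String(50), nullable=False)
```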

Dialect Features and Limitations

  • Column Metadata: The dialect fixes a "datatype json not supported" error raised when reflecting column metadata through SQLAlchemy's metadata APIs.

  • Foreign Keys: Aurora DSQL does not support foreign key constraints. The dialect disables these constraints, but be aware that referential integrity must be maintained at the application level.

  • Index Creation: Aurora DSQL does not support CREATE INDEX or CREATE UNIQUE INDEX commands. The dialect instead uses CREATE INDEX ASYNC and CREATE UNIQUE INDEX ASYNC commands. See the Asynchronous indexes in Aurora DSQL page for more information.

    The following parameters can be used to customize index creation:

    • auroradsql_include - specifies which columns to include in an index by using the INCLUDE clause:

      Index(
          "include_index",
          table.c.id,
          auroradsql_include=['name', 'email']
      )
      

      Generated SQL output:

      CREATE INDEX ASYNC include_index ON table (id) INCLUDE (name, email)
      
    • auroradsql_nulls_not_distinct - controls how NULL values are treated in unique indexes:

      Index(
          "idx_name",
          table.c.column,
          unique=True,
          auroradsql_nulls_not_distinct=True
      )
      

      Generated SQL output:

      CREATE UNIQUE INDEX ASYNC idx_name ON table (column) NULLS NOT DISTINCT
      
  • Index Interface Limitation: NULLS FIRST | LAST - SQLAlchemy's Index() interface does not provide a way to specify the sort order of NULL and non-NULL values (the default is NULLS LAST). If NULLS FIRST is required, refer to the syntax specified in Asynchronous indexes in Aurora DSQL and execute the corresponding SQL statement directly through SQLAlchemy.

  • Psycopg (psycopg3) support: When connecting to DSQL using the default postgresql dialect with psycopg, an unsupported SAVEPOINT error occurs. The DSQL dialect addresses this issue by disabling SAVEPOINT usage on connect.
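For the NULLS FIRST case above, one approach is to build the asynchronous index DDL yourself and execute it directly. The table and column names here are hypothetical:

```python
from sqlalchemy import text

# Hand-written asynchronous index DDL; "pet" and "name" are placeholder
# identifiers. Execute the statement with your engine, e.g.:
#     with engine.begin() as conn:
#         conn.execute(ddl)
ddl = text(
    "CREATE INDEX ASYNC idx_pet_name ON pet (name DESC NULLS FIRST)"
)
```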

Developer instructions

Instructions on how to build and test the dialect are available in the Developer Instructions.

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.
