
SQLite for Serverless Computing


SQLAlchemy-CloudSQLite

Reuse your SQLite database in serverless applications and reduce prototyping costs.

With this package, you can synchronize your local SQLite database across multiple serverless instances through a cloud storage backend. Different storage vendors can be used, and new ones are easy to add. Every commit uploads the database file directly to the configured storage; to reduce API calls when reading, a cache_duration can be configured.

Info:

Parallel write access from multiple instances can lead to data loss, so the package is best suited to read-only applications. Because the database is transferred as a whole file, large databases can cause latency problems.
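
As a rough, hypothetical sketch of the caching idea only (not the package's actual internals): the local copy of the database is re-downloaded only once it is older than the configured cache_duration.

import os
import time

def needs_refresh(local_path, cache_duration):
    # Hypothetical helper: download the database file again only when the
    # locally cached copy is missing or older than cache_duration.
    if not os.path.exists(local_path):
        return True
    return (time.time() - os.path.getmtime(local_path)) > cache_duration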

Available Integrations

  • AWS S3

Example

Install

pip install sqlalchemy_cloudsqlite

Usage

from sqlalchemy import create_engine
import sqlalchemy_cloudsqlite  # import before creating the engine so the cloudsqlite URL scheme is available

# Use cloudsqlite:// where you would normally use sqlite://
SQLALCHEMY_DATABASE_URI = "cloudsqlite:///quickstart.sqlite"
...
engine = create_engine(SQLALCHEMY_DATABASE_URI)
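
Since every commit uploads the database file to the configured storage, any regular SQLAlchemy workflow applies. A minimal sketch continuing from the engine created above (the notes table and SQL statements are illustrative, not part of the package):

from sqlalchemy import text

with engine.begin() as conn:  # transaction commits when the block exits
    conn.execute(text("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"))
    conn.execute(text("INSERT INTO notes (body) VALUES (:body)"), {"body": "hello"})
# After the commit, the database file is uploaded to the configured storage.

with engine.connect() as conn:  # reads go through the local copy (see cache_duration below)
    rows = conn.execute(text("SELECT body FROM notes")).fetchall()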

Configuration

import json
import os

# cache_duration reduces read API calls by reusing the locally cached
# database file; storage selects the backend and its settings.
os.environ['config'] = json.dumps(
    {
        'cache_duration': 60,
        'storage': {
            'S3': {'bucket_name': '<BUCKET_NAME>'}
        }
    }
)

Then provide your credentials for S3 access via environment variables or an IAM policy.
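
If you are not relying on an attached IAM role, one common approach is the standard AWS SDK environment variables. A minimal sketch, assuming the S3 integration uses boto3's default credential chain; the values are placeholders:

import os

# Standard AWS SDK / boto3 credential environment variables (placeholder values).
os.environ['AWS_ACCESS_KEY_ID'] = '<ACCESS_KEY_ID>'
os.environ['AWS_SECRET_ACCESS_KEY'] = '<SECRET_ACCESS_KEY>'
os.environ['AWS_DEFAULT_REGION'] = '<AWS_REGION>'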

About

This project is based on the following research:

