Virtual filesystem for SQLite to read from and write to S3
Project description
sqlite-s3vfs
Python virtual filesystem for SQLite to read from and write to S3.
No locking is performed, so client code must ensure that writes do not overlap with other writes or reads. If multiple writes happen at the same time, the database will probably become corrupt and data will be lost.
Based on simonwo's gist, and inspired by phiresky's sql.js-httpvfs, dacort's Stack Overflow answer, and michalc's sqlite-s3-query.
How does it work?
sqlite-s3vfs stores the SQLite database in fixed-size blocks, each stored as a separate object in S3. SQLite itself stores data in fixed-size pages and always writes exactly one page at a time. The virtual filesystem translates page reads and writes into block reads and writes. When pages and blocks are the same size, which is the default, each page write results in exactly one block write.
Separate objects are required because S3 does not support partially replacing an object: to change even a single byte, the whole object must be re-uploaded.
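As a rough illustration of this translation, the sketch below maps a write at a given byte offset to block objects. It is a simplified sketch only: the blocks_for_write function and the block key naming are assumptions for clarity, not sqlite-s3vfs's actual internals.
# Illustrative sketch only: maps a write at a byte offset to
# (block key, offset within block, bytes) tuples. The key format is an
# assumption, not necessarily what sqlite-s3vfs uses internally.
def blocks_for_write(offset, data, block_size=4096):
    while data:
        block_index, start = divmod(offset, block_size)
        chunk = data[:block_size - start]
        yield f'block_{block_index:010d}', start, chunk
        offset += len(chunk)
        data = data[len(chunk):]

# A 4096-byte page written at offset 8192 touches exactly one 4096-byte block
print(list(blocks_for_write(8192, b'x' * 4096)))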
Installation
sqlite-s3vfs can be installed from PyPI using pip.
pip install sqlite-s3vfs
This will automatically install boto3, APSW, and any of their dependencies.
Usage
sqlite-s3vfs is an APSW virtual filesystem that requires boto3 for its communication with S3.
import apsw
import boto3
import sqlite_s3vfs
# A boto3 bucket resource
bucket = boto3.Session().resource('s3').Bucket('my-bucket')
# An S3VFS for that bucket
s3vfs = sqlite_s3vfs.S3VFS(bucket=bucket)
# sqlite-s3vfs stores many objects under this prefix
# Note that it's not typical to start a key prefix with '/'
key_prefix = 'my/path/cool.sqlite'
# Connect, insert data, and query
with apsw.Connection(key_prefix, vfs=s3vfs.name) as db:
    cursor = db.cursor()
    cursor.execute('''
        CREATE TABLE foo(x,y);
        INSERT INTO foo VALUES(1,2);
    ''')
    cursor.execute('SELECT * FROM foo;')
    print(cursor.fetchall())
See the APSW documentation for more examples.
Serializing (getting a regular SQLite file out of the VFS)
The bytes corresponding to a regular SQLite file can be extracted with the serialize_iter function, which returns an iterable,
for chunk in s3vfs.serialize_iter(key_prefix=key_prefix):
    print(chunk)
or with serialize_fileobj, which returns a non-seekable file-like object. This can be passed to Boto3's upload_fileobj method to upload a regular SQLite file to S3.
target_obj = boto3.Session().resource('s3').Bucket('my-target-bucket').Object('target/cool.sqlite')
target_obj.upload_fileobj(s3vfs.serialize_fileobj(key_prefix=key_prefix))
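The file-like object from serialize_fileobj can also be copied to a local file, for example with Python's shutil.copyfileobj; the local filename 'cool.sqlite' below is just an example.
import shutil

# Copy the serialized database to a local file without loading it all into memory
with open('cool.sqlite', 'wb') as f:
    shutil.copyfileobj(s3vfs.serialize_fileobj(key_prefix=key_prefix), f)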
Deserializing (getting a regular SQLite file into the VFS)
# Any iterable that yields bytes can be used. In this example, bytes come from
# a regular SQLite file already in S3
source_obj = boto3.Session().resource('s3').Bucket('my-source-bucket').Object('source/cool.sqlite')
bytes_iter = source_obj.get()['Body'].iter_chunks()
s3vfs.deserialize_iter(key_prefix='my/path/cool.sqlite', bytes_iter=bytes_iter)
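The bytes can equally come from a regular SQLite file on local disk, read in chunks; the filename and chunk size below are just examples.
# Deserialize from a local SQLite file, reading it in chunks
def local_bytes_iter(path, chunk_size=65536):
    with open(path, 'rb') as f:
        while chunk := f.read(chunk_size):
            yield chunk

s3vfs.deserialize_iter(key_prefix='my/path/cool.sqlite', bytes_iter=local_bytes_iter('cool.sqlite'))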
Block size and page size
SQLite writes data in pages, which are 4096 bytes by default. sqlite-s3vfs stores data in blocks, which are also 4096 bytes by default. If you change one, you should change the other to match for performance reasons: pages and blocks of the same size mean each page write results in exactly one block write.
s3vfs = sqlite_s3vfs.S3VFS(bucket=bucket, block_size=65536)
with apsw.Connection(key_prefix, vfs=s3vfs.name) as db:
    cursor = db.cursor()
    cursor.execute('''
        PRAGMA page_size = 65536;
    ''')
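The effective page size can be confirmed by querying the same pragma. Note that SQLite only applies a new page_size before the first table is created (or after a VACUUM), so it is best set on a fresh database.
cursor.execute('PRAGMA page_size;')
print(cursor.fetchall())  # Expected: [(65536,)] for a fresh database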
Tests
The tests require the dev dependencies and MinIO started
pip install -e ".[dev]"
./start-minio.sh
can be run with pytest
pytest
and finally MinIO stopped
./stop-minio.sh
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file sqlite_s3vfs-0.0.36.tar.gz.
File metadata
- Download URL: sqlite_s3vfs-0.0.36.tar.gz
- Upload date:
- Size: 6.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.0.0 CPython/3.12.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | bcb8346f34cb15a2030bec3e4a6d0ec9eb18c7e4d2c7a5aa3e938d35d3dce2ad
MD5 | 9a565f5f2c0ae0da9d5d453f9bb0331c
BLAKE2b-256 | a4aca4f8b5bf3d8e07e7d5b15c821796463d94ffbf79424d64efe986a235d481
File details
Details for the file sqlite_s3vfs-0.0.36-py3-none-any.whl.
File metadata
- Download URL: sqlite_s3vfs-0.0.36-py3-none-any.whl
- Upload date:
- Size: 5.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.0.0 CPython/3.12.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | d47021fdd7b1d68fba01781dcdd7ddd12de4bb817d38601cf64b37a5032cb51b
MD5 | e98ff1286346fb27761d2cc843411c28
BLAKE2b-256 | 6d9ed8d1f7b65e7591ef2518465b0622114e02673ef43e2479629332a2dc1bd0