Store ZODB blobs in S3-compatible object storage

zodb-s3blobs

Features

  • Wraps any ZODB base storage (FileStorage, RelStorage, MappingStorage, ...)
  • Works with any S3-compatible service (AWS S3, MinIO, Ceph, DigitalOcean Spaces)
  • Local LRU filesystem cache for fast reads
  • Full ZODB two-phase commit integration (transactional safety)
  • ZConfig integration for zope.conf configuration
  • Supports MVCC storages (new_instance())
  • Garbage collection of orphaned S3 objects during pack()

Installation

pip install zodb-s3blobs

Configuration via zope.conf

Add %import zodb_s3blobs and use the <s3blobstorage> section wrapping any base storage.

With FileStorage

%import zodb_s3blobs

<zodb_db main>
    <s3blobstorage>
        bucket-name my-zodb-blobs
        s3-endpoint-url http://minio:9000
        s3-access-key $S3_ACCESS_KEY
        s3-secret-key $S3_SECRET_KEY
        cache-dir /var/cache/zodb-s3-blobs
        cache-size 2GB
        <filestorage>
            path /var/lib/zodb/Data.fs
        </filestorage>
    </s3blobstorage>
</zodb_db>

ZConfig expands $VARIABLE and ${VARIABLE} from the process environment. For production, consider omitting s3-access-key and s3-secret-key entirely and relying on the boto3 credential chain (IAM roles, instance profiles, ~/.aws/credentials, or the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables).

With RelStorage

When wrapping RelStorage, zodb-s3blobs overrides RelStorage's blob handling. Blobs go to S3 instead of the blob_chunk table. RelStorage still handles object data (pickles) in the RDBMS.

%import zodb_s3blobs

<zodb_db main>
    <s3blobstorage>
        bucket-name my-zodb-blobs
        cache-dir /var/cache/zodb-s3-blobs
        cache-size 2GB
        <relstorage>
            <postgresql>
                dsn dbname='zodb' user='zodb' host='localhost'
            </postgresql>
        </relstorage>
    </s3blobstorage>
</zodb_db>

Configuration Reference

Parameter            Default     Description
bucket-name          (required)  S3 bucket name
s3-prefix            ""          Key prefix in bucket
s3-endpoint-url      None        Endpoint URL for MinIO, Ceph, etc.
s3-region            None        AWS region
s3-access-key        None        Uses boto3 credential chain if omitted; use $ENV_VAR substitution, never hardcode credentials
s3-secret-key        None        Uses boto3 credential chain if omitted; use $ENV_VAR substitution, never hardcode credentials
s3-use-ssl           true        Whether to use SSL for S3 connections
s3-addressing-style  auto        S3 addressing style: path, virtual, or auto
s3-sse-customer-key  None        Base64-encoded 256-bit key for SSE-C encryption; requires SSL
cache-dir            (required)  Local cache directory path
cache-size           1GB         Maximum local cache size

How It Works

zodb-s3blobs uses the same proxy/wrapper pattern as ZODB's built-in BlobStorage. It wraps any base storage via __getattr__ and explicitly overrides all blob methods so they always take precedence.
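The delegation pattern can be sketched in a few lines. This is illustrative only: the class name and the loadBlob body are invented for the example, not taken from the package's source.

```python
class S3BlobStorageSketch:
    """Illustrative proxy: blob methods defined on this class always win;
    every other attribute falls through to the wrapped base storage."""

    def __init__(self, base):
        self._base = base

    def loadBlob(self, oid, serial):
        # Explicit override: the real storage reads from the local cache
        # or downloads from S3 here. This stub just tags the call.
        return f"s3:{oid}:{serial}"

    def __getattr__(self, name):
        # __getattr__ is only consulted for attributes NOT found on the
        # instance or class, so the explicit overrides above take precedence.
        return getattr(self._base, name)
```

Calls like wrapper.getName() or wrapper.load() reach the base storage untouched, while every blob method is intercepted.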

Two-Phase Commit Flow

  1. storeBlob: Object data (pickle) is stored in the base storage. The blob file is staged locally.
  2. tpc_vote: Staged blobs are uploaded to S3. If any upload fails, the transaction aborts cleanly.
  3. tpc_finish: No S3 operations (this method must not fail per ZODB contract). Staged files are moved into the local cache.
  4. tpc_abort: Uploaded S3 objects are deleted (best-effort). Local staged files are cleaned up.
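The four steps above can be modeled with a toy class, using an in-memory dict in place of S3 and a list for locally staged files. All names here are invented for illustration; the real storage deals with file handles, OIDs, and transaction objects.

```python
class TwoPhaseBlobSketch:
    """Simplified model of the commit flow: stage, vote (upload),
    finish (cache move only), abort (best-effort cleanup)."""

    def __init__(self):
        self.s3 = {}       # key -> data, standing in for the bucket
        self.staged = []   # (key, data) pairs awaiting upload
        self.cache = {}    # local read cache

    def storeBlob(self, key, data):
        self.staged.append((key, data))   # step 1: stage locally

    def tpc_vote(self):
        for key, data in self.staged:     # step 2: upload; may raise,
            self.s3[key] = data           # which aborts the transaction

    def tpc_finish(self):
        for key, data in self.staged:     # step 3: no S3 calls here,
            self.cache[key] = data        # only the local cache move
        self.staged.clear()

    def tpc_abort(self):
        for key, _data in self.staged:    # step 4: delete anything we
            self.s3.pop(key, None)        # already uploaded (best-effort)
        self.staged.clear()
```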

S3 Key Layout

blobs/{oid_hex}/{tid_hex}.blob

With a configured prefix: {prefix}/blobs/{oid_hex}/{tid_hex}.blob
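The layout can be expressed as a small helper. The function name is hypothetical; the package's internal naming may differ. ZODB OIDs and TIDs are 8-byte big-endian values, hex-encoded here.

```python
def blob_key(oid: bytes, tid: bytes, prefix: str = "") -> str:
    """Build the S3 object key for one blob revision per the layout above."""
    key = f"blobs/{oid.hex()}/{tid.hex()}.blob"
    return f"{prefix}/{key}" if prefix else key
```

For example, blob_key((1).to_bytes(8, "big"), (2).to_bytes(8, "big")) yields "blobs/0000000000000001/0000000000000002.blob".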

Local Cache

The local filesystem cache provides fast reads after the first access. It uses LRU eviction: a background daemon thread removes the oldest files (by access time) when the total size exceeds the configured maximum. The cache is required, since S3 latency makes direct access impractical for ZODB's synchronous access patterns.
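A simplified, single-pass sketch of that eviction logic (the real implementation runs continuously in a background thread; evict_lru is an invented name):

```python
import os

def evict_lru(cache_dir: str, max_bytes: int) -> None:
    """Delete least-recently-accessed cache files until the total
    size fits under max_bytes."""
    entries = []
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)
            entries.append((st.st_atime, st.st_size, path))
    total = sum(size for _atime, size, _path in entries)
    for _atime, size, path in sorted(entries):  # oldest access time first
        if total <= max_bytes:
            break
        os.remove(path)
        total -= size
```

Note that access-time ordering depends on the filesystem actually updating atime; mounts with noatime would degrade this to rough FIFO behavior.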

Garbage Collection

During pack(), the base storage is packed first, then S3 is scanned for keys referencing OIDs that are no longer reachable. Orphaned keys are deleted. This also cleans up any objects left behind by failed abort operations.
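The scan step boils down to comparing the OID segment of each key against the set of OIDs that survived the pack. A sketch under that assumption (find_orphans is an invented name; the real integration also coordinates with the base storage's pack):

```python
def find_orphans(s3_keys, reachable_oid_hexes, prefix=""):
    """Return blob keys whose OID is no longer reachable after pack."""
    base = f"{prefix}/blobs/" if prefix else "blobs/"
    orphans = []
    for key in s3_keys:
        if not key.startswith(base):
            continue  # different prefix or layout: not one of ours
        oid_hex = key[len(base):].split("/", 1)[0]
        if oid_hex not in reachable_oid_hexes:
            orphans.append(key)
    return orphans
```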

S3 Bucket Security

Ensure your S3 bucket has appropriate access controls (Block Public Access enabled, restrictive bucket policy). The minimum IAM policy required by zodb-s3blobs:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:DeleteObject",
            "s3:ListBucket"
        ],
        "Resource": [
            "arn:aws:s3:::BUCKET_NAME",
            "arn:aws:s3:::BUCKET_NAME/*"
        ]
    }]
}

Encryption at Rest (SSE-C)

zodb-s3blobs supports SSE-C (Server-Side Encryption with Customer-Provided Keys). The S3 service encrypts/decrypts data using your key but never stores it. Works with AWS S3, Hetzner Object Storage, MinIO (with KES), and other S3-compatible services.

Warning — AWS SSE-C deprecation (April 2026): AWS will disable SSE-C by default on new S3 buckets starting April 2026. Existing buckets are unaffected. For new buckets, you must explicitly enable SSE-C in the bucket policy, or consider migrating to SSE-KMS. If you receive 403 errors with SSE-C configured, this is the likely cause. See the AWS announcement for details.

Generate a 256-bit key:

python -c "import base64, os; print(base64.b64encode(os.urandom(32)).decode())"

Configure via environment variable:

s3-sse-customer-key $S3_SSE_KEY

Important: If you lose the key, encrypted data is irrecoverable. SSL is required (enforced at startup).

Security note: The SSE-C key is held in process memory for the lifetime of the storage instance. In long-running servers, consider using IAM-based encryption (SSE-KMS) instead if memory exposure is a concern. Python's string handling makes secure memory clearing impractical.
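For reference, the per-request parameters the S3 SSE-C protocol expects look like this. The helper name is hypothetical, and note that boto3 can compute the key MD5 itself when you pass SSECustomerKey, so building it explicitly as shown is optional.

```python
import base64
import hashlib

def sse_c_params(key_b64: str) -> dict:
    """Build SSE-C request parameters from a base64-encoded 256-bit key
    (sketch of what accompanies each S3 request on the wire)."""
    raw = base64.b64decode(key_b64)
    if len(raw) != 32:
        raise ValueError("SSE-C requires a 256-bit (32-byte) key")
    return {
        "SSECustomerAlgorithm": "AES256",
        "SSECustomerKey": key_b64,
        "SSECustomerKeyMD5": base64.b64encode(hashlib.md5(raw).digest()).decode(),
    }
```

These parameters must accompany every request that touches an SSE-C object (PutObject, GetObject, HeadObject), since S3 never stores the key.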

Using with MinIO (dev setup)

Warning: The credentials below are MinIO defaults for local development only. Never use default credentials in production.

# docker-compose.yml
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin

Create the bucket:

mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/zodb-blobs

Development

git clone https://github.com/bluedynamics/zodb-s3blobs.git
cd zodb-s3blobs
uv venv
uv pip install -e ".[test]"
pytest

For reproducible deployments (production), pin dependencies with a lockfile:

uv pip compile pyproject.toml -o requirements.txt
uv pip install -r requirements.txt

Source Code and Contributions

The source code is managed in a Git repository hosted on GitHub, where issues can be reported as well.

We'd be happy to see many forks and pull requests to make this package even better. We welcome AI-assisted contributions, but expect every contributor to fully understand and be able to explain the code they submit. Please don't send bulk auto-generated pull requests.

Maintainers are Jens Klein and the BlueDynamics Alliance developer team. We appreciate any contribution; if a release on PyPI is needed, please contact one of us. We also offer commercial support for training, coaching, integration, or adaptations.

License

ZPL-2.1

