S3 file storage support for Invenio.


oarepo-s3


This package, built on top of the invenio-s3 library, offers integration with any AWS S3 REST API compatible object storage backend. Beyond what invenio-s3 provides, it minimizes processing of file requests on the Invenio server side and uses direct access to the S3 storage backend wherever possible (neither multipart file uploads nor downloads pass through the Invenio server itself).

Installation

To start using this library

  1. Install the following packages in your project's venv:

    git clone https://github.com/CESNET/s3-client
    cd s3-client
    poetry install
    pip install oarepo-s3
    
  2. Create an S3 account and bucket on your S3 storage provider of choice.

  3. Put the S3 access configuration into your Invenio server config (e.g. invenio.cfg):

    INVENIO_S3_TENANT=None
    INVENIO_S3_ENDPOINT_URL='https://s3.example.org'
    INVENIO_S3_ACCESS_KEY_ID='your_access_key'
    INVENIO_S3_SECRET_ACCESS_KEY='your_secret_key'
    
  4. Create an Invenio files location targeting the S3 bucket:

    invenio files location --default 'default-s3' s3://oarepo-bucket
    

Usage

To use this library as an Invenio Files storage in your projects, put the following into your Invenio server config:

FILES_REST_STORAGE_FACTORY = 'oarepo_s3.storage.s3_storage_factory'

This storage overrides the save() method of the invenio-s3 storage and adds support for direct S3 multipart uploads. All other functionality is handled by the underlying invenio-s3 storage library.

Direct multipart upload

To create a direct multipart upload to the S3 backend, provide an instance of MultipartUpload instead of the usual stream when assigning a file to a record, e.g.:

from oarepo_s3.api import MultipartUpload
files = record.files  # Record instance FilesIterator
mu = MultipartUpload(key='test',
                     base_uri=files.bucket.location.uri,
                     expires=3600,
                     size=1024*1024*1000)  # total file size in bytes

# Assigning a MultipartUpload to the FilesIterator here will
# trigger the multipart upload creation on the S3 storage backend.
files['test'] = mu

This will configure the passed-in MultipartUpload instance with all the information an uploader client needs to process and complete the upload. The multipart upload session configuration is available in the MultipartUpload.session field.
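The exact structure of MultipartUpload.session is library-specific, but whatever client consumes it must split the file into numbered parts. As an illustration only, here is a minimal sketch of planning part boundaries for a file of the size used above; the 16 MiB part size is a hypothetical choice, not a documented default of oarepo-s3:

```python
import math

def plan_parts(total_size: int, part_size: int = 16 * 1024 * 1024):
    """Split a file of total_size bytes into (part_number, offset, length).

    S3 multipart uploads number parts starting at 1; for simplicity every
    part here has the same size except possibly the last one.
    """
    num_parts = math.ceil(total_size / part_size)
    parts = []
    for part_num in range(1, num_parts + 1):
        offset = (part_num - 1) * part_size
        length = min(part_size, total_size - offset)
        parts.append((part_num, offset, length))
    return parts
```

For the 1000 MiB file in the example above, this yields 62 full 16 MiB parts plus one final 8 MiB part.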

To be able to complete or abort an ongoing multipart upload after an uploader client finishes uploading all parts to the S3 backend, register the resources provided in oarepo_s3.views in the app blueprints:

Create multipart upload

  • files/<key>/?multipart=True

Create presigned URL for part upload

  • files/<key>/<upload_id>/<part_num>/presigned

List uploaded parts for a given multipart upload

  • files/<key>/<upload_id>/parts

Complete a multipart upload

  • files/<key>/<upload_id>/complete

Abort an ongoing multipart upload

  • files/<key>/<upload_id>/abort
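A client walking through the routes above builds the same URL skeleton for each step. A small sketch of assembling those paths (the leading files/ segment mirrors the routes as listed; the actual prefix depends on where the blueprint is mounted in your app):

```python
def multipart_urls(key: str, upload_id: str):
    """Build the per-upload endpoint paths listed above.

    Returns the parts/complete/abort paths directly, plus a callable for
    the per-part presigned-URL path, which also needs a part number.
    """
    base = f"files/{key}/{upload_id}"
    return {
        "presigned": lambda part_num: f"{base}/{part_num}/presigned",
        "parts": f"{base}/parts",
        "complete": f"{base}/complete",
        "abort": f"{base}/abort",
    }
```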

OARepo Records Draft integration

This library works best together with the oarepo-records-draft library. When integrated into draft endpoints, one does not need to manually register the completion resources in blueprints; multipart upload creation is also handled automatically.

To set up the drafts integration, run:

pip install oarepo-records-draft oarepo-s3

and configure draft endpoints according to that library's README. Doing so will auto-register the following file API actions on the draft endpoints:

Create multipart upload

POST /draft/records/<pid>/files/?multipart=True
{
  "key": "filename.txt",
  "ctype": "text/plain",
  "size": 1024
}

Get presigned URL for part upload

GET /draft/records/<pid>/files/<key>/<upload_id>/<part_num>/presigned

List uploaded parts for a given multipart upload

GET /draft/records/<pid>/files/<key>/<upload_id>/parts

Complete multipart upload

POST /draft/records/<pid>/files/<key>/<upload_id>/complete
{
  "parts": [{"ETag": <uploaded_part_etag>, "PartNumber": <part_num>},...]
}

Abort multipart upload

DELETE /draft/records/<pid>/files/<key>/<upload_id>/abort
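The complete call above expects one ETag per uploaded part; in S3, the ETag for each part is returned in the response header of that part's upload. As a hedged sketch of assembling the request body (the list of ETag strings is whatever your uploader recorded, in part order):

```python
def build_complete_payload(etags):
    """Assemble the JSON body for the .../complete call shown above.

    `etags` is a list of ETag strings in part order; S3 part numbers
    start at 1, so enumerate from 1.
    """
    return {
        "parts": [
            {"ETag": etag, "PartNumber": part_num}
            for part_num, etag in enumerate(etags, start=1)
        ]
    }
```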

Tasks

This library provides a task that looks up expired ongoing file uploads that can no longer be completed and removes them from the associated record's bucket. To run this task on a Celery cron schedule, configure it in your Invenio server config like this:

CELERY_BEAT_SCHEDULE = {
    'cleanup_expired_multipart_uploads': {
        'task': 'oarepo_s3.tasks.cleanup_expired_multipart_uploads',
        'schedule': timedelta(minutes=60),
    },
    ...
}

Copyright (C) 2020 CESNET. oarepo-s3 is free software; you can redistribute it and/or modify it under the terms of the MIT License; see the LICENSE file for more details.

Changes

Version 1.0.3 (released 2020-04-25)

  • Allows dynamic part size for multipart uploads.
  • Adds new configuration variables to define default part size and maximum number of parts.

Version 1.0.2 (released 2020-02-17)

  • Fixes typos in configuration variables and cached properties.
  • Adds AWS region name and signature version to configuration.

Version 1.0.1 (released 2019-01-23)

  • New configuration variable for URL expiration.
  • Enhances file serving.
  • Unpins Boto3 library.
  • Fixes test suite configuration.

Version 1.0.0 (released 2018-09-19)

  • Initial public release.
