oarepo-s3
S3 file storage support for Invenio.
This package, built on top of the invenio-s3 library, offers integration with any AWS S3 REST API compatible object storage backend. Beyond what invenio-s3 provides, it minimizes processing of file requests on the Invenio server side and accesses the S3 storage backend directly wherever possible (neither multipart file uploads nor downloads are processed by the Invenio server itself).
Installation
To start using this library:
- Install the following packages in your project's venv:
    git clone https://github.com/CESNET/s3-client
    cd s3-client
    poetry install
    pip install oarepo-s3
- Create an S3 account and bucket on your S3 storage provider of choice.
- Put the S3 access configuration into your Invenio server config (e.g. invenio.cfg):
    INVENIO_S3_TENANT=None
    INVENIO_S3_ENDPOINT_URL='https://s3.example.org'
    INVENIO_S3_ACCESS_KEY_ID='your_access_key'
    INVENIO_S3_SECRET_ACCESS_KEY='your_secret_key'
- Create an Invenio files location targeting the S3 bucket:
    invenio files location --default 'default-s3' s3://oarepo-bucket
Usage
To use this library as an Invenio files storage in your projects, put the following into your Invenio server config:
    FILES_REST_STORAGE_FACTORY = 'oarepo_s3.storage.s3_storage_factory'
This storage overrides the save() method of the invenio-s3 storage and adds support for direct S3 multipart uploads. All other functionality is handled by the underlying invenio-s3 storage library.
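For context, an Invenio files storage factory is simply a callable that Invenio-Files-REST invokes to obtain a storage object for a file. The sketch below illustrates the pattern only; all class and parameter names are placeholders, not the actual oarepo-s3 implementation (which lives at oarepo_s3.storage.s3_storage_factory):

```python
# Illustrative sketch of the storage-factory pattern used by
# Invenio-Files-REST. All names here are placeholders.

class PlaceholderS3Storage:
    """Stand-in for an S3-backed storage implementation."""
    def __init__(self, uri):
        self.uri = uri


class PlaceholderLocalStorage:
    """Stand-in for a local filesystem storage implementation."""
    def __init__(self, uri):
        self.uri = uri


def storage_factory(file_uri):
    """Return a storage object appropriate for the file's location URI."""
    if file_uri.startswith('s3://'):
        return PlaceholderS3Storage(file_uri)
    return PlaceholderLocalStorage(file_uri)
```

The point of the indirection is that the same Invenio deployment can serve files from several locations; the factory picks the right backend per file based on its location URI.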
Direct multipart upload
To create a direct multipart upload to the S3 backend, provide an instance of MultipartUpload instead of the usual stream when assigning a file to a record, e.g.:
from oarepo_s3.api import MultipartUpload

files = record.files  # FilesIterator of a Record instance
mu = MultipartUpload(key='filename',
                     base_uri=files.bucket.location.uri,
                     expires=3600,
                     size=1024 * 1024 * 1000,  # total file size in bytes
                     part_size=None,
                     # completion resources as registered in blueprints, see below
                     complete_url='/records/1/files/filename/complete-multipart',
                     abort_url='/records/1/files/filename/abort-multipart')

# Assigning a MultipartUpload to the FilesIterator triggers the multipart
# upload creation on the S3 storage backend.
files['filename'] = mu
This configures the passed-in MultipartUpload instance with all the information an uploader client needs to process and complete the upload. The multipart upload session configuration is available in the MultipartUpload.session field.
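For a sense of how the size and part_size parameters interact: when part_size is None, a part size must be chosen that keeps the upload within S3's limits (the 1.0.3 changelog notes the library can pick it dynamically). The helper below is a hypothetical illustration using the standard S3 constraints (5 MiB minimum part size, 10,000 parts maximum); it is not the library's actual algorithm:

```python
import math

# Standard AWS S3 multipart constraints (not oarepo-s3 configuration values)
MIN_PART_SIZE = 5 * 1024 * 1024  # 5 MiB minimum for every part but the last
MAX_PARTS = 10_000               # at most 10,000 parts per multipart upload


def choose_part_size(total_size, part_size=None):
    """Return a part size: the caller's choice if given, otherwise the
    smallest power-of-two multiple of MIN_PART_SIZE that fits the file
    into at most MAX_PARTS parts."""
    if part_size is not None:
        return part_size
    size = MIN_PART_SIZE
    while math.ceil(total_size / size) > MAX_PARTS:
        size *= 2
    return size


total = 1024 * 1024 * 1000           # 1000 MiB, as in the example above
part = choose_part_size(total)       # 5 MiB suffices: only 200 parts needed
num_parts = math.ceil(total / part)
```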
To be able to complete or abort an ongoing multipart upload after an uploader client has finished uploading all the parts to the S3 backend, register the resources provided by oarepo_s3.views in the app blueprints:
from oarepo_s3.views import (MultipartUploadAbortResource,
                             MultipartUploadCompleteResource)

def multipart_actions(code, files, rest_endpoint, extra, is_draft):
    # rest path -> view
    return {
        'files/<key>/complete-multipart':
            MultipartUploadCompleteResource.as_view(
                MultipartUploadCompleteResource.view_name.format(endpoint=code)
            ),
        'files/<key>/abort-multipart':
            MultipartUploadAbortResource.as_view(
                MultipartUploadAbortResource.view_name.format(endpoint=code)
            )
    }
OARepo Records Draft integration
This library works best together with the oarepo-records-draft library. When integrated into draft endpoints, one doesn't need to manually register the completion resources in blueprints; multipart upload creation is also handled automatically.
To set up the drafts integration, just run:
    pip install oarepo-records-draft oarepo-s3
and configure the draft endpoints according to that library's README. Doing so will auto-register the following file API actions on the draft endpoints:
Create multipart upload
POST /draft/records/<pid>/files/?multipart=True
{
"key": "filename.txt",
"multipart_content_type": "text/plain",
"size": 1024
}
Complete multipart upload
POST /draft/records/<pid>/files/<key>/complete-multipart
{
"parts": [{"ETag": <uploaded_part_etag>, "PartNumber": <part_num>}, ...]
}
Abort multipart upload
POST /draft/records/<pid>/files/<key>/abort-multipart
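After uploading each part directly to S3, the client must echo back the ETag that S3 returned for each part number. A small hypothetical helper for assembling the complete-multipart request body (the function name is ours, not part of the library's API):

```python
def build_complete_payload(etags):
    """Build the complete-multipart request body from (part_number, etag)
    pairs collected while uploading parts; S3 requires the parts to be
    listed in ascending PartNumber order."""
    return {
        'parts': [
            {'ETag': etag, 'PartNumber': number}
            for number, etag in sorted(etags)
        ]
    }


# Parts may finish uploading out of order; the helper sorts them by number.
payload = build_complete_payload([(2, '"etag-b"'), (1, '"etag-a"')])
```

The resulting dict can then be POSTed as JSON to the complete-multipart endpoint shown above.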
Tasks
This library provides a task that looks up expired ongoing file uploads that can no longer be completed and removes them from the associated record's bucket. To use this task in your Celery cron schedule, configure it in your Invenio server config like this:
from datetime import timedelta

CELERY_BEAT_SCHEDULE = {
    'cleanup_expired_multipart_uploads': {
        'task': 'oarepo_s3.tasks.cleanup_expired_multipart_uploads',
        'schedule': timedelta(minutes=60),
    },
    ...
}
Copyright (C) 2020 CESNET. oarepo-s3 is free software; you can redistribute it and/or modify it under the terms of the MIT License; see the LICENSE file for more details.
Changes
Version 1.0.3 (released 2020-04-25)
- Allows dynamic part size for multipart uploads.
- Adds new configuration variables to define default part size and maximum number of parts.
Version 1.0.2 (released 2020-02-17)
- Fixes typos on configuration variables and cached properties.
- Adds AWS region name and signature version to configuration.
Version 1.0.1 (released 2019-01-23)
- New configuration variable for URL expiration.
- Enhances file serving.
- Unpins Boto3 library.
- Fixes test suite configuration.
Version 1.0.0 (released 2018-09-19)
- Initial public release.