Python library for uploading (bulk) data to Dataverse
Dataverse Uploader
Python equivalent of the DVUploader written in Java. It complements other Dataverse libraries written in Python and facilitates the upload of files to a Dataverse instance via Direct Upload.
Features
- Parallel direct upload to a Dataverse backend storage
- Files are streamed directly instead of being buffered in memory
- Supports multipart uploads and chunks data accordingly
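Splitting a file into fixed-size parts for a multipart upload can be sketched in plain Python. This is a generic illustration of the chunking idea, not dvuploader's internal implementation; the 8 MiB chunk size is an arbitrary assumption:

```python
import io

def iter_chunks(stream, chunk_size=8 * 1024 * 1024):
    """Yield fixed-size chunks from a binary stream without
    buffering the whole file in memory."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Example: a 20 MiB payload is split into 8 MiB parts.
payload = io.BytesIO(b"x" * (20 * 1024 * 1024))
parts = list(iter_chunks(payload))
print([len(p) for p in parts])  # [8388608, 8388608, 4194304]
```

Because each chunk is read on demand, peak memory stays at one chunk regardless of file size.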
Getting started
To get started with DVUploader, install it via PyPI:
python3 -m pip install dvuploader
or from source:
git clone https://github.com/gdcc/python-dvuploader.git
cd python-dvuploader
python3 -m pip install .
Quickstart
Programmatic usage
To perform a direct upload, you need a running Dataverse instance backed by a cloud storage provider. The following example shows how to upload files to a Dataverse instance: provide the files of interest and call the upload method of a DVUploader instance.
import dvuploader as dv
# Add file individually
files = [
    dv.File(filepath="./small.txt"),
    dv.File(directory_label="some/dir", filepath="./medium.txt"),
    dv.File(directory_label="some/dir", filepath="./big.txt"),
    *dv.add_directory("./data"),  # Add an entire directory
]
DV_URL = "https://demo.dataverse.org/"
API_TOKEN = "XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
PID = "doi:10.70122/XXX/XXXXX"
dvuploader = dv.DVUploader(files=files)
dvuploader.upload(
    api_token=API_TOKEN,
    dataverse_url=DV_URL,
    persistent_id=PID,
    n_parallel_uploads=2,  # Whatever your instance can handle
)
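If you want finer control than adding a whole directory at once, you can collect paths yourself and derive each file's directory_label from its location. The helper below is a hypothetical sketch in plain Python; it only computes (filepath, directory_label) pairs, which you would then wrap in dv.File objects:

```python
from pathlib import Path

def collect_files(root):
    """Return (filepath, directory_label) pairs for every file under
    root, preserving the relative directory structure."""
    root = Path(root)
    pairs = []
    for path in sorted(root.rglob("*")):
        if path.is_file():
            rel_dir = path.parent.relative_to(root)
            # Files at the top level get an empty directory label.
            label = "" if rel_dir == Path(".") else rel_dir.as_posix()
            pairs.append((str(path), label))
    return pairs

# e.g. files = [dv.File(filepath=p, directory_label=d)
#               for p, d in collect_files("./data")]
```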
Command Line Interface
DVUploader ships with a CLI for use outside of scripts. To upload files to a Dataverse instance, provide the files of interest, the persistent identifier of the dataset, and your credentials.
Using arguments
dvuploader my_file.txt my_other_file.txt \
    --pid doi:10.70122/XXX/XXXXX \
    --api-token XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX \
    --dataverse-url https://demo.dataverse.org/
Using a config file
Alternatively, you can supply a config file that contains all necessary information for the uploader. The config file is a JSON/YAML file with the following keys:

- persistent_id: Persistent identifier of the dataset to upload to.
- dataverse_url: URL of the Dataverse instance.
- api_token: API token of the Dataverse instance.
- files: List of files to upload. Each file is a dictionary with the following keys:
  - filepath: Path to the file to upload.
  - directory_label: Optional directory label to upload the file to.
  - description: Optional description of the file.
  - mimetype: Mimetype of the file.
  - categories: Optional list of categories to assign to the file.
  - restrict: Boolean indicating that this is a restricted file. Defaults to False.
In the following example, we upload three files to a Dataverse instance. The first file is uploaded to the root directory of the dataset, while the other two files are uploaded to the directory some/dir.
# config.yml
persistent_id: doi:10.70122/XXX/XXXXX
dataverse_url: https://demo.dataverse.org/
api_token: XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
files:
- filepath: ./small.txt
- filepath: ./medium.txt
directory_label: some/dir
- filepath: ./big.txt
directory_label: some/dir
The config file can then be used as follows:
dvuploader --config-path config.yml
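A quick sanity check of the config before handing it to the CLI can save a failed upload. The validate_config helper below is a hypothetical sketch, not part of dvuploader; it assumes the key names listed above and uses JSON (rather than YAML) to stay within the standard library:

```python
import json

REQUIRED_TOP_LEVEL = {"persistent_id", "dataverse_url", "api_token", "files"}

def validate_config(config):
    """Raise ValueError if a required key is missing; return the config otherwise."""
    missing = REQUIRED_TOP_LEVEL - set(config)
    if missing:
        raise ValueError(f"Config is missing keys: {sorted(missing)}")
    for entry in config["files"]:
        if "filepath" not in entry:
            raise ValueError(f"File entry without 'filepath': {entry}")
    return config

config = json.loads("""
{
  "persistent_id": "doi:10.70122/XXX/XXXXX",
  "dataverse_url": "https://demo.dataverse.org/",
  "api_token": "XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "files": [{"filepath": "./small.txt"}]
}
""")
validate_config(config)  # passes silently
```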
Hashes for dvuploader-0.2.3-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 65282cd54c764c64e1242d4d71811e3a20b386c2529d8f96c339e0a38a1dce16
MD5 | ff36d05cbb7d1c98b645c4752327c34d
BLAKE2b-256 | 89138d042aedfd1f8cbdd01c3c295d4baf4b473d5da371347d6a2dd81524dd52