A Virtualitics S3 Utility Library with Local File System Mirror.

Virt-S3 🪣

A Virtualitics utility package to handle file I/O with Object Storage Systems like AWS S3 and Minio.

With versatility in mind, virt-s3 was designed to be a relatively lightweight package that can be used either independently or in conjunction with the larger Virtualitics AI Platform. The virt-s3 module includes two primary submodules, s3 and fs, each of which implements the virt-s3 API for its target system: S3/S3-compatible object stores or the local file system.
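
Because both submodules expose the same API, the same calling code can target either backend. A minimal sketch of that switch, assuming the LOCAL_FS environment variable selects the backend as described in Getting Started, and that file_exists accepts the same params/client keywords as the examples below:

import os
import virt_s3

# select the backend before resolving params:
# LOCAL_FS=1 -> local file system mirror, LOCAL_FS=0 -> S3/Minio
os.environ["LOCAL_FS"] = "1"
params = virt_s3.get_default_params()

# identical code path regardless of the active backend
with virt_s3.SessionManager(params=params) as session:
    print(virt_s3.file_exists("fixture/data/data.csv", params=params, client=session))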

We hope that you can use it, break it, and even help us improve it!

Table of Contents

  1. Prerequisites
  2. Example Usage
  3. Architecture
  4. Getting Started
  5. Code Documentation

Prerequisites

  • Requires python>=3.11
  • Local file system features currently support only POSIX / pathing (Linux, macOS, etc.)
    • Support for Windows \ pathing [Coming Soon]

Example Usage

Creating a Bucket

import virt_s3

# ENV variable `LOCAL_FS` = '1' or '0' (local file system or S3)
params = virt_s3.get_default_params()

# use context manager to manage session scope
with virt_s3.SessionManager(params=params) as session:
    virt_s3.create_bucket('test-bucket', params=params, client=session)

Uploading a File

import virt_s3

# ENV variable `LOCAL_FS` = '1' or '0' (local file system or S3)
params = virt_s3.get_default_params()

# path to locally saved csv file
fpath = "/tmp/data.csv"

# use context manager to manage session scope
with virt_s3.SessionManager(params=params) as session:
    s3_key = "fixture/data/data.csv"
    virt_s3.upload_data(fpath, s3_key, params=params, client=session)
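
Per the API table below, upload_data also accepts in-memory data (bytes or BytesIO) in place of a file path. A minimal sketch, assuming the same data-then-key positional ordering as the file-path form above:

import io
import virt_s3

params = virt_s3.get_default_params()

with virt_s3.SessionManager(params=params) as session:
    # upload an in-memory buffer instead of a file on disk
    buffer = io.BytesIO(b"col_a,col_b\n1,2\n")
    virt_s3.upload_data(buffer, "fixture/data/inline.csv", params=params, client=session)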

Reading a File

import virt_s3
import pandas as pd

# ENV variable `LOCAL_FS` = '1' or '0' (local file system or S3)
params = virt_s3.get_default_params()

# key of the csv file uploaded in the previous example
saved_key = "fixture/data/data.csv"

# use context manager to manage session scope
with virt_s3.SessionManager(params=params) as session:
    data = virt_s3.get_file(saved_key, bytes_io=True, params=params, client=session)
    df = pd.read_csv(data)

Architecture

virt-s3 can be run on a local machine or from within a Docker container. It includes a variety of ways to interact with object storage systems like AWS S3 and Minio in different hosting environments, along with support for local file system access both on the host machine and from within a Docker container.

This versatility, combined with a lightweight set of dependencies, allows virt-s3 to be easily installed and used in many types of environments.
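
For local development against an S3-like backend, one common setup (independent of virt-s3 itself) is to run a Minio server in Docker and point S3_URL at it:

$ docker run -p 9000:9000 -p 9001:9001 quay.io/minio/minio server /data --console-address ":9001"

With the container running, setting S3_URL=http://localhost:9000 (see the environment variables below) targets this local instance.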

Getting Started

  1. Create a fresh virtual environment with python >= 3.11

  2. Install the necessary dependencies

Basic Install (No Extras)

$ pip install virt-s3

Install with Single Extra

$ pip install "virt-s3[s3]"

Install with Multiple Extras

$ pip install "virt-s3[s3,dataframe,image]"

The following extras are available:

  • s3 = installs dependencies required to interact with object stores like Minio/S3 (primarily relying on boto3)
  • dataframe = installs dependencies required for numpy, pandas, and pyarrow dataframe/parquet operations
  • image = installs dependencies required for image operations (e.g. getting a file as an image)

For example, if you want to use virt_s3 but can't install pandas or pyarrow in a restricted environment, simply install virt_s3 without the dataframe extra. You won't be able to use virt_s3.extras.CSVFileValidator, virt_s3.extras.ParquetFileValidator, read_parquet_file_df, or write_parquet_file_df, but these are not core functions of the library (hence extras); a sketch of guarding against a missing extra follows below.
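
A minimal sketch of that graceful degradation. Whether the failure surfaces as an ImportError at import time or at call time depends on how the package guards its optional imports, and the top-level import path here is an assumption:

try:
    from virt_s3 import read_parquet_file_df
except ImportError:
    # dataframe extra (pandas/pyarrow) not installed
    read_parquet_file_df = None

if read_parquet_file_df is None:
    print("Install virt-s3[dataframe] to enable the parquet helpers")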

  3. Make sure the following environment variables are set

## Local File System Environment Variables
LOCAL_FS_USER=<your username>
LOCAL_FS=0   # use the local fs mirror or s3/minio: 1 = True, 0 = False
LOCAL_FS_ROOT_DIR=</path/to/your/data/dir/>

## S3 Environment Variables
S3_URL=<your s3/minio url>  # e.g. http://mock-s3:9000 or http://localhost:9000
S3_DEFAULT_BUCKET=<your bucket name>  # e.g. test-bucket
AWS_SECRET_ACCESS_KEY=<your aws secret access key>
AWS_ACCESS_KEY_ID=<your aws access key id>
AWS_REGION=<your aws region>  # e.g. us-east-1
  • Note: S3_URL can be replaced with a localhost url (e.g. http://localhost:9000) if not being run within a docker container
  4. Run the above example usage (a minimal end-to-end sketch follows below)
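
A minimal end-to-end sketch tying the steps together, assuming the environment variables above are set (e.g. LOCAL_FS=1 for the local file system mirror); every call mirrors the examples earlier in this README:

import io
import virt_s3
import pandas as pd

params = virt_s3.get_default_params()

with virt_s3.SessionManager(params=params) as session:
    # create the bucket, upload an in-memory csv, then read it back
    virt_s3.create_bucket('test-bucket', params=params, client=session)

    key = "fixture/data/data.csv"
    virt_s3.upload_data(io.BytesIO(b"a,b\n1,2\n"), key, params=params, client=session)

    data = virt_s3.get_file(key, bytes_io=True, params=params, client=session)
    print(pd.read_csv(data))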

Code Documentation

The primary API functions:

  • get_default_params(): Get default parameters to use for all functions (default behavior is based on ENV variables)
  • get_session_client(): Get a session client based on the passed-in S3Params or LocalFSParams
  • create_bucket(): Create a bucket to read from and write to
  • get_file_chunked(): Get a file using a chunking loop; useful when retrieving very large files
  • get_file(): Retrieve the specified file as an in-memory data object, or store it directly to a file
  • get_image(): Get an image from a bucket
  • get_files_generator(): Generator to quickly loop through reading a list of keys or file paths
  • get_files_batch(): Get a list of file paths or key paths in batch
  • list_dirs(): List valid 'folders' within a bucket
  • get_valid_file_paths(): Get a list of valid file paths or keys within a particular directory of a bucket
  • file_exists(): Check whether a key or file path exists in a bucket
  • upload_data(): Upload in-memory data (e.g. bytes, BytesIO), a file path, or a folder path to a bucket
  • delete_file(): Delete a file from a bucket
  • delete_files_by_dir(): Delete all files, subdirectories, etc. in a given folder within a bucket
  • archive_zip_as_buffer(): Create a zip archive from a dictionary of expected archive file paths and data bytes
  • archive_tar_as_buffer(): Create a tar or tar.gz archive from a dictionary of expected archive file paths and data bytes
  • extract_archive_file(): Extract zip, tar, or tar.gz file contents into a bucket
  • read_parquet_file_df(): Convenience function to read a parquet file as a pandas DataFrame
  • write_parquet_file_df(): Convenience function to write a pandas DataFrame to a parquet file
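
A short sketch combining a few of the listed functions. Treat the exact keyword arguments of get_valid_file_paths and get_files_generator as assumptions modeled on the examples above, not confirmed signatures:

import virt_s3

params = virt_s3.get_default_params()

with virt_s3.SessionManager(params=params) as session:
    # list keys under a directory prefix, then stream their contents
    keys = virt_s3.get_valid_file_paths("fixture/data/", params=params, client=session)
    for data in virt_s3.get_files_generator(keys, params=params, client=session):
        print(type(data))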

