

Dataset Ops: Fluent dataset operations, compatible with your favorite libraries


Dataset Ops provides a fluent interface for loading, filtering, transforming, splitting, and combining datasets. Designed specifically with data science and machine learning applications in mind, it integrates seamlessly with TensorFlow and PyTorch.

Appetizer

import datasetops as do

# prepare your data
train, val, test = (
    do.from_folder_class_data('path/to/data/folder')
    .named("data", "label")
    .image_resize((240, 240))
    .one_hot("label")
    .shuffle(seed=42)
    .split([0.6, 0.2, 0.2])
)

# use with your favorite framework
train_tf = train.to_tensorflow() 
train_pt = train.to_pytorch() 

# or do your own thing
for img, label in train:
    ...

Installation

Binary installers are available at the Python Package Index:

pip install datasetops

Why?

Collecting and preprocessing datasets is tiresome and often takes upwards of 50% of the effort spent in the data science and machine learning lifecycle. While TensorFlow and PyTorch both ship useful dataset utilities, these are designed specifically with the respective frameworks in mind. Unsurprisingly, this makes it hard to switch between them, and training-ready dataset definitions are bound to one framework or the other. Moreover, they do not aid you in standard scenarios where you want to:

  • Sample your dataset in non-random ways (e.g. with a fixed number of samples per class)
  • Center, standardize, or normalize your data
  • Combine multiple datasets, e.g. for parallel input to a multi-stream network
  • Create non-standard data splits
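As a plain-Python sketch of the first point (this illustrates the idea only, and is not the library's API; the data and function name are hypothetical), fixed-per-class sampling might look like:

```python
import random
from collections import defaultdict

def sample_per_class(items, n_per_class, seed=42):
    """Sample a fixed number of (data, label) pairs from each class."""
    by_class = defaultdict(list)
    for data, label in items:
        by_class[label].append((data, label))
    rng = random.Random(seed)
    sampled = []
    for label, group in sorted(by_class.items()):
        sampled.extend(rng.sample(group, n_per_class))
    return sampled

items = [(i, i % 3) for i in range(30)]  # 10 items in each of classes 0, 1, 2
balanced = sample_per_class(items, n_per_class=2)  # 2 per class, 6 items total
```

Dataset Ops wraps this kind of bookkeeping behind its fluent interface (see `filter` and `allow_unique` below).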

Dataset Ops aims to make these processing steps easier, faster, and more intuitive to perform, while retaining full interoperability with the leading libraries. This also means you can grab a dataset from torchvision and use it directly with TensorFlow:

import datasetops as do
import torchvision

torch_usps = torchvision.datasets.USPS('../dataset/path', download=True)
tensorflow_usps = do.from_pytorch(torch_usps).to_tensorflow()

Development Status

The library is still under heavy development and the API may be subject to change.

What follows here is a list of implemented and planned features.

Loaders

  • Loader (utility class used to define a dataset)
  • from_pytorch (load from a torch.utils.data.Dataset)
  • from_tensorflow (load from a tf.data.Dataset)
  • from_folder_data (load flat folder with data)
  • from_folder_class_data (load nested folder with a folder for each class)
  • from_folder_dataset_class_data (load nested folder with multiple datasets, each with a nested class folder structure)
  • from_mat (load contents of a .mat file as a single dataset)
  • from_mat_single_mult_data (load contents of a .mat file as multiple datasets)
  • load (load data from a path, automatically inferring type and structure)
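The class-folder layout that `from_folder_class_data` reads (one sub-folder per class) can be approximated with the standard library alone; the helper below is a hypothetical sketch, not the loader's actual implementation:

```python
import tempfile
from pathlib import Path

def folder_class_items(root):
    """Yield (file_path, class_name) pairs from a root/<class>/<file> layout."""
    root = Path(root)
    for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for f in sorted(class_dir.iterdir()):
            yield f, class_dir.name

# Build a tiny throw-away example layout: root/cat/a.png, root/dog/b.png
root = Path(tempfile.mkdtemp())
for cls, name in [("cat", "a.png"), ("dog", "b.png")]:
    (root / cls).mkdir()
    (root / cls / name).touch()

pairs = list(folder_class_items(root))  # [(…/cat/a.png, 'cat'), (…/dog/b.png, 'dog')]
```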

Converters

  • to_tensorflow (convert Dataset into tf.data.Dataset)
  • to_pytorch (convert Dataset into torchvision.Dataset)

Dataset information

  • shape (get shape of a dataset item)
  • counts (compute the counts of each unique item in the dataset by key)
  • unique (get a list of unique items in the dataset by key)
  • named (supply names for the item elements)
  • names (get a list of names for the elements in an item)
  • stats (provide an overview of the dataset statistics)
  • origin (provide a description of how the dataset was made)
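To give a feel for what `counts` and `unique` report, here is a plain-Python sketch of the two operations over a toy (data, label) dataset (illustrative only, not the library's signatures):

```python
from collections import Counter

def counts(items, key=0):
    """Count occurrences of each unique element at the given key position."""
    return Counter(item[key] for item in items)

def unique(items, key=0):
    """Return the unique elements at the given key position, preserving order."""
    return list(dict.fromkeys(item[key] for item in items))

dataset = [("a.png", "cat"), ("b.png", "dog"), ("c.png", "cat")]
counts(dataset, key=1)   # Counter({'cat': 2, 'dog': 1})
unique(dataset, key=1)   # ['cat', 'dog']
```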

Sampling and splitting

  • shuffle (shuffle the items in a dataset randomly)
  • sample (sample data at random from a dataset)
  • filter (filter the dataset using a predicate)
  • split (split a dataset randomly based on fractions)
  • split_filter (split a dataset into two based on a predicate)
  • allow_unique (handy predicate used for balanced classwise filtering/sampling)
  • take (take the first items in a dataset)
  • repeat (repeat the items in a dataset, either itemwise or as a whole)
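What a fraction-based `split([0.6, 0.2, 0.2])` does conceptually can be sketched in plain Python (an assumption about the semantics, not the library's code):

```python
import random

def split(items, fractions, seed=42):
    """Shuffle a sequence and cut it into consecutive parts by fractions."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    items = list(items)
    random.Random(seed).shuffle(items)
    parts, start = [], 0
    for frac in fractions[:-1]:
        end = start + round(frac * len(items))
        parts.append(items[start:end])
        start = end
    parts.append(items[start:])  # remainder goes to the last split
    return parts

train, val, test = split(range(10), [0.6, 0.2, 0.2])  # sizes 6, 2, 2
```

Note how rounding is deferred to the last split so that every item lands in exactly one part.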

Item manipulation

  • reorder (reorder the elements of the dataset items (e.g. flip label and data order))
  • transform (apply one or more user-supplied functions to the dataset items)
  • categorical (transforms an element into a categorical integer encoded label)
  • one_hot (transforms an element into a one-hot encoded label)
  • numpy (transforms an element into a numpy.ndarray)
  • reshape (reshapes numpy.ndarray elements)
  • image (transforms a numpy array or path string into a PIL.Image.Image)
  • image_resize (resizes PIL.Image.Image elements)
  • image_crop (crops PIL.Image.Image elements)
  • image_rotate (rotates PIL.Image.Image elements)
  • image_transform (transforms PIL.Image.Image elements)
  • image_brightness (modify brightness of PIL.Image.Image elements)
  • image_contrast (modify contrast of PIL.Image.Image elements)
  • image_filter (apply an image filter to PIL.Image.Image elements)
  • noise (adds noise to the data)
  • center (modify each item according to dataset statistics)
  • normalize (modify each item according to dataset statistics)
  • standardize (modify each item according to dataset statistics)
  • whiten (modify each item according to dataset statistics)
  • randomly (apply data transformations with some probability)
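The label encodings above (`categorical` and `one_hot`) boil down to indexing into an ordered class vocabulary; a minimal sketch, with hypothetical signatures:

```python
def categorical(label, classes):
    """Integer-encode a label by its index in the ordered class vocabulary."""
    return classes.index(label)

def one_hot(label, classes):
    """One-hot-encode a label over the ordered class vocabulary."""
    vec = [0] * len(classes)
    vec[categorical(label, classes)] = 1
    return vec

classes = ["cat", "dog", "bird"]
categorical("dog", classes)  # 1
one_hot("dog", classes)      # [0, 1, 0]
```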

Dataset combinations

  • concat (concatenate two datasets, placing the items of one after the other)
  • zip (zip datasets itemwise, extending the size of each item)
  • cartesian_product (create a dataset whose items are all combinations of items (zipped) of the originating datasets)
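The three combination modes differ only in how items are paired; over toy single-element items, their semantics can be sketched with the standard library (illustrative, not the library's implementation):

```python
from itertools import product

a = [(1,), (2,)]
b = [("x",), ("y",)]

concatenated = a + b                               # items of a, then items of b
zipped = [ia + ib for ia, ib in zip(a, b)]         # [(1, 'x'), (2, 'y')]
cartesian = [ia + ib for ia, ib in product(a, b)]  # all 4 pairings
```

Note that `concat` grows the number of items, while `zip` and `cartesian_product` grow the size of each item.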
