
Project description


Dataset Ops

Friendly dataset operations for your data science needs

TL;DR

import datasetops as do

path = '../data/nested_class_folder'

# Prepare your data
train, val, test =                                       \
    do.load_folder_class_data(path)                      \
      .set_item_names('data', 'label')                   \
      .as_img('data').resize((240,240)).as_numpy('data') \
      .one_hot('label')                                  \
      .shuffle(seed=42)                                  \
      .split([0.6, 0.2, 0.2])

# Do your magic using TensorFlow
train_tf = train.to_tf()

# Rule the world with PyTorch
train_pt = train.to_pytorch()  # coming up!

# Do your own thing
for img, label in train:
    ...

Motivation

Collecting and preprocessing datasets is a tiresome and often underestimated part of the data science and machine learning lifecycle. While TensorFlow and PyTorch do have some useful dataset utilities available, they are designed specifically with the respective frameworks in mind. Unsurprisingly, this makes it hard to switch between frameworks and to port training-ready dataset definitions.

Moreover, they do not aid you in standard scenarios where you want to:

  • subsample your dataset, e.g. with a fixed number of samples per class
  • rescale, center, standardize, or normalize your data
  • combine multiple datasets, e.g. for parallel input in a multi-stream network
  • create non-standard data splits

All of this is usually done by hand. Again and again and again...

Idea

In a nutshell, datasets for data science and machine learning are just collections of samples, often accompanied by a label. We should be able to read all these formats into a common representation in which the most common operations can be performed. Subsequently, we should be able to transform this representation into the standard formats used in TensorFlow and PyTorch.

Implementation Status

The library is still under heavy development and the API may be subject to change.

What follows is a list of implemented and planned features.

Loaders

  • load (load data from a path, automatically inferring type and structure)
  • load_folder_data (load flat folder with data)
  • load_folder_class_data (load nested folder with a folder for each class)
  • load_folder_dataset_class_data (load nested folder with multiple datasets, each with a nested class folder structure)
  • load_mat (load contents of a .mat file as a single dataset)
  • load_mat_single_mult_data (load contents of a .mat file as multiple datasets)
  • FunctionDataset (lets users define a dataset)
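
As an illustration, here is a minimal sketch of two of these loaders. The load_folder_class_data call follows the TL;DR above, while the do.load call and the placeholder paths are assumptions based on the descriptions in this list.

import datasetops as do

# Load a nested class folder, as in the TL;DR above
ds = do.load_folder_class_data('../data/nested_class_folder')

# Let the library infer type and structure from the path (assumed usage)
inferred = do.load('../data/some_dataset')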

Dataset information

  • shape (get shape of a dataset item)
  • counts (compute the counts of each unique item in the dataset by key)
  • unique (get a list of unique items in the dataset by key)
  • item_names (get a list of names for the elements in an item)
  • set_item_names (supply names for the item elements)
  • stats (provide an overview of the dataset statistics)
  • origin (provide a description of how the dataset was made)
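
A hedged sketch of how these inspection helpers might be used together: set_item_names follows the TL;DR above, while the exact signatures of shape, unique, and counts (and whether shape is a property or a method) are assumptions.

import datasetops as do

ds = do.load_folder_class_data('../data/nested_class_folder') \
       .set_item_names('data', 'label')

print(ds.shape)            # shape of a dataset item (assumed property)
print(ds.unique('label'))  # unique labels in the dataset
print(ds.counts('label'))  # number of samples per unique label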

Sampling and splitting

  • shuffle (shuffle the items in a dataset randomly)
  • sample (sample data at random from a dataset)
  • split (split a dataset randomly based on fractions)
  • filter (filter the dataset using a predicate)
  • filter_split (split a dataset into two based on a predicate)
  • allow_unique (handy predicate used for balanced classwise filtering/sampling)
  • take (take the first items of a dataset)
  • repeat (repeat the items in a dataset, either itemwise or as a whole)
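
The sketch below combines a few of these. The shuffle and split calls follow the TL;DR above (with fractions summing to one), while the filter predicate and the take argument are assumptions.

import datasetops as do

ds = do.load_folder_class_data('../data/nested_class_folder') \
       .set_item_names('data', 'label')

# Deterministic shuffle followed by a 60/20/20 split
train, val, test = ds.shuffle(seed=42).split([0.6, 0.2, 0.2])

# Keep only items whose label passes a predicate (signature assumed)
cats_only = ds.filter(lambda item: item[1] == 'cat')

# A small fixed-size subset for quick experiments
smoke_test = ds.take(100)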

Item manipulation

  • reorder (reorder the elements of the dataset items (e.g. flip label and data order))
  • transform (takes user-supplied functions and applies them to the dataset items)
  • custom (function wrapper enabling a user-defined function to be used as a transform)
  • label (transforms an element into an integer-encoded categorical label)
  • one_hot (transforms an element into a one-hot encoded categorical label)
  • as_numpy (transforms an element into a numpy.ndarray)
  • reshape (reshapes numpy.ndarray elements)
  • as_image (transforms a numpy array or path string into a PIL.Image.Image)
  • img_resize (resizes PIL.Image.Image elements)
  • center (modify each item according to dataset statistics)
  • normalize (modify each item according to dataset statistics)
  • standardize (modify each item according to dataset statistics)
  • whiten (modify each item according to dataset statistics)
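
A sketch chaining a few transforms: one_hot and as_numpy follow the TL;DR above, while the reorder and reshape argument conventions are assumptions.

import datasetops as do

ds = do.load_folder_class_data('../data/nested_class_folder') \
       .set_item_names('data', 'label')

ds = ds.reorder('label', 'data')        # assumed usage: reorder elements by item name
ds = ds.one_hot('label')                # label -> one-hot encoded vector
ds = ds.as_numpy('data')                # data -> numpy.ndarray
ds = ds.reshape('data', (240, 240, 3))  # assumed signature: key, then target shape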

Dataset combinations

  • concat (concatenate two datasets, placing the items of one after the other)
  • zip (zip datasets itemwise, extending the size of each item)
  • cartesian_product (create a dataset whose items are all combinations of items (zipped) of the originating datasets)
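
Assuming these combinators are methods on a Dataset (the call style and the placeholder paths below are assumptions), usage might look like this:

import datasetops as do

ds_a = do.load_folder_class_data('../data/dataset_a')
ds_b = do.load_folder_class_data('../data/dataset_b')

merged = ds_a.concat(ds_b)             # items of ds_b placed after those of ds_a
paired = ds_a.zip(ds_b)                # itemwise; each item is extended
combos = ds_a.cartesian_product(ds_b)  # all combinations of items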

Converters

  • to_tf (convert Dataset into tensorflow.data.Dataset)
  • to_pytorch (convert Dataset into torchvision.Dataset)
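
The hand-off to a framework then follows the TL;DR above. Once converted, the result is a regular tensorflow.data.Dataset, so its native API (e.g. batch) applies; the batch size below is arbitrary.

import datasetops as do

train, val, test = do.load_folder_class_data('../data/nested_class_folder') \
                     .shuffle(seed=42).split([0.6, 0.2, 0.2])

train_tf = train.to_tf()               # a tensorflow.data.Dataset
for img, label in train_tf.batch(32):
    ...                                # ordinary TensorFlow training loop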

Project details


Download files

Download the file for your platform.

Source Distribution

datasetops-0.0.3.tar.gz (17.4 kB)

Uploaded Source

Built Distribution

datasetops-0.0.3-py3-none-any.whl (18.3 kB)

Uploaded Python 3

File details

Details for the file datasetops-0.0.3.tar.gz.

File metadata

  • Download URL: datasetops-0.0.3.tar.gz
  • Upload date:
  • Size: 17.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/45.2.0.post20200210 requests-toolbelt/0.9.1 tqdm/4.43.0 CPython/3.7.6

File hashes

Hashes for datasetops-0.0.3.tar.gz

  • SHA256: d7cc4f9ee34fe34737544de4b580135d4b75084024925de2e65e807ddf19f30e
  • MD5: 83b5855f5872f418704692279d625af0
  • BLAKE2b-256: 7bf58ce9463404e02cc67bfaaeb7f48f1242b57ff630bfdef88a34720a2b5c2e

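As a generic aside, a downloaded file can be verified against the SHA256 digest above using Python's standard hashlib module:

import hashlib

expected = 'd7cc4f9ee34fe34737544de4b580135d4b75084024925de2e65e807ddf19f30e'
with open('datasetops-0.0.3.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == expected, 'downloaded file does not match the published hash'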

File details

Details for the file datasetops-0.0.3-py3-none-any.whl.

File metadata

  • Download URL: datasetops-0.0.3-py3-none-any.whl
  • Upload date:
  • Size: 18.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/45.2.0.post20200210 requests-toolbelt/0.9.1 tqdm/4.43.0 CPython/3.7.6

File hashes

Hashes for datasetops-0.0.3-py3-none-any.whl

  • SHA256: 8b5acdb9cdab7953e6a3c6f6d897153e7905b5dc2544473e756d583a6a8cb44f
  • MD5: fd0da81bfc485a858b67456f5739a80f
  • BLAKE2b-256: 3012d06c5f17015314f1e3c4c74cf552f8bf5e17331e1f16f45d615172aae3bf

