
Dataset Grouper - A library for datasets with group-level structure.

Project description

Dataset Grouper - Scalable Dataset Pipelines for Group-Structured Learning


Dataset Grouper is a library for creating, writing, and iterating over datasets with group-level structure. It is primarily intended for creating large-scale datasets for federated learning research.

Installation

We recommend installing via PyPI. Please check the PyPI page for up-to-date version requirements, including supported Python versions.

pip install --upgrade pip
pip install dataset-grouper

Getting Started

Below is a simple starting example that partitions MNIST across 10 clients by label.

First, we import the necessary packages, including Apache Beam, which will be used to run Dataset Grouper's pipelines.

import apache_beam as beam
import dataset_grouper as dsgp
import tensorflow_datasets as tfds

Next, we download and prepare the MNIST dataset.

dataset_builder = tfds.builder('mnist')
dataset_builder.download_and_prepare(...)

We now write a function that assigns each MNIST example a client identifier (generally a bytes object). In this case, we will partition examples according to their label, but you can use much more interesting partition functions as well.

def label_partition(x):
  label = x['label'].numpy()
  return str(label).encode('utf-8')
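
For instance, a hypothetical alternative is to hash each example into a fixed number of synthetic groups. The sketch below is illustrative only; the group count and hashing scheme are assumptions, not part of the library.

import hashlib

def hash_partition(x, num_groups=100):
  # Hash the raw image bytes into one of num_groups synthetic groups.
  image_bytes = x['image'].numpy().tobytes()
  group_id = int(hashlib.sha256(image_bytes).hexdigest(), 16) % num_groups
  return str(group_id).encode('utf-8')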

Finally, we build a pipeline that will partition MNIST according to this function, and run it using Beam's Direct Runner.

mnist_pipeline = dsgp.tfds_to_tfrecords(
    dataset_builder=dataset_builder,
    split='test',
    get_key_fn=label_partition,
    file_path_prefix=...
)
with beam.Pipeline() as root:
  mnist_pipeline(root)

This will save a label-partitioned version of MNIST in TFRecord format. We can also load it to iterate over client datasets.

partitioned_dataset = dsgp.PartitionedDataset(
  file_pattern=...,
  tfds_features='mnist')

for group_dataset in partitioned_dataset.build_group_stream():
  pass  # Process this group's examples here.

Generally, PartitionedDataset.build_group_stream() is a tf.data.Dataset that yields datasets, each of which contains all the examples held by one group. If you'd like to use these datasets with NumPy, you can simply do:

group_dataset_numpy = group_dataset.as_numpy_iterator()
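
For example, one quick way to sanity-check the partition is to count the examples in each group. The snippet below is a minimal sketch; the variable names are illustrative.

examples_per_group = []
for group_dataset in partitioned_dataset.build_group_stream():
  # Each group_dataset is a tf.data.Dataset holding one group's examples.
  examples_per_group.append(sum(1 for _ in group_dataset.as_numpy_iterator()))
print(examples_per_group)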

What Else?

The example above is primarily for educational purposes. MNIST is a relatively small dataset and can generally fit entirely into memory. For more interesting examples, check out the examples folder.

Dataset Grouper is intended more for large-scale datasets, especially those that do not fit into memory. For these datasets, we recommend using more sophisticated Beam runners to partition the data in a distributed fashion.
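
For example, a Beam pipeline can be pointed at a distributed runner via pipeline options. The sketch below assumes Google Cloud Dataflow; the project, region, and bucket names are placeholders, and any runner supported by Beam can be substituted.

from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder Dataflow options; adjust these for your own project and bucket.
options = PipelineOptions(
    runner='DataflowRunner',
    project='my-gcp-project',
    region='us-central1',
    temp_location='gs://my-bucket/tmp',
)
with beam.Pipeline(options=options) as root:
  mnist_pipeline(root)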

Disclaimers

This is not an officially supported Google product.

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're interested in learning more about responsible AI practices, please see Google AI's Responsible AI Practices.

Dataset Grouper is Apache 2.0 licensed. See the LICENSE file.

Download files


Source Distributions

No source distribution files available for this release.

Built Distribution

dataset_grouper-0.3.0-py3-none-any.whl (22.5 kB)

Uploaded Python 3

File details

Details for the file dataset_grouper-0.3.0-py3-none-any.whl.

File metadata

File hashes

Hashes for dataset_grouper-0.3.0-py3-none-any.whl
Algorithm    Hash digest
SHA256       eb6fede98872cef9ed08364039f18ee5f2afd9cca1e68615cf75b2c723394f10
MD5          f4fe5963bfa376859d1cdbad606f8ac0
BLAKE2b-256  55fddd4d64a3f81d903a2d8cf6c21603ba30cb4d7f77d2ea32669fc28be086e6


