Synthetic Data Generation with optional Differential Privacy
Gretel Synthetics
An open source synthetic data library from Gretel.ai
Documentation
Try it out now!
If you want to quickly discover gretel-synthetics, open the example notebooks in Google Colab and follow the tutorials!
Check out additional examples here.
Getting Started
By default, we do not install TensorFlow via pip, as many developers and cloud services such as Google Colab run customized versions for their hardware.

$ pip install -U .          # from a local clone of the source repository

or

$ pip install gretel-synthetics
then...
$ pip install jupyter
$ jupyter notebook
When the UI launches in your browser, navigate to examples/synthetic_records.ipynb and get generating!
If you want to install gretel-synthetics locally and use a GPU (recommended):

- Create a virtual environment (e.g. using conda):

  $ conda create --name tf python=3.8

- Activate the virtual environment:

  $ conda activate tf

- Run the setup script:

  $ ./setup-utils/setup-gretel-synthetics-tensorflow24-with-gpu.sh
The last step will install all the necessary software packages for GPU usage, tensorflow=2.4, and gretel-synthetics. Note that this script works only for Ubuntu 18.04; you might need to modify it for other OS versions.
Overview
This package allows developers to quickly get immersed in synthetic data generation through the use of neural networks. The more complex pieces of working with libraries like TensorFlow and differential privacy are bundled into friendly Python classes and functions. There are two high-level modes that can be utilized.
Simple Mode
The simple mode will train line-per-line on an input file of text. When generating data, the generator will yield a custom object that can be used in a variety of ways based on your use case. This notebook demonstrates this mode, and a minimal sketch is shown below.
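As a quick illustration, here is a minimal sketch of simple mode. It assumes the 0.15.x-era API (`TensorFlowConfig`, `train_rnn`, `generate_text`); the file paths and training parameters are placeholders, and exact names may differ between versions.

```python
from gretel_synthetics.config import TensorFlowConfig
from gretel_synthetics.train import train_rnn
from gretel_synthetics.generate import generate_text

# Train line-per-line on a text file; one training example per line.
config = TensorFlowConfig(
    input_data_path="my_training_data.txt",  # placeholder path
    checkpoint_dir="./checkpoints",
    epochs=30,
)
train_rnn(config)

# generate_text yields custom objects carrying the generated line
# plus validity metadata, rather than bare strings.
for line in generate_text(config, num_lines=100):
    print(line.text)
```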
DataFrame Mode
This library supports CSV / DataFrames natively using the DataFrame "batch" mode. This module provides a wrapper around our simple mode that is geared for working with tabular data. Additionally, it is capable of handling a high number of columns by breaking the input DataFrame up into "batches" of columns and training a model on each batch. This notebook shows an overview of using this library with DataFrames natively; a minimal sketch follows.
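A minimal sketch of batch mode might look like the following. `DataFrameBatch` and its methods follow the documented batch API, while the input path, epoch count, and batch size shown here are illustrative assumptions.

```python
import pandas as pd
from gretel_synthetics.batch import DataFrameBatch

source_df = pd.read_csv("my_table.csv")  # placeholder input

# Config values here are passed through to each per-batch model.
config_template = {
    "epochs": 30,
    "checkpoint_dir": "./batch-checkpoints",
}

# Columns are split into batches (up to 15 columns each by default);
# one model is trained per batch.
batcher = DataFrameBatch(df=source_df, config=config_template, batch_size=15)
batcher.create_training_data()
batcher.train_all_batches()

# Generate lines for every batch, then reassemble a synthetic DataFrame.
batcher.generate_all_batch_lines()
synthetic_df = batcher.batches_to_df()
print(synthetic_df.head())
```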
Components
There are four primary components to be aware of when using this library.
- Configurations. Configurations are classes that are specific to an underlying ML engine used to train and generate data. An example would be using `TensorFlowConfig` to create all the necessary parameters to train a model based on TF. `LocalConfig` is aliased to `TensorFlowConfig` for backwards compatibility with older versions of the library. A model is saved to a designated directory, which can optionally be archived and utilized later.

- Tokenizers. Tokenizers convert input text into integer-based IDs that are used by the underlying ML engine. These tokenizers can be created and sent to the training input. This is optional; if no specific tokenizer is specified, a default one will be used. You can find an example here that uses a simple char-by-char tokenizer to build a model from an input CSV. When training in a non-differentially private mode, we suggest using the default `SentencePiece` tokenizer, an unsupervised tokenizer that learns subword units (e.g., byte-pair encoding (BPE) [Sennrich et al.] and unigram language models [Kudo]) for faster training and increased accuracy of the synthetic model.

- Training. Training a model combines the configuration and tokenizer and builds a model, stored in the designated directory, that can be used to generate new records.

- Generation. Once a model is trained, any number of new lines or records can be generated. Optionally, a record validator can be provided to ensure that the generated data meets any constraints that are necessary. See our notebooks for examples of validators, and see the sketch after this list for how the four components fit together.
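To make the four components concrete, here is a hedged end-to-end sketch that wires an explicit char-by-char tokenizer and a record validator into training and generation. The `CharTokenizerTrainer` / `train` usage reflects the 0.15.x tokenizer API as we understand it; the input path, four-column schema, and validator are made-up examples.

```python
from gretel_synthetics.config import TensorFlowConfig
from gretel_synthetics.tokenizers import CharTokenizerTrainer
from gretel_synthetics.train import train
from gretel_synthetics.generate import generate_text

# 1. Configuration: engine-specific training/generation parameters.
config = TensorFlowConfig(
    input_data_path="records.csv",  # placeholder path
    checkpoint_dir="./checkpoints",
    field_delimiter=",",
)

# 2. Tokenizer: explicitly choose char-by-char instead of the default.
tokenizer = CharTokenizerTrainer(config=config)

# 3. Training: combine the configuration and tokenizer to build a model.
train(config, tokenizer)

# 4. Generation: an optional validator rejects malformed records
#    by raising an exception (hypothetical 4-column schema).
def validate_record(line: str):
    if len(line.split(",")) != 4:
        raise ValueError("record does not have 4 fields")

for record in generate_text(config, line_validator=validate_record, num_lines=50):
    if record.valid:
        print(record.text)
```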
Differential Privacy
Differential privacy support for our TensorFlow mode is built on the great work being done by the Google TF team and their TensorFlow Privacy library.
When utilizing DP, we currently recommend using the character tokenizer, as it will only create a vocabulary of single tokens, removing the risk of sensitive data being memorized as actual tokens that can be replayed during generation.
There are also a few notable configuration options (see the sketch after this list):

- `predict_batch_size` should be set to 1
- `dp` should be enabled
- `learning_rate`, `dp_noise_multiplier`, `dp_l2_norm_clip`, and `dp_microbatches` can be adjusted to achieve various epsilon values
- `reset_states` should be disabled
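Putting those options together, a DP-enabled configuration might look like the sketch below. The numeric values are illustrative starting points, not tuned recommendations, and the input path is a placeholder.

```python
from gretel_synthetics.config import TensorFlowConfig

dp_config = TensorFlowConfig(
    input_data_path="sensitive.csv",  # placeholder path
    checkpoint_dir="./dp-checkpoints",
    dp=True,                  # enable differentially private training
    learning_rate=0.001,      # tune together with the dp_* values below
    dp_noise_multiplier=1.1,  # more noise -> stronger privacy (lower epsilon)
    dp_l2_norm_clip=1.5,      # per-sample gradient clipping bound
    dp_microbatches=1,
    predict_batch_size=1,     # should be set to 1 for DP generation
    reset_states=False,       # keep state resets disabled in DP mode
)
```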
Please see our example Notebook for training a DP model based on the Netflix Prize dataset.