
Synthetic Data Generation for tabular, relational and time series data.

Project description

An Open Source Project from the Data to AI Lab at MIT


Overview

The Synthetic Data Vault (SDV) is a Synthetic Data Generation ecosystem of libraries that allows users to easily learn single-table, multi-table and time series datasets in order to later generate new synthetic data with the same format and statistical properties as the original dataset.

Synthetic data can then be used to supplement, augment and in some cases replace real data when training Machine Learning models. Additionally, it enables the testing of Machine Learning or other data-dependent software systems without the risk of exposure that comes with data disclosure.

Under the hood, it uses several probabilistic graphical modeling and deep learning based techniques. To enable a variety of data storage structures, we employ unique hierarchical generative modeling and recursive sampling techniques.


Try it out now!

If you want to quickly discover SDV, simply launch the tutorials on Binder and follow along!

Join our Slack Workspace

If you want to be part of the SDV community to receive announcements of the latest releases, ask questions, suggest new features or participate in the development meetings, please join our Slack Workspace!


Install

Using pip:

pip install sdv

Using conda:

conda install -c sdv-dev -c pytorch -c conda-forge sdv

For more installation options, please visit the SDV Installation Guide.

Quickstart

In this short tutorial we will guide you through a series of steps that will help you get started with SDV.

1. Model the dataset using SDV

To model a multi-table, relational dataset, we follow two steps. In the first step, we load the data and configure the metadata. In the second step, we use the sdv API to fit and save a hierarchical model. We will cover these two steps in this section using an example dataset.

Step 1: Load example data

SDV comes with a toy dataset to play with, which can be loaded using the sdv.load_demo function:

from sdv import load_demo

metadata, tables = load_demo(metadata=True)

This will return two objects:

  1. A Metadata object with all the information that SDV needs to know about the dataset.

For more details about how to build the Metadata for your own dataset, please refer to the Working with Metadata tutorial; a short sketch is also shown after the sample output below.

  2. A dictionary containing three pandas.DataFrames with the tables described in the metadata object.

The returned objects contain the following information:

{
    'users':
            user_id country gender  age
          0        0     USA      M   34
          1        1      UK      F   23
          2        2      ES   None   44
          3        3      UK      M   22
          4        4     USA      F   54
          5        5      DE      M   57
          6        6      BG      F   45
          7        7      ES   None   41
          8        8      FR      F   23
          9        9      UK   None   30,
  'sessions':
          session_id  user_id  device       os
          0           0        0  mobile  android
          1           1        1  tablet      ios
          2           2        1  tablet  android
          3           3        2  mobile  android
          4           4        4  mobile      ios
          5           5        5  mobile  android
          6           6        6  mobile      ios
          7           7        6  tablet      ios
          8           8        6  mobile      ios
          9           9        8  tablet      ios,
  'transactions':
          transaction_id  session_id           timestamp  amount  approved
          0               0           0 2019-01-01 12:34:32   100.0      True
          1               1           0 2019-01-01 12:42:21    55.3      True
          2               2           1 2019-01-07 17:23:11    79.5      True
          3               3           3 2019-01-10 11:08:57   112.1     False
          4               4           5 2019-01-10 21:54:08   110.0     False
          5               5           5 2019-01-11 11:21:20    76.3      True
          6               6           7 2019-01-22 14:44:10    89.5      True
          7               7           8 2019-01-23 10:14:09   132.1     False
          8               8           9 2019-01-27 16:09:17    68.0      True
          9               9           9 2019-01-29 12:10:48    99.9      True
}
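If you are building the Metadata for your own dataset rather than using the demo, it can be constructed programmatically. The following is a minimal sketch based on the 0.x Metadata API; refer to the Working with Metadata tutorial for the exact arguments:

from sdv import Metadata

metadata = Metadata()

# Describe each table, indicating its primary key
metadata.add_table(name='users', data=tables['users'], primary_key='user_id')

# Child tables declare their parent and the foreign key that links them
metadata.add_table(
    name='sessions',
    data=tables['sessions'],
    primary_key='session_id',
    parent='users',
    foreign_key='user_id'
)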

Step 2: Fit a model using the SDV API

First, we build a hierarchical statistical model of the data using SDV. For this, we will create an instance of the sdv.SDV class and use its fit method.

During this process, SDV will traverse across all the tables in your dataset following the primary key-foreign key relationships and learn the probability distributions of the values in the columns.

from sdv import SDV

sdv = SDV()
sdv.fit(metadata, tables)

Once the modeling has finished, you can save your fitted SDV instance for later use by calling its save method.

sdv.save('sdv.pkl')

The generated pkl file will not include any of the original data, so it can be safely sent to wherever the synthetic data will be generated without any privacy concerns.

2. Sample data from the fitted model

In order to sample data from the fitted model, we will first need to load it from its pkl file. Note that you can skip this step if you are running all the steps sequentially within the same Python session.

sdv = SDV.load('sdv.pkl')

After loading the instance, we can sample synthetic data by calling its sample method.

samples = sdv.sample()

The output will be a dictionary with the same structure as the original tables dict, but filled with synthetic data instead of the real data.
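For example, since each entry in the dictionary is a pandas.DataFrame, you can inspect the synthetic users table directly:

samples['users'].head()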

Finally, if you want to evaluate how similar the sampled tables are to the real data, please have a look at our evaluation framework or visit the SDMetrics library.
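As a minimal sketch, assuming the evaluate function accepts the synthetic tables, the real tables and the metadata, the evaluation could look like this:

from sdv.evaluation import evaluate

# Returns an aggregated score comparing the synthetic tables to the real ones
score = evaluate(samples, tables, metadata)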

Join our community

  1. If you would like to see more usage examples, please have a look at the tutorials folder of the repository. Please contact us if you have a usage example that you would like to share with the community.
  2. Please have a look at the Contributing Guide to see how you can contribute to the project.
  3. If you have any doubts, feature requests or detect an error, please open an issue on GitHub or join our Slack Workspace.
  4. Also, do not forget to check the project documentation site!

Citation

If you use SDV for your research, please consider citing the following paper:

Neha Patki, Roy Wedge, Kalyan Veeramachaneni. The Synthetic Data Vault. IEEE DSAA 2016.

@inproceedings{
    7796926,
    author={N. {Patki} and R. {Wedge} and K. {Veeramachaneni}},
    booktitle={2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA)},
    title={The Synthetic Data Vault},
    year={2016},
    volume={},
    number={},
    pages={399-410},
    keywords={data analysis;relational databases;synthetic data vault;SDV;generative model;relational database;multivariate modelling;predictive model;data analysis;data science;Data models;Databases;Computational modeling;Predictive models;Hidden Markov models;Numerical models;Synthetic data generation;crowd sourcing;data science;predictive modeling},
    doi={10.1109/DSAA.2016.49},
    ISSN={},
    month={Oct}
}

Release Notes

0.12.0 - 2021-08-17

This release focuses on improving and expanding upon the existing constraints. More specifically, users can now (1) specify multiple columns in the Positive and Negative constraints, (2) use the new Unique constraint and (3) use datetime data with the Between constraint. Additionally, error messages have been added and updated to provide more useful feedback to the user.

Besides the added features, several bugs regarding the UniqueCombinations and ColumnFormula constraints have been fixed, and an error in the metadata.json for the student_placements dataset was corrected. The release also added documentation for fit_columns_model, which affects the majority of the available constraints.
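As a rough sketch of the expanded constraints (the parameter names below are assumptions based on this release's notes; check the constraints user guide for your exact version):

from sdv.constraints import Unique, Between, Positive

# Assumed parameter names, for illustration only
unique_id = Unique(columns=['user_id'])

# Datetime bounds are supported as of this release
valid_date = Between(column='timestamp', low='2019-01-01', high='2019-12-31')

# Multiple columns can now be constrained at once
positive_amounts = Positive(columns=['amount', 'balance'])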

New Features

  • Change default fit_columns_model to False - Issue #550 by @katxiao
  • Support multi-column specification for positive and negative constraint - Issue #545 by @sarahmish
  • Raise error when multiple constraints can't be enforced - Issue #541 by @amontanez24
  • Create Unique Constraint - Issue #532 by @amontanez24
  • Passing invalid conditions when using constraints produces unreadable errors - Issue #511 by @katxiao
  • Improve error message for ColumnFormula constraint when constraint column used in formula - Issue #508 by @katxiao
  • Add datetime functionality to Between constraint - Issue #504 by @katxiao

Bugs Fixed

  • UniqueCombinations constraint with handling_strategy = 'transform' yields synthetic data with nan values - Issue #521 by @katxiao and @csala
  • UniqueCombinations constraint outputting wrong data type - Issue #510 by @katxiao and @csala
  • UniqueCombinations constraint on only one column gets stuck in an infinite loop - Issue #509 by @katxiao
  • Conditioning on a non-constraint column using the ColumnFormula constraint - Issue #507 by @katxiao
  • Conditioning on the constraint column of the ColumnFormula constraint - Issue #506 by @katxiao
  • Update metadata.json for duration of student_placements dataset - Issue #503 by @amontanez24
  • Unit test for HMA1 when working with a single child row per parent row - Issue #497 by @pvk-developer
  • UniqueCombinations constraint for more than 2 columns - Issue #494 by @katxiao and @csala

Documentation Changes

  • Add explanation of fit_columns_model to API docs - Issue #517 by @katxiao

0.11.0 - 2021-07-12

This release primarily addresses bugs and feature requests related to using constraints for the single-table models. Users can now enforce scalar comparison with the existing GreaterThan constraint and apply 5 new constraints: OneHotEncoding, Positive, Negative, Between and Rounding. Additionally, the SDV will now auto-apply constraints for rounding numerical values, and for keeping the data within the observed bounds. All related user guides are updated with the new functionality.
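For example, a hedged sketch of the new scalar comparison (the scalar argument name is an assumption based on this release's notes):

from sdv.constraints import GreaterThan

# 'scalar' indicates which side is a literal value rather than a column name
nonnegative_age = GreaterThan(low=0, high='age', scalar='low')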

New Features

  • Add OneHotEncoding Constraint - Issue #303 by @fealho
  • GreaterThan Constraint should apply to scalars - Issue #410 by @amontanez24
  • Improve GreaterThan constraint - Issue #368 by @amontanez24
  • Add Non-negative and Positive constraints across multiple columns - Issue #409 by @amontanez24
  • Add Between values constraint - Issue #367 by @fealho
  • Ensure values fall within the specified range - Issue #423 by @amontanez24
  • Add Rounding constraint - Issue #482 by @katxiao
  • Add rounding and min/max arguments that are passed down to the NumericalTransformer - Issue #491 by @amontanez24

Bugs Fixed

  • GreaterThan constraint between Date columns raises TypeError - Issue #421 by @amontanez24
  • GreaterThan constraint's transform strategy fails on columns that are not float - Issue #448 by @amontanez24
  • AttributeError on UniqueCombinations constraint with non-strings - Issue #196 by @katxiao
  • Use reject sampling to sample missing columns for constraints - Issue #435 by @amontanez24

Documentation Changes

  • Ensure privacy metrics are available in the API docs - Issue #458 by @fealho
  • Ensure formula constraint is called ColumnFormula everywhere in the docs - Issue #449 by @fealho

0.10.1 - 2021-06-10

This release changes the way we sample conditions to not only group by the conditions passed by the user, but also by the transformed conditions that result from them.

Issues resolved

  • Conditionally sampling on variable in constraint should have variety for other variables - Issue #440 by @amontanez24

0.10.0 - 2021-05-21

This release improves the constraint functionality by allowing constraints and conditions at the same time. Additional changes were made to update tutorials.

Issues resolved

  • Not able to use constraints and conditions at the same time - Issue #379 by @amontanez24
  • Update benchmarking user guide for reading private datasets - Issue #427 by @katxiao

0.9.1 - 2021-04-29

This release broadens the constraint functionality by allowing for the ColumnFormula constraint to take lambda functions and returned functions as an input for its formula.

It also improves conditional sampling by ensuring that any id fields generated by the model remain unique throughout the sampled data.

The CTGAN model was improved by adjusting a default parameter to be more mathematically correct.

Additional changes were made to improve tutorials as well as fix fragile tests.

Issues resolved

  • Tutorials test sometimes fails - Issue #355 by @fealho
  • Duplicate IDs when using reject-sampling - Issue #331 by @amontanez24 and @csala
  • discriminator_decay should be initialized at 1e-6 but it's 0 - Issue #401 by @fealho and @YoucefZemmouri
  • Tutorial typo - Issue #380 by @fealho
  • Request for sdv.constraint.ColumnFormula for a wider range of function - Issue #373 by @amontanez24 and @JetfiRex

0.9.0 - 2021-03-31

This release brings new privacy metrics to the evaluation framework, which help to determine whether the real data could be obtained or deduced from the synthetic samples. Additionally, there is now a normalized score for the metrics, which stays between 0 and 1.

There are improvements that reduce memory usage when sampling new data. There is also a new parameter to control reject sampling crashes, graceful_reject_sampling: if it is set to True and it is not possible to generate all the requested rows, SDV will issue a warning and return whatever it was able to generate.

The Metadata object can now be visualized using different combinations of names and details, which can be set to True or False in order to display the table names with or without details. There is also an improvement to validation, which now displays all the errors found at the end of the validation instead of only the first one.

This version also exposes all the hyperparameters of the CTGAN and TVAE models to allow more advanced usage. There is also a fix for the TVAE model on small datasets, its performance with NaN values has been improved, and there is a fix for using the UniqueCombinations constraint with the transform strategy.
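A minimal sketch of the new options, assuming graceful_reject_sampling is a keyword argument of sample and that names and details are keyword arguments of Metadata.visualize (model here is a hypothetical fitted model):

# Return whatever rows could be generated instead of crashing
synthetic = model.sample(1000, graceful_reject_sampling=True)

# Show only the table names, without the per-field details
metadata.visualize(names=True, details=False)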

Issues resolved

  • Memory Usage Gaussian Copula Trained Model consuming high memory when generating synthetic data - Issue #304 by @pvk-developer and @AnupamaGangadhar
  • Add option to visualize metadata with only table names - Issue #347 by @csala
  • Add sample parameter to control reject sampling crash - Issue #343 by @fealho
  • Verbose metadata validation - Issue #348 by @csala
  • Missing the introduction of custom specification for hyperparameters in the TVAE model - Issue #344 by @imkhoa99 and @pvk-developer

0.8.0 - 2021-02-24

This version adds conditional sampling for tabular models by combining a reject-sampling strategy with the native conditional sampling capabilities of Gaussian copulas.

It also introduces several upgrades to the HMA1 algorithm that improve data quality and robustness in multi-table scenarios by changing how the parameters of the child tables are aggregated on the parent tables, including a complete rework of how the correlation matrices are modeled and rebuilt after sampling.
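A minimal sketch of conditional sampling on a tabular model, assuming conditions is a dict mapping column names to fixed values (using the demo users table from the Quickstart):

from sdv.tabular import GaussianCopula

model = GaussianCopula()
model.fit(tables['users'])

# Generate 100 rows where the 'gender' column is fixed to 'M'
sampled = model.sample(100, conditions={'gender': 'M'})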

Issues resolved

  • Fix probabilities contain NaN error - Issue #326 by @csala
  • Conditional Sampling for tabular models - Issue #316 by @fealho and @csala
  • HMA1: LinAlgError: SVD did not converge - Issue #240 by @csala

0.7.0 - 2021-01-27

This release introduces a few changes in the HMA1 relational algorithm to decrease modeling and sampling times, while ensuring that correlations are properly kept across tables and adding support for some relational schemas that were not supported before.

A few changes in constraints and tabular models also ensure that situations that produced errors before now work without errors.

Issues resolved

  • Fix unique key generation - Issue #306 by @fealho
  • Ensure tables that contain nothing but ids can be modeled - Issue #302 by @csala
  • Metadata visualization improvements - Issue #301 by @csala
  • Multi-parent re-model and re-sample issue - Issue #298 by @csala
  • Support datetimes in GreaterThan constraint - Issue #266 by @rollervan
  • Support for multiple foreign keys in one table - Issue #185 by @csala

0.6.1 - 2020-12-31

The SDMetrics version is updated to include the new Time Series metrics, which have also been added to the API Reference and User Guides documentation. Additionally, some code has been refactored to reduce external dependencies, and a few minor bugs related to single-table constraints have been fixed.

Issues resolved

  • Add timeseries metrics and user guides - Issue #289 by @csala
  • Add functions to generate regex ids - Issue #288 by @csala
  • Saving a fitted tabular model with UniqueCombinations constraint raises PicklingError - Issue #286 by @csala
  • Constraints: handling_strategy='reject_sampling' causes 'ZeroDivisionError: division by zero' - Issue #285 by @csala

0.6.0 - 2020-12-22

This release updates to the latest CTGAN, RDT and SDMetrics libraries to introduce a new TVAE model, multiple new metrics for single table and multi table, and fixes issues in the re-creation of tabular models from a metadata dict.

Issues resolved

  • Upgrade to SDMetrics v0.1.0 and add sdv.metrics module - Issue #281 by @csala
  • Upgrade to CTGAN 0.3.0 and add TVAE model - Issue #278 by @fealho
  • Add dtype_transformers to Table.from_dict - Issue #276 by @csala
  • Fix Metadata from_dict behavior - Issue #275 by @csala

0.5.0 - 2020-11-25

This version updates the dependencies and makes a few internal changes in order to ensure that SDV works properly on Windows systems, making this the first release to be officially supported on Windows.

Apart from this, some more internal changes have been made to solve a few minor issues from older versions, while also improving the processing speed for relational datasets with the default parameters.

API breaking changes

  • The distribution argument of the GaussianCopula has been renamed to field_distributions (a short sketch follows this list).
  • The HMA1 and SDV classes now use the categorical_fuzzy transformer by default instead of the one_hot_encoding one.
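A sketch of the renamed argument (the 'gamma' distribution name is just an example):

from sdv.tabular import GaussianCopula

# Previously the 'distribution' argument; renamed in this release
model = GaussianCopula(field_distributions={'age': 'gamma'})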

Issues resolved

  • GaussianCopula: rename distribution argument to field_distributions - Issue #237 by @csala
  • GaussianCopula: Improve error message if an invalid distribution name is passed - Issue #220 by @csala
  • Import urllib.request explicitly - Issue #227 by @csala
  • TypeError: cannot astype a datetimelike from [datetime64[ns]] to [int32] - Issue #218 by @csala
  • Change default categorical transformer to categorical_fuzzy in HMA1 - Issue #214 by @csala
  • Integer categoricals being sampled as strings instead of integer values - Issue #194 by @csala

0.4.5 - 2020-10-17

In this version, a new family of models for synthetic time series generation is introduced under the sdv.timeseries subpackage, including a new class called PAR, which implements a Probabilistic AutoRegressive model.

This version also adds support for composite primary keys and regex based generation of id fields in tabular models and drops Python 3.5 support.
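A minimal sketch of the new PAR model, assuming a hypothetical DataFrame data with a 'store_id' entity column and a 'date' column, and assuming sample accepts a num_sequences argument:

from sdv.timeseries import PAR

# entity_columns identify each sequence; sequence_index orders rows within it
model = PAR(entity_columns=['store_id'], sequence_index='date')
model.fit(data)

new_sequences = model.sample(num_sequences=5)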

Issues resolved

  • Drop python 3.5 support - Issue #204 by @csala
  • Support composite primary keys in tabular models - Issue #207 by @csala
  • Add the option to generate string id fields based on regex on tabular models - Issue #208 by @csala
  • Synthetic Time Series - Issue #142 by @csala

0.4.4 - 2020-10-06

This version adds a new tabular model based on combining the CTGAN model with the reversible transformation applied in the GaussianCopula model, which converts random variables with arbitrary distributions into new random variables with a standard normal distribution.

The reversible transformation is handled by the GaussianCopulaTransformer recently added to RDT.

0.4.3 - 2020-09-28

This release moves the models and algorithms related to the generation of synthetic relational data to a new sdv.relational subpackage (Issue #198).

As part of the change, the old sdv.models subpackage has been removed, and the relational models are now based on the recently introduced sdv.tabular models.

0.4.2 - 2020-09-19

In this release, the sdv.evaluation module has been reworked to include 4 different metrics and, in all cases, to return a normalized score between 0 and 1.

Included metrics are:

  • cstest
  • kstest
  • logistic_detection
  • svc_detection

0.4.1 - 2020-09-07

This release fixes a couple of minor issues and introduces an important rework of the User Guides section of the documentation.

Issues fixed

  • Error Message: "make sure the Graphviz executables are on your systems' PATH" - Issue #182 by @csala
  • Anonymization mappings leak - Issue #187 by @csala

0.4.0 - 2020-08-08

In this release SDV gets new documentation, new tutorials, improvements to the Tabular API and broader python and dependency support.

Complete list of changes:

  • New Documentation site based on the pydata-sphinx-theme.
  • New User Guides and Notebook tutorials.
  • New Developer Guides section within the docs with details about the SDV architecture, the ecosystem libraries and how to extend and contribute to the project.
  • Improved API for the Tabular models with focus on ease of use.
  • Support for Python 3.8 and the newest versions of pandas, scipy and scikit-learn.
  • New Slack Workspace for development discussions and community support.

0.3.6 - 2020-07-23

This release introduces a new concept of Constraints, which allow the user to define special relationships between columns that will not be handled via modeling.

This is done via a new sdv.constraints subpackage which defines some well-known pre-defined constraints, as well as a generic framework that allows the user to customize the constraints to their needs as much as necessary.
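For instance, a hedged sketch of a formula-based constraint ('total' and 'amount' are hypothetical columns):

from sdv.constraints import ColumnFormula

# The 'total' column is always derived from 'amount' rather than modeled directly
def compute_total(data):
    return data['amount'] * 1.1

constraint = ColumnFormula(column='total', formula=compute_total)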

0.3.5 - 2020-07-09

This release introduces a new subpackage sdv.tabular with models designed specifically for single table modeling, while still providing all the usual conveniences from SDV, such as:

  • Seamless multi-type support
  • Missing data handling
  • PII anonymization

Currently implemented models are:

  • GaussianCopula: Multivariate distributions modeled using copula functions. This is a stronger version, with more marginal distributions and options, than the one used to model multi-table datasets.
  • CTGAN: GAN-based data synthesizer that can generate synthetic tabular data with high fidelity.
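A minimal usage sketch of the new tabular API, reusing the demo users table from the Quickstart above:

from sdv.tabular import GaussianCopula

model = GaussianCopula()
model.fit(tables['users'])

synthetic_users = model.sample(200)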

0.3.4 - 2020-07-04

New Features

  • Support for Multiple Parents - Issue #162 by @csala
  • Sample by default the same number of rows as in the original table - Issue #163 by @csala

0.3.3 - 2020-06-26

General Improvements

  • Use SDMetrics for evaluation - Issue #159 by @csala

0.3.2 - 2020-02-03

General Improvements

  • Improve metadata visualization - Issue #151 by @csala @JDTheRipperPC

0.3.1 - 2020-01-22

New Features

  • Add Metadata Validation - Issue #134 by @csala @JDTheRipperPC

  • Add Metadata Visualization - Issue #135 by @JDTheRipperPC

General Improvements

  • Add path to metadata JSON - Issue #143 by @JDTheRipperPC

  • Use new Copulas and RDT versions - Issue #147 by @csala @JDTheRipperPC

0.3.0 - 2019-12-23

New Features

  • Create sdv.models subpackage - Issue #141 by @JDTheRipperPC

0.2.2 - 2019-12-10

New Features

  • Adapt evaluation to the different data types - Issue #128 by @csala @JDTheRipperPC

  • Extend load_demo functionality to load other datasets - Issue #136 by @JDTheRipperPC

0.2.1 - 2019-11-25

New Features

  • Methods to generate Metadata from DataFrames - Issue #126 by @csala @JDTheRipperPC

0.2.0 - 2019-10-11

New Features

  • compatibility with rdt issue 72 - Issue #120 by @csala @JDTheRipperPC

General Improvements

  • Error docstring sampler.__fill_text_columns - Issue #144 by @JDTheRipperPC
  • Reach 90% coverage - Issue #112 by @JDTheRipperPC
  • Review unittests - Issue #111 by @JDTheRipperPC

Bugs Fixed

  • Time required for sample_all function? - Issue #118 by @csala @JDTheRipperPC

0.1.2 - 2019-09-18

New Features

  • Add option to model the amount of child rows - Issue 93 by @ManuelAlvarezC

General Improvements

  • Add Evaluation Metrics - Issue 52 by @ManuelAlvarezC

  • Ensure unicity on primary keys on different calls - Issue 63 by @ManuelAlvarezC

Bugs fixed

  • executing readme: 'not supported between instances of 'int' and 'NoneType' - Issue 104 by @csala

0.1.1 - Anonymization of data

  • Add warnings when trying to model an unsupported dataset structure. GH#73
  • Add option to anonymize data. GH#51
  • Add support for modeling data with different distributions, when using GaussianMultivariate model. GH#68
  • Add support for VineCopulas as a model. GH#71
  • Improve GaussianMultivariate parameter sampling, avoiding warnings and invalid parameters. GH#58
  • Fix issue that caused sampled categorical values to sometimes be mixed with numerical values. GH#81
  • Improve the validation of extensions. GH#69
  • Update examples. GH#61
  • Replaced Table class with a NamedTuple. GH#92
  • Fix inconsistent dependencies and add upper bound to dependencies. GH#96
  • Fix error when merging extension in Modeler.CPA when running examples. GH#86

0.1.0 - First Release

  • First release on PyPI.
