Data-SUITE: Data-centric identification of in-distribution incongruous examples

This repository contains the implementation of Data-SUITE, a "Data-Centric AI" framework to identify in-distribution incongruous data examples.

Data-SUITE leverages copula modeling, representation learning, and conformal prediction to build feature-wise confidence interval estimators from a set of training instances. The copula modeling is optional, but offers a useful property: after the initial stages, access to the real training data is no longer needed, and the copula can also be used to augment smaller datasets when needed.
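
As a concrete illustration of the optional copula step, the minimal sketch below fits a Gaussian copula with numpy/scipy and samples synthetic rows. This is an illustrative stand-in under assumed details (Gaussian copula, empirical marginals), not the Data-SUITE implementation itself:

import numpy as np
from scipy import stats

def sample_gaussian_copula(X, n_samples, seed=0):
    """Fit a Gaussian copula to X and draw synthetic rows; after this
    step, the real training data is no longer needed."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Map each marginal to normal scores via its empirical CDF.
    u = stats.rankdata(X, axis=0) / (n + 1)
    z = stats.norm.ppf(u)
    corr = np.corrcoef(z, rowvar=False)
    # Sample correlated normals, then invert through the empirical marginals.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u_new = stats.norm.cdf(z_new)
    return np.column_stack(
        [np.quantile(X[:, j], u_new[:, j]) for j in range(d)]
    )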

The resulting interval estimators can be used to evaluate the congruence of test instances with respect to the training set, answering two practically useful questions:

(1) which test instances will be reliably predicted by a model trained with the training instances?

(2) can we identify incongruous regions of the feature space so that data owners understand the data's limitations or guide future data collection?
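
To make the interval construction concrete, here is a minimal sketch of feature-wise split-conformal intervals, assuming a simple per-feature linear regressor; the actual Data-SUITE estimators (which also use representation learning) are more involved:

import numpy as np
from sklearn.linear_model import LinearRegression

def featurewise_conformal_flags(X_train, X_calib, X_test, alpha=0.1):
    """Flag test values falling outside a (1 - alpha) split-conformal
    interval built for each feature from the remaining features."""
    d = X_train.shape[1]
    flags = np.zeros((X_test.shape[0], d), dtype=bool)
    for j in range(d):
        rest = [k for k in range(d) if k != j]
        model = LinearRegression().fit(X_train[:, rest], X_train[:, j])
        # Nonconformity scores on the held-out calibration split.
        scores = np.abs(X_calib[:, j] - model.predict(X_calib[:, rest]))
        n = len(scores)
        # Conformal quantile with the finite-sample correction.
        level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
        q = np.quantile(scores, level)
        residual = np.abs(X_test[:, j] - model.predict(X_test[:, rest]))
        flags[:, j] = residual > q  # True marks a potentially incongruous value
    return flags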

For more details, please read our ICML 2022 paper: 'Data-SUITE: Data-centric identification of in-distribution incongruous examples'.

Installation

  1. Clone the repository.
  2. Create a new virtual environment with Python 3.7, 3.8, or 3.9, e.g.:
    virtualenv ds_env
  3. Run the following command from the repository directory:
pip install -r requirements.txt
  4. Two libraries used for benchmarks (alibi-detect and aix360) have conflicting requirements for tensorflow. This can be circumvented by running the command below. If this does not resolve the issue, manually install the two packages in requirements-no-deps.txt using pip.
pip install --no-deps -r requirements-no-deps.txt

NOTE: It is now also possible to install this repo from source or PyPI, in the following ways.

  1. From inside the repo you can run:
pip install .

or from anywhere run

pip install data_suite

This installs the minimum number of packages needed to run data_suite.

  2. If you wish to run benchmarks, please install with the benchmarks extra:
pip install data_suite[benchmarks]
  3. If you wish to contribute and adhere to the coding style, please install with the contribute extra:
pip install data_suite[contribute]

Getting started


We provide two tutorial notebooks to illustrate the usage of Data-SUITE, with an example on synthetic data.

These notebooks can be found in the /tutorial folder.

  1. tutorial_simple.ipynb
  • Provides a simple object-oriented (OO) interface to use Data-SUITE, with simple fit & predict options.
  2. tutorial_detailed.ipynb
  • Provides a more detailed look at the inner workings of Data-SUITE.

Both tutorials achieve the same objective: getting started with Data-SUITE.
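
As a rough picture of the OO interface, a fit & predict call might look like the sketch below. The class and method names here are hypothetical placeholders, so please consult tutorial_simple.ipynb for the actual API:

from data_suite import DataSuite  # hypothetical import; see the tutorial for the real name

ds = DataSuite()             # hypothetical class name
ds.fit(X_train)              # build the feature-wise interval estimators
flags = ds.predict(X_test)   # congruence assessment for each test instance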

Data-SUITE with synthetic & real-data

We also provide code to run Data-SUITE on public datasets. This includes the synthetic data experiments, as well as the publicly available real-world datasets.

A variety of Jupyter notebooks are provided for this purpose. They are contained in the notebooks folder of the repo.

For ease of use, we have provided bash scripts to execute the notebooks via Papermill (sketched below). The results for all the different experiments/analyses on the datasets are then stored in their specific /results folder. These include dataframes of metrics, figures, etc.

These bash scripts are contained in the /scripts folder.
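
For reference, each bash script essentially executes a notebook headlessly through Papermill. A minimal sketch using Papermill's Python API is shown below; the exact paths and arguments used by the scripts may differ:

import papermill as pm

# Execute a notebook headlessly; the executed copy (with outputs) is saved
# to the given output path.
pm.execute_notebook(
    "notebooks/synthetic_pipeline.ipynb",
    "results/synthetic/synthetic_pipeline_output.ipynb",
)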

  1. Synthetic data:

The synthetic data experiment uses Weights & Biases (wandb) to log results over the various runs.

Your specific wandb credentials should be added to: notebooks/synthetic_pipeline.ipynb
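
A minimal sketch of the wandb setup expected in that notebook; the key, project, and entity values below are placeholders, not the notebook's actual configuration:

import wandb

wandb.login(key="YOUR_API_KEY")  # placeholder; or set the WANDB_API_KEY env var
run = wandb.init(project="data-suite-synthetic", entity="your-username")  # placeholders
wandb.log({"example_metric": 0.0})  # the notebook logs its metrics like this
run.finish()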

Thereafter one can run from the main dir:

bash scripts/synthetic_pipeline.sh

Once the experiment has completed, all results are logged to wandb. Note that this run might take quite some time. One can then download the .csv of logged results from wandb and place it in the /artifacts folder as synthetic_artifacts.csv.

Since the experiment can take quite long, we have provided an artifact synthetic_artifacts.csv obtained from wandb.

synthetic_artifacts.csv can then be processed to obtain the desired metrics & plots.

To do this, run from the main dir:

bash scripts/process_synthetic.sh

All results will then be written to /results/synthetic

  2. Real data:

To run on the public real-world datasets, one simply needs to run, for example, from the main dir:

bash scripts/run_adult.sh

OR

bash scripts/run_electric.sh

All results from the different main paper & appendix experiments will be written to the /results folder. These include dataframes for tables of metrics, figures, etc.

The real-world dataset notebooks can also serve as inspiration for usage on one's own data.

Citing

If you use this code, please cite the associated paper:

@inproceedings{seedat2022data,
  title={Data-SUITE: Data-centric identification of in-distribution incongruous examples},
  author={Seedat, Nabeel and Crabbe, Jonathan and van der Schaar, Mihaela},
  journal={arXiv preprint arXiv:2202.08836},
  year={2022}
}
