
Transparent Data Valuation

Project description

OpenDataVal: a Unified Benchmark for Data Valuation

Assessing the quality of individual data points is critical for improving model performance and mitigating biases. However, there has been no systematic way to benchmark different data valuation algorithms.

OpenDataVal is an open-source initiative that provides a diverse array of datasets and models (image, NLP, and tabular), data valuation algorithms, and evaluation tasks, all accessible with just a few lines of code.

OpenDataVal also provides leaderboards for data valuation tasks. We've curated and added artificial noise to some datasets. Create your own DataEvaluator to top the leaderboards. OpenDataVal was accepted at the NeurIPS 2023 Datasets and Benchmarks track.

Overview
| Category | Links |
| --- | --- |
| Paper | Paper link |
| Python | Python Version |
| Dependencies | PyTorch, scikit-learn, numpy, Code style: black |
| Documentation | GitHub Pages |
| CI/CD | Build, Coverage |
| Issues | Issues |
| License | MIT License |
| Releases | Releases |
| Citation | Cite Us |

:sparkles: Features


| Feature | Status | Links | Notes |
| --- | --- | --- | --- |
| Datasets | Stable | Docs | Embeddings available for image/NLP datasets |
| Models | Stable | Docs | Support available for scikit-learn models |
| Data Evaluators | Stable | Docs | |
| Experiments | Stable | Docs | |
| Examples | Stable | | |
| CLI | Experimental | opendataval --help | No support for null values |

(Back to top)

:hourglass_flowing_sand: Installation options

  1. Install with pip
    pip install opendataval
    
  2. Clone the repo and install
    git clone https://github.com/opendataval/opendataval.git
    make install
    
    a. Install optional dependencies if you're contributing
    make install-dev
    
    b. If you want to pull in Kaggle datasets, we recommend looking at how to add a kaggle folder to the current directory; see the tutorial here.

(Back to top)

:zap: Quick Start


Here is how to set up an experiment comparing several DataEvaluators. Feel free to change the source code as needed for your project.

from opendataval.dataval import DataOob
from opendataval.experiment import ExperimentMediator, discover_corrupted_sample, noisy_detection

exper_med = ExperimentMediator.model_factory_setup(
    dataset_name='iris',
    force_download=False,
    train_count=50,
    valid_count=50,
    test_count=50,
    model_name='ClassifierMLP',
    train_kwargs={'epochs': 5, 'batch_size': 20},
)
list_of_data_evaluators = [DataOob()]  # Define evaluators here
eval_med = exper_med.compute_data_values(list_of_data_evaluators)

# Runs the discover-corrupted-samples experiment for each DataEvaluator and plots the results
data, fig = eval_med.plot(discover_corrupted_sample)

# Runs a non-plottable experiment
data = eval_med.evaluate(noisy_detection)

:computer: CLI

opendataval comes with a quick CLI tool. The tool is under development, and the template for a CSV input can be found at cli.csv. Note that kwarg arguments must be valid JSON.

If you installed via make install, run the following command:

opendataval --file cli.csv -n [job_id] -o [path/to/output/]

To run the script without installing:

python opendataval --file cli.csv -n [job_id] -o [path/to/output/]

(Back to top)

:control_knobs: API

Here are the four interacting parts of opendataval (a minimal end-to-end sketch follows the list):

  1. DataFetcher: loads data and holds metadata regarding the splits.
  2. Model: a trainable prediction model.
  3. DataEvaluator: measures the data values of input data points for a specified model.
  4. ExperimentMediator: facilitates experiments regarding data values across several DataEvaluators.
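
To see how these pieces fit together, here is a minimal end-to-end sketch reusing the APIs shown in the sections below. The iris dimensions (4 features, 3 classes), the 'accuracy' metric name, and passing metric_name as a keyword are assumptions; adjust to your setup.

from opendataval.dataloader import DataFetcher, mix_labels
from opendataval.dataval import DataOob
from opendataval.experiment import ExperimentMediator, noisy_detection
from opendataval.model import LogisticRegression

# 1. Fetch, split, and noisify a registered dataset (counts are illustrative).
fetcher = DataFetcher(dataset_name='iris')
fetcher = fetcher.split_dataset_by_count(50, 50, 50)
fetcher = fetcher.noisify(mix_labels, noise_rate=.1)

# 2. Define the prediction model (iris: 4 features, 3 classes).
model = LogisticRegression(input_dim=4, output_dim=3)

# 3 & 4. Compute data values and run an experiment across evaluators.
exper_med = ExperimentMediator(fetcher, model, metric_name='accuracy')
eval_med = exper_med.compute_data_values([DataOob()])
df = eval_med.evaluate(noisy_detection)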

(Back to top)

DataFetcher

The DataFetcher takes the name of a dataset registered via Register and loads, transforms, splits, and adds noise to the data set.

from opendataval.dataloader import DataFetcher, mix_labels

DataFetcher.datasets_available()  # ['dataset_name1', 'dataset_name2']
fetcher = DataFetcher(dataset_name='dataset_name1')

fetcher = fetcher.split_dataset_by_count(70, 20, 10)  # 70 train / 20 valid / 10 test points
fetcher = fetcher.noisify(mix_labels, noise_rate=.1)  # mislabel 10% of the labels

x_train, y_train, x_valid, y_valid, x_test, y_test = fetcher.datapoints
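
New datasets are added through Register. The snippet below is a hypothetical sketch rather than the confirmed API: the decorator arguments and the convention of returning (covariates, labels) are assumptions, so check the dataloader docs.

import numpy as np
from opendataval.dataloader import Register

# Hypothetical: register a synthetic dataset under a new name.
@Register('synthetic-gaussian', categorical=True)
def synthetic_gaussian(n=100, input_dim=10):
    covariates = np.random.normal(size=(n, input_dim))
    labels = np.random.choice(2, size=(n,))
    return covariates, labels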

(Back to top)

Model

Model is the trainable prediction model used by DataEvaluators.

from opendataval.model import LogisticRegression

model = LogisticRegression(input_dim, output_dim)  # e.g., input_dim=4, output_dim=3 for iris

model.fit(x, y)
model.predict(x)
>>> torch.Tensor(...)
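
The Features table notes support for scikit-learn models. A hedged sketch of wrapping one is below; the ClassifierSkLearnWrapper name and its (model class, number of classes) argument order are assumptions to verify against the model docs.

from sklearn.ensemble import RandomForestClassifier
from opendataval.model import ClassifierSkLearnWrapper

# Hypothetical: wrap an sklearn classifier so DataEvaluators can fit it.
# Arguments assumed to be (model class, number of classes).
model = ClassifierSkLearnWrapper(RandomForestClassifier, 3)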

(Back to top)

DataEvaluator

We have a catalog of DataEvaluators to run experiments with. To use one, pass in the Model, the DataFetcher, and an evaluation metric (such as accuracy).

from opendataval.dataval.ame import AME

dataval = (
    AME(num_models=8000)
    .train(fetcher=fetcher, pred_model=model, metric=metric)
)

data_values = dataval.data_values  # Cached values
data_values = dataval.evaluate_data_values()  # Recomputed values
>>> np.ndarray([.888, .132, ...])
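
To create your own DataEvaluator (for example, to enter the leaderboards below), subclass the base class. This is a minimal sketch under stated assumptions: the opendataval.dataval.api import path, the train_data_values/evaluate_data_values hooks, and the self.x_train attribute are all assumptions to check against the docs.

import numpy as np
from opendataval.dataval.api import DataEvaluator

class RandomBaseline(DataEvaluator):
    """Hypothetical baseline assigning uniform-random data values."""

    def train_data_values(self, *args, **kwargs):
        # A random baseline needs no training.
        return self

    def evaluate_data_values(self) -> np.ndarray:
        # One value per training point; assumes the base class has
        # populated self.x_train before evaluation.
        return np.random.uniform(size=len(self.x_train))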

(Back to top)

ExperimentMediator

ExperimentMediator helps produce a cohesive and controlled experiment. NOTE: warnings are raised if errors occur in a specific DataEvaluator.

expermed = ExperimentMediator(fetcher, model, train_kwargs, metric_name).compute_data_values(data_evaluators)

Run experiments by passing in an experiment function with the signature (DataEvaluator, DataFetcher, ...) -> dict[str, Any]. There are 5 such functions in exper_methods.py, three of which are plottable.

df = expermed.evaluate(noisy_detection)
df, figure = expermed.plot(discover_corrupted_sample)
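
Because an experiment is just a function with that signature, you can write your own. A minimal sketch follows (mean_data_value is a hypothetical name; data_values is the cached attribute shown in the DataEvaluator section, and the DataEvaluator import path is an assumption):

from typing import Any

from opendataval.dataloader import DataFetcher
from opendataval.dataval.api import DataEvaluator

def mean_data_value(evaluator: DataEvaluator, fetcher: DataFetcher) -> dict[str, Any]:
    # Hypothetical experiment: summarize each evaluator's data values.
    return {'mean_data_value': float(evaluator.data_values.mean())}

df = expermed.evaluate(mean_data_value)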

For more examples, please refer to the Documentation.

(Back to top)

:medal_sports: opendataval Leaderboards

For datasets whose names start with the prefix challenge, we provide leaderboards. Compute the data values with an ExperimentMediator and use the save_dataval function to save a CSV. Upload it here! Uploading will allow us to systematically compare your DataEvaluator against others in the field.

The available challenges are currently:

  1. challenge-iris
exper_med = ExperimentMediator.model_factory_setup(
    dataset_name='challenge-...', model_name=model_name, train_kwargs={...}, metric_name=metric_name
)
exper_med.compute_data_values([custom_data_evaluator]).evaluate(save_dataval, save_output=True)

(Back to top)

:wave: Contributing

If you have a quick suggestion, recommendation, or bug fix, please open an issue. If you want to contribute to the project through datasets, experiments, presets, or bug fixes, please see our Contribution page.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(Back to top)

:bulb: Vision

  • clean, descriptive specification syntax -- based on modern object-oriented design principles for data science
  • fair model assessment and benchmarking -- easily build and evaluate your DataEvaluators
  • easily extensible -- easily add your own datasets, models, and data evaluators

(Back to top)

:classical_building: License

Distributed under the MIT License. See LICENSE.txt for more information.

(Back to top)

Cite Us

If you found the library or the paper useful, please cite us!

@inproceedings{jiang2023opendataval,
    title={OpenDataVal: a Unified Benchmark for Data Valuation},
    author={Kevin Fu Jiang and Weixin Liang and James Zou and Yongchan Kwon},
    booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
    year={2023},
    url={https://openreview.net/forum?id=eEK99egXeB}
}

(Back to top)


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

opendataval-1.3.0.tar.gz (76.0 kB)

Uploaded Source

Built Distribution

opendataval-1.3.0-py3-none-any.whl (108.5 kB)

Uploaded Python 3

File details

Details for the file opendataval-1.3.0.tar.gz.

File metadata

  • Download URL: opendataval-1.3.0.tar.gz
  • Size: 76.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for opendataval-1.3.0.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 28df5ba3ece654f1906cfffb8e4811d8dc06ccc70394a14eddf766a6ee08ba3c |
| MD5 | 8cea7c0b263ff4258e06269261252828 |
| BLAKE2b-256 | 9a7936ae1c6ca5a5c74972d56cbf5212c676f307de85b6c6e3a9773f7543b0bb |

See more details on using hashes here.

File details

Details for the file opendataval-1.3.0-py3-none-any.whl.

File metadata

  • Download URL: opendataval-1.3.0-py3-none-any.whl
  • Size: 108.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for opendataval-1.3.0-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 511e0b2d19169f8cd7bf2759bbd549059b5e3aca5ed7f547564b7b54edc17837 |
| MD5 | 24a255f8ce338ecd6d0ee855070ef2ab |
| BLAKE2b-256 | 0c2b974a141f0919098a6886852af325964094fcb9fe18472338e535b202ed84 |

See more details on using hashes here.
