Transparent Data Valuation
OpenDataVal: a Unified Benchmark for Data Valuation
Assessing the quality of individual data points is critical for improving model performance and mitigating biases. However, there has been no systematic way to benchmark different data valuation algorithms.
OpenDataVal is an open-source initiative that provides a diverse array of datasets and models (image, NLP, and tabular), data valuation algorithms, and evaluation tasks, all accessible with just a few lines of code.
OpenDataVal also provides leaderboards for data evaluation tasks. We've curated some datasets and added artificial noise to them. Create your own DataEvaluator to top the leaderboards.
:sparkles: Features
Feature | Status | Links | Notes
---|---|---|---
Datasets | Stable | Docs | Embeddings available for image/NLP datasets
Models | Stable | Docs | Support available for sk-learn models
Data Evaluators | Stable | Docs |
Experiments | Stable | Docs |
Examples | Stable | |
CLI | Experimental | `opendataval --help` | No support for null values
:hourglass_flowing_sand: Installation options
- Install with pip: `pip install opendataval`
- Clone the repo and install:
  - `git clone https://github.com/opendataval/opendataval.git`
  - `make install`

  a. Install optional dependencies if you're contributing: `make install-dev`

  b. If you want to pull in Kaggle datasets, we recommend looking up how to add a `kaggle` folder to the current directory. Tutorial here
:zap: Quick Start
To set up an experiment on several DataEvaluators, start with an ExperimentMediator. Feel free to change the source code as needed for your project.
from opendataval.experiment import ExperimentMediator
from opendataval.experiment.exper_methods import discover_corrupted_sample, noisy_detection  # experiment functions (see API section below)
exper_med = ExperimentMediator.model_factory_setup(
dataset_name='iris',
force_download=False,
train_count=100,
valid_count=50,
test_count=50,
model_name='ClassifierMLP',
train_kwargs={'epochs': 5, 'batch_size': 20},
)
list_of_data_evaluators = [ChildEvaluator(), ...]  # Define evaluators here (a concrete example follows below)
eval_med = exper_med.compute_data_values(list_of_data_evaluators)
# Runs the "discover corrupted samples" experiment for each DataEvaluator and plots the results
data, fig = eval_med.plot(discover_corrupted_sample)
# Runs a non-plottable experiment
data = eval_med.evaluate(noisy_detection)
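As a concrete example of filling in that evaluator list, you could use the `AME` evaluator that appears later in this README (any evaluator from the catalog works; `AME` is shown purely as an illustration):

```python
from opendataval.dataval.ame import AME

# Any DataEvaluator from the catalog can go in this list; AME and its
# num_models setting are taken from the DataEvaluator section below.
list_of_data_evaluators = [AME(num_models=8000)]
```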
:computer: CLI
opendataval
comes with a quick CLI tool, The tool is under development and the template for a csv input is found at cli.csv
. Note that for kwarg arguments, the input must be valid json.
To use run the following command if installed with make install
:
opendataval --file cli.csv -n [job_id] -o [path/to/file/]
To run without installing the script:
python opendataval --file cli.csv -n [job_id] -o [path/to/file/]
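For example, a kwargs cell in the CSV (such as the training kwargs) might hold a JSON object like `{"epochs": 5, "batch_size": 20}`, mirroring the `train_kwargs` used in the Quick Start above; the exact column layout is defined by the `cli.csv` template.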
:control_knobs: API
Here are the 4 interacting parts of `opendataval`:
- `DataFetcher`: loads data and holds metadata regarding splits
- `Model`: trainable prediction model
- `DataEvaluator`: measures the data values of input data points for a specified model
- `ExperimentMediator`: facilitates experiments regarding data values across several `DataEvaluator`s
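Below is a minimal sketch of how these four parts chain together, stitched from the snippets in the subsections that follow. The dataset name, model dimensions, train kwargs, and metric name are placeholder assumptions, as are the exact import paths for `mix_labels` and the experiment functions:

```python
from opendataval.dataloader import DataFetcher, mix_labels
from opendataval.dataval.ame import AME
from opendataval.experiment import ExperimentMediator
from opendataval.experiment.exper_methods import noisy_detection
from opendataval.model import LogisticRegression

# 1. DataFetcher: load, split, and (optionally) noisify a registered dataset
fetcher = (
    DataFetcher(dataset_name='dataset_name1')
    .split_dataset_by_count(70, 20, 10)
    .noisify(mix_labels, noise_rate=0.1)
)

# 2. Model: the prediction model the evaluators will train
model = LogisticRegression(input_dim=10, output_dim=2)  # placeholder dims

# 3. DataEvaluator(s): the algorithms that assign a value to each data point
evaluators = [AME(num_models=8000)]

# 4. ExperimentMediator: trains the evaluators and runs experiments over them
exper_med = ExperimentMediator(fetcher, model, {'epochs': 5, 'batch_size': 20}, 'accuracy')
df = exper_med.compute_data_values(evaluators).evaluate(noisy_detection)
```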
DataFetcher
The `DataFetcher` takes the name of a `Register` dataset and loads, transforms, splits, and adds noise to the data set.
from opendataval.dataloader import DataFetcher, mix_labels
DataFetcher.datasets_available() # ['dataset_name1', 'dataset_name2']
fetcher = DataFetcher(dataset_name='dataset_name1')
fetcher = fetcher.split_dataset_by_count(70, 20, 10)
fetcher = fetcher.noisify(mix_labels, noise_rate=.1)
x_train, y_train, x_valid, y_valid, x_test, y_test = fetcher.datapoints
Model
`Model` is the predictive model for Data Evaluators.
from opendataval.model import LogisticRegression
model = LogisticRegression(input_dim, output_dim)
model.fit(x, y)
model.predict(x)
>>> torch.Tensor(...)
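The feature table above notes support for scikit-learn models. A hedged sketch of how that might look is below; the wrapper class `ClassifierSkLearnWrapper` and its constructor arguments are assumptions, so check the Docs for the exact API:

```python
from sklearn.ensemble import RandomForestClassifier

from opendataval.model import ClassifierSkLearnWrapper  # class name is an assumption

# Wrap an sklearn classifier so it can be passed wherever a Model is expected.
model = ClassifierSkLearnWrapper(RandomForestClassifier, 2)  # 2 = number of classes
```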
DataEvaluator
We have a catalog of `DataEvaluator`s to run experiments with. To do so, input the `Model`, `DataFetcher`, and an evaluation metric (such as accuracy).
from opendataval.dataval.ame import AME
dataval = (
AME(num_models=8000)
.train(fetcher=fetcher, pred_model=model, metric=metric)
)
data_values = dataval.data_values # Cached values
data_values = dataval.evaluate_data_values() # Recomputed values
>>> np.ndarray([.888, .132, ...])
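Since the returned array holds one value per training point, a common follow-up is to rank the training set by value; a small sketch using plain NumPy (the cutoff of 10 points is arbitrary):

```python
import numpy as np

# data_values[i] is the value assigned to the i-th training point.
# Sorting ascending surfaces the least valuable (likely noisy) points first.
lowest_value_idx = np.argsort(data_values)[:10]
highest_value_idx = np.argsort(data_values)[::-1][:10]
```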
ExperimentMediator
`ExperimentMediator` helps make a cohesive and controlled experiment. NOTE: Warnings are raised if errors occur in a specific `DataEvaluator`.
expermed = ExperimentMediator(fetcher, model, train_kwargs, metric_name).compute_data_values(data_evaluators)
Run experiments by passing in an experiment function with the signature `(DataEvaluator, DataFetcher, ...) -> dict[str, Any]`. There are 5 such functions in `exper_methods.py`, three of which are plottable. You can also write your own, as sketched after the snippet below.
df = expermed.evaluate(noisy_detection)
df, figure = expermed.plot(discover_corrupted_sample)
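Because any callable matching that signature can be passed in, a custom experiment is straightforward; a minimal sketch (the `value_summary` function below is hypothetical, not part of opendataval):

```python
import numpy as np

def value_summary(evaluator, fetcher, **kwargs) -> dict:
    """Hypothetical experiment: summarize the distribution of data values."""
    values = evaluator.data_values  # cached values, as shown above
    return {'mean_value': float(np.mean(values)), 'std_value': float(np.std(values))}

df = expermed.evaluate(value_summary)
```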
For more examples, please refer to the Documentation
:medal_sports: opendataval Leaderboards
For datasets that start with the prefix `challenge`, we provide leaderboards. Compute the data values with an `ExperimentMediator` and use the `save_dataval` function to save a CSV. Upload it here! Uploading will allow us to systematically compare your `DataEvaluator` against others in the field.
The available challenges are currently:
- `challenge-iris`
exper_med = ExperimentMediator.model_factory_setup(
dataset_name='challenge-...', model_name=model_name, train_kwargs={...}, metric_name=metric_name
)
exper_med.compute_data_values([custom_data_evaluator]).evaluate(save_dataval, save_output=True)
:wave: Contributing
If you have a quick suggestion, recommendation, or bug fix, please open an issue. If you want to contribute to the project, whether through datasets, experiments, presets, or fixes, please see our Contribution page.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
:bulb: Vision
- Clean, descriptive specification syntax -- based on modern object-oriented design principles for data science
- Fair model assessment and benchmarking -- easily build and evaluate your Data Evaluators
- Easily extensible -- add your own datasets, data evaluators, models, tests, etc.!
:classical_building: License
Distributed under the MIT License. See `LICENSE.txt` for more information.