
🔍 ER-Evaluation: An End-to-End Evaluation Framework for Entity Resolution Systems

ER-Evaluation is a Python package for the evaluation of entity resolution (ER) systems.

It implements an entity-centric approach to evaluation. Given a sample of resolved entities, it provides:

  • summary statistics, such as average cluster size, matching rate, homonymy rate, and name variation rate.

  • comparison statistics between entity resolutions, such as the proportion of links in one resolution that are also present in the other, and vice versa.

  • performance estimates with uncertainty quantification, such as precision, recall, and F1 score estimates, as well as B-cubed and cluster metric estimates.

  • error analysis, such as cluster-level error metrics and analysis tools to find the root causes of errors.

  • convenience visualization tools.

For more information on how to resolve a sample of entities for evaluation and model training, please refer to our data labeling guide.
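For instance, summary statistics operate directly on a predicted disambiguation. Below is a minimal sketch; the helper names average_cluster_size and matching_rate are assumed to match the statistics listed above, so verify them against the API reference.

import pandas as pd
import er_evaluation as ee

# A toy disambiguation ("membership vector"): record IDs map to cluster IDs.
prediction = pd.Series(
    ["c1", "c1", "c2", "c2", "c3"],
    index=["r1", "r2", "r3", "r4", "r5"],
)

# Assumed helper names mirroring the statistics listed above.
print(ee.average_cluster_size(prediction))
print(ee.matching_rate(prediction))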

Installation

Install the released version from PyPI using:

pip install er-evaluation

Or install the development version using:

pip install git+https://github.com/Valires/er-evaluation.git

Documentation

Please refer to the documentation website er-evaluation.readthedocs.io.

Usage Examples

Please refer to the User Guide or our Visualization Examples for a complete usage guide.

In summary, here’s how you might use the package.

  1. Import your predicted disambiguations and reference benchmark dataset. The benchmark dataset should contain a sample of disambiguated entities.

import er_evaluation as ee

predictions, reference = ee.load_pv_disambiguations()
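Here, predictions is a collection of membership vectors (pandas Series mapping record IDs to cluster IDs), one per disambiguation, and reference is the membership vector of the benchmark sample. A quick inspection sketch, assuming predictions behaves as a mapping (exact keys depend on the shipped dataset):

# Inspect the loaded data; key names below are illustrative.
print(len(predictions))               # number of available disambiguations
first_key = next(iter(predictions))
print(predictions[first_key].head())  # record ID -> cluster ID
print(reference.head())               # benchmark membership vector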
  2. Plot summary statistics and compare disambiguations.

ee.plot_summaries(predictions)
[Figure: summary statistics plots]
ee.plot_comparison(predictions)
[Figure: disambiguation comparison plot]
  3. Define sampling weights and estimate performance metrics.

ee.plot_estimates(predictions, {"sample":reference, "weights":"cluster_size"})
[Figure: performance estimates with uncertainty quantification]
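The second argument pairs the benchmark sample with sampling weights; "cluster_size" requests cluster-size weighting. A pandas Series of precomputed weights indexed by the sampled cluster IDs may also work, but treat this as an assumption to verify against the User Guide:

import pandas as pd

# Hypothetical uniform weights over the sampled clusters (assumption: a
# Series is accepted in place of the "cluster_size" keyword).
uniform_weights = pd.Series(1.0, index=reference.unique())
ee.plot_estimates(predictions, {"sample": reference, "weights": uniform_weights})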
  4. Perform error analysis using cluster-level explanatory features and cluster error metrics.

ee.make_dt_regressor_plot(
    y,                     # cluster-level error metric for each sampled cluster
    weights,               # sampling weights aligned with y
    features_df,           # DataFrame of cluster-level explanatory features
    numerical_features,    # names of numerical feature columns
    categorical_features,  # names of categorical feature columns
    max_depth=3,
    type="sunburst",
)
[Figure: decision tree sunburst visualization]

Development Philosophy

ER-Evaluation is designed to be a unified source of evaluation tools for entity resolution systems, adhering to the Unix philosophy of simplicity, modularity, and composability. The package contains Python functions that take standard data structures such as pandas Series and DataFrames as input, making it easy to integrate into existing workflows. By importing the necessary functions and calling them on your data, you can easily use ER-Evaluation to evaluate your entity resolution system without worrying about custom data structures or complex architectures.
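As the usage examples above suggest, a disambiguation is represented as a membership vector: a pandas Series indexed by record IDs whose values are cluster IDs. Ordinary pandas operations therefore compose naturally with the package's functions, as in this minimal sketch:

import pandas as pd

# A membership vector: record IDs map to cluster IDs.
membership = pd.Series(
    ["c1", "c1", "c2", "c2", "c2", "c3"],
    index=["r1", "r2", "r3", "r4", "r5", "r6"],
)

# Standard pandas operations apply before handing the Series to
# ER-Evaluation functions, e.g. restricting evaluation to a subset of records.
subset = membership.loc[["r1", "r2", "r3", "r4"]]
print(subset.value_counts())  # cluster sizes within the subset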

Citation

Please acknowledge the publications below if you use ER-Evaluation:

  • Binette, Olivier. (2022). ER-Evaluation: An End-to-End Evaluation Framework for Entity Resolution Systems. Available online at github.com/Valires/ER-Evaluation

  • Binette, Olivier, Sokhna A York, Emma Hickerson, Youngsoo Baek, Sarvo Madhavan, Christina Jones. (2022). Estimating the Performance of Entity Resolution Algorithms: Lessons Learned Through PatentsView.org. arXiv e-prints: arXiv:2210.01230

  • Upcoming: “An End-to-End Framework for the Evaluation of Entity Resolution Systems With Application to Inventor Name Disambiguation”

License

Changelog

2.3.0 (November 29, 2023)

  • Fix handling of NaN values in compress_memberships()

2.2.1 (November 8, 2023)

  • Small fixes to paper and documentation.

2.2.0 (October 26, 2023)

  • Streamline package structure

  • Additional tests

  • Improved documentation

2.1.0 (June 02, 2023)

  • Add sunburst visualization for decision tree regressors

  • Add decision tree regression pipeline for subgroup discovery

  • Add search utilities

  • Prepare submission to JOSS

2.0.0 (March 27, 2023)

  • Improve documentation

  • Add handling of NA values

  • Bug fixes

  • Add datasets module

  • Add visualization functions

  • Performance improvements

  • BREAKING: error_analysis functions have been renamed.

  • BREAKING: estimators have been renamed.

  • Added estimators support for sensitivity analyses

  • Added fairness plots

  • Added compress_memberships() function for performance improvements.

1.2.0 (January 11, 2023)

  • Refactoring and documentation overhaul.

1.1.0 (January 10, 2023)

  • Added additional error metrics, performance evaluation metrics, and performance estimators.

  • Added record-level error metrics and error analysis tools.

1.0.2 (December 5, 2022)

  • Update setup.py with find_packages()

1.0.1 (November 30, 2022)

  • Add “normalize” option to plot_cluster_sizes_distribution.

  • Fix bugs in homonymy_rate and name_variation_rate.

  • Fix bug in estimators.

1.0.0

  • Initial release
