Tools to evaluate PatentsView's disambiguation algorithms
📊 PatentsView-Evaluation: Benchmark Disambiguation Algorithms
pv_evaluation is a Python package built to help advance research on author/inventor name disambiguation systems such as PatentsView. It provides:
- A large set of benchmark datasets for U.S. patents inventor name disambiguation.
- Disambiguation summary statistics, evaluation methodology, and performance estimators through the ER-Evaluation Python package.
See the project website for full documentation. The Examples page provides real-world examples of how pv_evaluation's submodules are used.
Submodules
pv_evaluation has the following submodules:
- benchmark.data: Access to evaluation datasets and standardized comparison benchmarks. The following benchmark datasets are available:
- Academic Life Sciences (ALS) inventors benchmark.
- Israeli inventors benchmark.
- Engineering and Sciences (ENS) inventors benchmark.
- Lai's 2011 inventors benchmark.
- PatentsView's 2021 inventors benchmark.
- Binette et al.'s 2022 inventors benchmark.
- benchmark.report: Visualization of key monitoring and performance metrics.
- templates: Templated performance summary reports.
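For orientation, the documented entry points can be imported directly from these submodules. This is a minimal sketch, not an exhaustive listing; only function names that appear elsewhere in this README are shown:

```python
# Benchmark datasets are exposed under pv_evaluation.benchmark:
from pv_evaluation.benchmark import load_binette_2022_inventors_benchmark

# Templated performance summary reports are exposed under pv_evaluation.templates:
from pv_evaluation.templates import render_inventor_disambiguation_report
```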
Installation
Install the released version of pv_evaluation using:

```bash
pip install pv-evaluation
```

Rendering reports requires installing Quarto from quarto.org.
Examples
Note: Working with the full patent data requires large amounts of memory (we suggest having 64GB RAM available).
See the examples page for complete reproducible examples. The examples below only provide a quick overview of pv_evaluation's functionality.
Metrics and Summary Statistics
Generate an HTML report summarizing properties of the current disambiguation algorithm (see this example):
```python
from pv_evaluation.templates import render_inventor_disambiguation_report

# Render an HTML report comparing two disambiguation files against the
# non-disambiguated inventor mentions file.
render_inventor_disambiguation_report(
    ".",
    disambiguation_files=["disambiguation_20211230.tsv", "disambiguation_20220630.tsv"],
    inventor_not_disambiguated_file="g_inventor_not_disambiguated.tsv",
)
```
Benchmark Datasets
Access PatentsView-Evaluation's large collection of benchmark datasets:
```python
from pv_evaluation.benchmark import *

load_lai_2011_inventors_benchmark()
load_israeli_inventors_benchmark()
load_patentsview_inventors_benchmark()
load_als_inventors_benchmark()
load_ens_inventors_benchmark()
load_binette_2022_inventors_benchmark()
load_air_umass_assignees_benchmark()
load_nber_subset_assignees_benchmark()
```
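Each loader returns the benchmark data directly. As a quick sanity check, a loaded benchmark can be inspected like any other pandas object; this is a minimal sketch, assuming the benchmark loads as a pandas Series mapping inventor mention IDs to cluster identifiers:

```python
from pv_evaluation.benchmark import load_israeli_inventors_benchmark

# Assumption: the benchmark loads as a pandas Series mapping inventor mention
# IDs to disambiguated cluster identifiers.
benchmark = load_israeli_inventors_benchmark()

print(benchmark.head())     # first few inventor mentions
print(benchmark.nunique())  # number of distinct inventor clusters
```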
Representative Performance Evaluation
See this example of how representative performance estimates are obtained using Binette et al.'s (2022) benchmark dataset.
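The sketch below outlines the general workflow under stated assumptions: the estimator import, the disambiguation file name, and the column names are placeholders, and the exact ER-Evaluation API should be checked against its documentation.

```python
import pandas as pd

from pv_evaluation.benchmark import load_binette_2022_inventors_benchmark

# Hypothetical import: ER-Evaluation provides design-based performance
# estimators, but the exact module path and function name may differ by
# version; consult the ER-Evaluation documentation.
from er_evaluation.estimators import pairwise_precision_design_estimate

# Predicted disambiguation, assumed to be a pandas Series mapping inventor
# mention IDs to predicted inventor IDs. File and column names are placeholders.
prediction = (
    pd.read_csv("disambiguation_20220630.tsv", sep="\t", dtype=str)
    .set_index("mention_id")["inventor_id"]
)

# Benchmark clusters used as the reference disambiguation.
reference = load_binette_2022_inventors_benchmark()

# Estimate pairwise precision with cluster-size sampling weights, following
# the methodology described in Binette et al. (2022).
estimate = pairwise_precision_design_estimate(prediction, reference, weights="cluster_size")
print(estimate)
```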
Citation
- Binette, Olivier, Sarvo Madhavan, Jack Butler, Beth Anne Card, Emily Melluso, and Christina Jones (2023). PatentsView-Evaluation: Evaluation Datasets and Tools to Advance Research on Inventor Name Disambiguation. arXiv e-prints: arXiv:2301.03591.
- Binette, Olivier, Sokhna A York, Emma Hickerson, Youngsoo Baek, Sarvo Madhavan, and Christina Jones (2022). Estimating the Performance of Entity Resolution Algorithms: Lessons Learned Through PatentsView.org. arXiv e-prints: arXiv:2210.01230.
Contributing
Contribute code and documentation
Look through the GitHub issues for bugs and feature requests. To contribute to this package:
- Fork this repository
- Make your changes and update CHANGELOG.md
- Submit a pull request
- For maintainers: if needed, update the "release" branch and create a release.
A conda environment is provided for development convenience. To create or update this environment, make sure you have conda installed and then run `make env`. You can then activate the development environment using `conda activate pv-evaluation`.
The makefile provides other development utilities, such as `make black` to format Python files, `make data` to re-generate benchmark datasets from raw data located on AWS S3, and `make docs` to generate the documentation website.
Raw data
Raw public data is located on PatentsView's AWS S3 server at https://s3.amazonaws.com/data.patentsview.org/PatentsView-Evaluation/data-raw.zip. This zip file should be updated as needed to reflect datasets provided by this package and to ensure that original data sources are preserved without modification.
Testing
The minimal testing requirement for this package is a check that all code executes without error. We recommend placing execution checks in a runnable notebook and using the testbook package for execution within unit tests. User examples should also be provided to exemplify usage on real data.
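For example, a minimal execution check with testbook might look like the sketch below (the notebook path is a placeholder):

```python
from testbook import testbook

# Execute the example notebook end-to-end; the test fails if any cell raises.
@testbook("examples/inventor-disambiguation-report.ipynb", execute=True)
def test_examples_run_without_error(tb):
    pass
```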
Report bugs and submit feedback
Report bugs and submit feedback at https://github.com/PatentsView/PatentsView-Evaluation/issues.
Download files
- Source Distribution: pv_evaluation-2.1.1.tar.gz
- Built Distribution: pv_evaluation-2.1.1-py3-none-any.whl
File details
Details for the file pv_evaluation-2.1.1.tar.gz.
File metadata
- Download URL: pv_evaluation-2.1.1.tar.gz
- Upload date:
- Size: 5.7 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.16
File hashes

Algorithm | Hash digest
---|---
SHA256 | 88c68a2410e532fbe09cdbbde63500893db430e93dab2623359dbaee2091e239
MD5 | 752583e9b54f80f99b417700a395fa9c
BLAKE2b-256 | b064f49f518c6d466fa66a73d9c8d524d7abdbef3fe61fe03c1adc5f9fc31071
File details
Details for the file pv_evaluation-2.1.1-py3-none-any.whl.
File metadata
- Download URL: pv_evaluation-2.1.1-py3-none-any.whl
- Upload date:
- Size: 5.9 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.16
File hashes

Algorithm | Hash digest
---|---
SHA256 | 7ffaca0e606c462efc132f228e566c33d9f36bcdc8c3426a274afbb08d83f832
MD5 | 319c3e8a46491d038286b624b11500ba
BLAKE2b-256 | d8ccaffc866a4651d60675ff9046bf5f0a65f3b97fa498b80c7c2c440b595440