splink: Probabilistic record linkage at scale
WARNING: Splink is a work in progress and is currently in beta testing. Please feel free to try it, but note that this software is not fully tested and the interface is likely to continue to change.
splink implements Fellegi-Sunter's canonical model of record linkage in Apache Spark, including the EM algorithm to estimate the parameters of the model.
The aim of splink is to:
- Work at much greater scale than current open source implementations (100 million records +).
- Get results faster than current open source implementations, with runtimes of less than an hour.
- Have a highly transparent methodology, so that match scores can be easily explained both graphically and in words.
- Have accuracy similar to some of the best alternatives.
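For intuition, the Fellegi-Sunter scoring that splink implements can be sketched in plain Python. This is an illustration of the model, not splink's API; the field names and the m/u probabilities below are hypothetical values chosen for the example:

```python
import math

# For each comparison field:
#   m = P(field agrees | records are a true match)
#   u = P(field agrees | records are not a match)
# These values are made up for illustration.
M_U = {
    "first_name": (0.90, 0.010),
    "surname": (0.95, 0.020),
    "dob": (0.99, 0.001),
}

def match_weight(agreements):
    """Sum of log2 Bayes factors across comparison fields."""
    weight = 0.0
    for field, agrees in agreements.items():
        m, u = M_U[field]
        if agrees:
            weight += math.log2(m / u)
        else:
            weight += math.log2((1 - m) / (1 - u))
    return weight

def match_probability(weight, prior=0.001):
    """Convert a match weight into a posterior match probability,
    given a prior probability that a random record pair is a match."""
    bayes_factor = 2.0 ** weight
    odds = (prior / (1 - prior)) * bayes_factor
    return odds / (1 + odds)

# Two records agreeing on name fields but disagreeing on date of birth:
w = match_weight({"first_name": True, "surname": True, "dob": False})
print(match_probability(w))
```

In splink itself, the m and u parameters are not supplied by hand as above but estimated from the data using the EM algorithm, and the pairwise scoring is distributed across a Spark cluster.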
Interactive demo
You can run demos of splink in an interactive Jupyter notebook.
Documentation
Better docs are to come. The best documentation is currently the series of demonstration notebooks in the splink_demos repo.
Acknowledgements
We are grateful to ADR UK (Administrative Data Research UK) for providing funding for this work as part of the Data First project.