
Fast probabilistic data linkage at scale


Fast, accurate and scalable probabilistic data linkage using your choice of SQL backend.

splink is a Python package for probabilistic record linkage (entity resolution).

Its key features are:

  • It is extremely fast, capable of linking a million records on a laptop in around a minute.

  • It is highly accurate, with support for term frequency adjustments and sophisticated fuzzy matching logic.

  • Linking jobs can be executed in Python (using the DuckDB package), or using big-data backends like AWS Athena and Spark to link 100+ million records.

  • Training data is not required because models can be trained using an unsupervised approach.

  • It produces a wide variety of interactive outputs, helping users to understand their model and diagnose linkage problems.

The core linkage algorithm is an implementation of Fellegi-Sunter's model of record linkage, with various customisations to improve accuracy.
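To give a flavour of how a Fellegi-Sunter model scores a record pair (a hand-rolled sketch for illustration only, not Splink's internal code): each field comparison has an m probability, P(values agree | records match), and a u probability, P(values agree | records do not match). Agreement on a field multiplies the odds of a match by m/u; disagreement multiplies them by (1-m)/(1-u).

```python
# Minimal sketch of Fellegi-Sunter scoring (illustrative only, not Splink's API).
def match_probability(prior_odds, comparisons):
    """comparisons: list of (agrees, m, u) tuples, one per field."""
    odds = prior_odds
    for agrees, m, u in comparisons:
        odds *= (m / u) if agrees else ((1 - m) / (1 - u))
    return odds / (1 + odds)

# Two records agreeing on first name and dob but disagreeing on surname,
# with a prior of 1-in-1000 that a random pair is a match:
p = match_probability(
    prior_odds=1 / 1000,
    comparisons=[(True, 0.9, 0.01), (False, 0.9, 0.05), (True, 0.95, 0.001)],
)
print(p)  # → 0.9
```

The m and u values here are invented for the example; in Splink they are estimated from the data itself.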

What does Splink do?

Splink deduplicates and links records from datasets that lack a unique identifier.

For example, a few of your records may look like this:

row_id  first_name  surname  dob         city
1       lucas       smith    1984-01-02  London
2       lucas       smyth    1984-07-02  Manchester
3       lucas       smyth    1984-07-02
4       david       jones                Leeds
5       david       jones    1990-03-21  Leeds

Splink produces pairwise predictions of the links:

row_id_l  row_id_r  match_probability
1         2         0.9
1         3         0.85
2         3         0.92
4         5         0.7

And clusters the predictions to produce an estimated unique id:

cluster_id  row_id
a           1
a           2
a           3
b           4
b           5
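The clustering step amounts to finding connected components among the pairwise links that score above a threshold. A toy union-find version of this idea (illustrative only; Splink performs this step inside the SQL backend):

```python
# Toy connected-components clustering of thresholded pairwise predictions.
def cluster(edges, threshold):
    """edges: list of (row_id_l, row_id_r, match_probability) tuples."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for left, right, prob in edges:
        if prob >= threshold:
            parent[find(left)] = find(right)

    clusters = {}
    for node in parent:
        clusters.setdefault(find(node), set()).add(node)
    return sorted(sorted(members) for members in clusters.values())

edges = [(1, 2, 0.9), (1, 3, 0.85), (2, 3, 0.92), (4, 5, 0.7)]
print(cluster(edges, 0.6))  # → [[1, 2, 3], [4, 5]]
```

With the predictions from the table above and a threshold of 0.6, rows 1-3 and rows 4-5 collapse into the two clusters shown.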


The homepage for the Splink documentation can be found here. Interactive demos can be found here, or via the Binder link in the repository.


The specification of the Fellegi-Sunter statistical model behind Splink is similar to that used in the R fastLink package. Accompanying the fastLink package is an academic paper that describes this model. A series of interactive articles also explores the theory behind Splink.


Splink supports Python 3.7+. To obtain the latest released version of Splink:

pip install splink


The following code demonstrates how to estimate the parameters of a deduplication model, use it to identify duplicate records, and then use clustering to generate an estimated unique person ID.

For more detailed tutorials, please see here.

from splink.duckdb.duckdb_linker import DuckDBLinker
from splink.duckdb.duckdb_comparison_library import (
    exact_match,
    levenshtein_at_thresholds,
)

import pandas as pd

df = pd.read_csv("./tests/datasets/fake_1000_from_splink_demos.csv")

settings = {
    "link_type": "dedupe_only",
    "blocking_rules_to_generate_predictions": [
        "l.first_name = r.first_name",
        "l.surname = r.surname",
    ],
    "comparisons": [
        levenshtein_at_thresholds("first_name", 2),
        exact_match("city", term_frequency_adjustments=True),
    ],
}

linker = DuckDBLinker(df, settings)

# Estimate the model parameters using unsupervised expectation maximisation,
# training on two different blocking rules
blocking_rule_for_training = "l.first_name = r.first_name and l.surname = r.surname"
linker.estimate_parameters_using_expectation_maximisation(blocking_rule_for_training)

blocking_rule_for_training = "l.dob = r.dob"
linker.estimate_parameters_using_expectation_maximisation(blocking_rule_for_training)

# Score all record pairs generated by the blocking rules
pairwise_predictions = linker.predict()

# Group pairs scoring above 0.95 into clusters sharing an estimated unique id
clusters = linker.cluster_pairwise_predictions_at_threshold(pairwise_predictions, 0.95)
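The blocking rules in the settings above restrict which record pairs are ever compared: only pairs that share a blocking key are scored. Conceptually (a plain-Python sketch of the idea; Splink executes blocking rules as SQL joins):

```python
# Sketch of what a blocking rule such as "l.first_name = r.first_name" does:
# group records by the blocking key, then only compare pairs within a group.
from itertools import combinations

records = [
    {"row_id": 1, "first_name": "lucas", "surname": "smith"},
    {"row_id": 2, "first_name": "lucas", "surname": "smyth"},
    {"row_id": 3, "first_name": "lucas", "surname": "smyth"},
    {"row_id": 4, "first_name": "david", "surname": "jones"},
    {"row_id": 5, "first_name": "david", "surname": "jones"},
]

def block_on(records, key):
    buckets = {}
    for rec in records:
        buckets.setdefault(rec[key], []).append(rec)
    for bucket in buckets.values():
        for left, right in combinations(bucket, 2):
            yield left["row_id"], right["row_id"]

# Union of the candidate pairs produced by the two blocking rules:
pairs = set(block_on(records, "first_name")) | set(block_on(records, "surname"))
print(sorted(pairs))  # → [(1, 2), (1, 3), (2, 3), (4, 5)]
```

Of the ten possible pairs among five records, only four are generated here; on large datasets this pruning is what makes linkage tractable.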



We are very grateful to ADR UK (Administrative Data Research UK) for providing the initial funding for this work as part of the Data First project.

We are extremely grateful to professors Katie Harron, James Doidge and Peter Christen for their expert advice and guidance in the development of Splink. We are also very grateful to colleagues at the UK's Office for National Statistics for their expert advice and peer review of this work. Any errors remain our own.


