Fast probabilistic data linkage at scale

Project description

Fast, accurate and scalable probabilistic data linkage using your choice of SQL backend.

splink is a Python package for probabilistic record linkage (entity resolution).

Its key features are:

  • It is extremely fast, capable of linking a million records on a laptop in around a minute.

  • It is highly accurate, with support for term frequency adjustments and sophisticated fuzzy matching logic.

  • It supports running linkage against multiple SQL backends, meaning it can run at any scale. For smaller linkages of up to a few million records, no additional infrastructure is needed. For larger linkages, Splink currently supports Apache Spark or AWS Athena as backends (see the sketch after this list).

  • It produces a wide variety of interactive outputs, helping users to understand their model and diagnose linkage problems.
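
For instance, switching backends is a one-line change. The following is a sketch assuming Splink 3's backend-specific linker classes, with df and settings defined as in the Quickstart below:

from splink.duckdb.duckdb_linker import DuckDBLinker
from splink.spark.spark_linker import SparkLinker

# Local linkage with no additional infrastructure (df and settings as in
# the Quickstart below)
linker = DuckDBLinker(df, settings)

# The same settings dictionary runs at scale by swapping the linker class
# linker = SparkLinker(df, settings)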

The core linkage algorithm is an implementation of Fellegi-Sunter's canonical model of record linkage, with various customisations to improve accuracy. Splink includes an implementation of the Expectation Maximisation algorithm, meaning that record linkage can be performed using an unsupervised approach (i.e. labelled training data is not needed).
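
To give a flavour of the underlying model: each column comparison contributes a Bayes factor m/u, where m is the probability of agreement among true matches and u is the probability of agreement among non-matches, and multiplying the prior odds of a match by these factors gives the posterior odds. The following is a minimal illustrative sketch of that arithmetic with made-up m and u values, not Splink's internal code:

import math

# Illustrative Fellegi-Sunter arithmetic with assumed parameter values.
# m = P(column agrees | records are a true match)
# u = P(column agrees | records are not a match)
comparisons = {
    "first_name": {"m": 0.90, "u": 0.010},
    "dob": {"m": 0.95, "u": 0.001},
}

prior_odds = 1 / 10_000  # assumed prior odds that a random pair is a match

posterior_odds = prior_odds
for col, p in comparisons.items():
    bayes_factor = p["m"] / p["u"]  # >1 means agreement favours a match
    print(f"{col}: match weight = {math.log2(bayes_factor):.2f}")
    posterior_odds *= bayes_factor

match_probability = posterior_odds / (1 + posterior_odds)
print(f"match probability = {match_probability:.3f}")  # about 0.895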

Documentation

The homepage for the Splink documentation can be found here. Interactive demos can be found here, or by clicking the Binder link.

The specification of the Fellegi-Sunter statistical model behind Splink is similar to that used in the R fastLink package. Accompanying the fastLink package is an academic paper that describes this model. A series of interactive articles also explores the theory behind Splink.

Quickstart

The following code demonstrates how to estimate the parameters of a deduplication model, and then use it to identify duplicate records.

For more detailed tutorials, please see here.

from splink.duckdb.duckdb_linker import DuckDBLinker
from splink.duckdb.duckdb_comparison_library import (
    exact_match,
    levenshtein_at_thresholds,
)

import pandas as pd

df = pd.read_csv("./tests/datasets/fake_1000_from_splink_demos.csv")

settings = {
    "link_type": "dedupe_only",
    # Only record pairs satisfying at least one of these rules are
    # compared, keeping the number of comparisons tractable
    "blocking_rules_to_generate_predictions": [
        "l.first_name = r.first_name",
        "l.surname = r.surname",
    ],
    # How each pair of records is compared, column by column
    "comparisons": [
        levenshtein_at_thresholds("first_name", 2),
        exact_match("surname"),
        exact_match("dob"),
        exact_match("city", term_frequency_adjustments=True),
        exact_match("email"),
    ],
}

linker = DuckDBLinker(df, settings)

# Estimate the u probabilities (how often columns agree among non-matches)
# from a large random sample of pairs
linker.estimate_u_using_random_sampling(target_rows=1e6)

# Estimate the m probabilities using Expectation Maximisation, training on
# pairs that satisfy each blocking rule in turn
blocking_rule_for_training = "l.first_name = r.first_name and l.surname = r.surname"
linker.estimate_parameters_using_expectation_maximisation(blocking_rule_for_training)

blocking_rule_for_training = "l.dob = r.dob"
linker.estimate_parameters_using_expectation_maximisation(blocking_rule_for_training)

# Score all candidate pairs, returning match weights and probabilities
scored_comparisons = linker.predict()
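
What you do with the scored comparisons next depends on the use case. As a sketch assuming the Splink 3 API, where predict() returns a wrapped results table, you might materialise the scores for inspection and then cluster matching records into entities:

# Materialise the scored pairs as a pandas DataFrame for inspection
df_predictions = scored_comparisons.as_pandas_dataframe(limit=5)

# Group records into entities by treating pairs scoring above the
# threshold as the same entity (connected components)
clusters = linker.cluster_pairwise_predictions_at_threshold(
    scored_comparisons, threshold_match_probability=0.9
)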

Acknowledgements

We are very grateful to ADR UK (Administrative Data Research UK) for providing the initial funding for this work as part of the Data First project.

We are also very grateful to colleagues at the UK's Office for National Statistics for their expert advice and peer review of this work.
