
A simple interface to datamade/dedupe to make probabilistic record linkage easy.

Project description

SuperDeduper


A work in progress that provides a standard interface for deduplicating large databases, with custom pre-processing and post-processing steps.

Interface

SuperDeduper provides a simple command-line program, superdeduper. Two configuration files specify the deduplication parameters and the database connection settings. To run deduplication on a generated dataset, first create a database.yml file that specifies the following connection parameters:

user:
password:
database:
host:
port:
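
For example, a filled-in database.yml for a local PostgreSQL server (the psql-style output below suggests Postgres) might look like the following; all values are placeholders, so substitute your own connection details:

user: dedupe_user              # placeholder credentials
password: dedupe_password
database: dedupe_test          # placeholder database name
host: localhost
port: 5432                     # PostgreSQL's default port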

You can now create a sample CSV file with:

$ python generate_fake_dataset.py
creating people: 100%|█████████████████████| 9500/9500 [00:21<00:00, 445.38it/s]
adding twins: 100%|█████████████████████████| 500/500 [00:00<00:00, 1854.72it/s]
writing csv:  47%|███████████▋             | 4666/10000 [00:42<00:55, 96.28it/s]

Once complete, store this example dataset in a database with:

$ python test/initialize_db.py
CREATE SCHEMA
DROP TABLE
CREATE TABLE
COPY 197617
ALTER TABLE
ALTER TABLE
UPDATE 197617

Now you can deduplicate this dataset. The following command runs dedupe along with the custom pre- and post-processing steps defined in config.yml:

$ superdeduper --config config.yml --db database.yml
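
This page doesn't show config.yml itself, and its exact schema is defined by superdeduper, so the sketch below is purely illustrative. The entries under fields follow the variable-definition form used by the dedupe library, but every top-level key name here (schema, table, key, merge_exact) is hypothetical:

# Hypothetical sketch of config.yml -- top-level key names are
# illustrative, not superdeduper's actual schema.
schema: dedupe                 # schema created by test/initialize_db.py
table: entries                 # table holding the records to deduplicate
key: entry_id                  # primary-key column
fields:                        # dedupe-style variable definitions
  - {field: first_name, type: String}
  - {field: last_name, type: String}
  - {field: ssn, type: String, has missing: true}
merge_exact:                   # post-processing merge columns (see below)
  - [last_name, ssn]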

Custom pre- and post-processing

In addition to running a database-level deduplication with dedupe, this script adds custom pre- and post-processing steps to improve the run time and the results, making it a hybrid between fuzzy matching and record linkage.

  • Pre-processing: Before running dedupe, this script performs an exact-match deduplication. Some systems create many identical rows; these can make it difficult for dedupe to find an effective blocking strategy, and they generally make the fuzzy matching much harder and more time-intensive.

  • Post-processing: After running dedupe, this script performs an optional exact-match merge across subsets of columns. For example, in some instances an exact match on just the last name and social security number is sufficient evidence that two clusters are indeed the same identity. (A minimal sketch of both steps appears after this list.)
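
Neither step's implementation appears on this page, so the following is a minimal in-memory sketch of both ideas using pandas, not superdeduper's actual code (which works at the database level); the column names first_name, last_name, and ssn are assumptions based on the fake-people dataset above.

import pandas as pd

# Toy records; in superdeduper these live in the database, not in memory.
df = pd.DataFrame({
    "first_name": ["ann", "ann", "ann", "bob"],
    "last_name":  ["lee", "lee", "lee", "roy"],
    "ssn":        ["111-11-1111", "111-11-1111", "111-11-1111", "222-22-2222"],
})

# Pre-processing: collapse exact duplicates before any fuzzy matching,
# keeping a count of how many raw rows each distinct record represents.
records = (df.groupby(list(df.columns), as_index=False)
             .size()
             .rename(columns={"size": "n_rows"}))

# ... here dedupe would cluster `records` and assign a cluster_id;
# for this sketch, pretend every record landed in its own cluster.
records["cluster_id"] = range(len(records))

# Post-processing: merge clusters whose members agree exactly on a
# subset of columns (here last_name + ssn) by remapping each group
# of clusters to the smallest cluster_id in the group.
records["cluster_id"] = (records.groupby(["last_name", "ssn"])["cluster_id"]
                                .transform("min"))
print(records)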

Further steps

This script is based on and extends the example in dedupe-examples. A natural next step would be to offer this common interface across all database types, and potentially to allow reading from flat CSV files as well.

History

0.1.0 (2016-12-14)

  • First release on PyPI.



Download files

Download the file for your platform.

Source Distribution

superdeduper-0.1.5.tar.gz (73.5 kB)


Built Distribution

superdeduper-0.1.5-py2.py3-none-any.whl (12.9 kB)


File details

Details for the file superdeduper-0.1.5.tar.gz.


File hashes

Hashes for superdeduper-0.1.5.tar.gz

Algorithm     Hash digest
SHA256        ac6d56ef8de1e4cd0878eca879a9f4baebffc51f95946bfc0a654e3543de0d00
MD5           f8fab7d869bad8590aa32233e0a0da3b
BLAKE2b-256   f1609b1b18eb41e09d61ad6ea94f29cda7b05f590c9a8120435740d8a0145e1a


File details

Details for the file superdeduper-0.1.5-py2.py3-none-any.whl.


File hashes

Hashes for superdeduper-0.1.5-py2.py3-none-any.whl

Algorithm     Hash digest
SHA256        0cc4a299195c6fa3cf7f10b4945d079589bd840739196ef288041fceb2e13b3c
MD5           384760051012b5b82a360b75c2d2abdc
BLAKE2b-256   73dda95c39cb4bea452ca3499c0bf7248d1c06168ed46cea48e706346d85321c

