SuperDeduper

A simple interface to datamade/dedupe to make probabilistic record linkage easy.

A work-in-progress to provide a standard interface for deduplication of large databases with custom pre-processing and post-processing steps.

Interface

This provides a simple command-line program, superdeduper. Two configuration files specify the deduplication parameters (config.yml) and the database connection settings (database.yml). To run deduplication on a generated dataset, first create a database.yml file that specifies the following parameters:

user:
password:
database:
host:
port:
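
These keys describe a standard PostgreSQL connection (the test scripts below use Postgres). As a point of reference, here is a minimal sketch of reading such a file yourself, assuming PyYAML and psycopg2 are installed; superdeduper handles the equivalent internally:

import psycopg2
import yaml

# Load the connection settings written above; the keys are assumed to map
# one-to-one onto psycopg2's connection arguments.
with open('database.yml') as f:
    creds = yaml.safe_load(f)

conn = psycopg2.connect(
    dbname=creds['database'],
    user=creds['user'],
    password=creds['password'],
    host=creds['host'],
    port=creds['port'],
)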

You can now create a sample CSV file with:

$ python generate_fake_dataset.py
creating people: 100%|█████████████████████| 9500/9500 [00:21<00:00, 445.38it/s]
adding twins: 100%|█████████████████████████| 500/500 [00:00<00:00, 1854.72it/s]
writing csv:  47%|███████████▋             | 4666/10000 [00:42<00:55, 96.28it/s]

Once complete, store this example dataset in a database with:

$ python test/initialize_db.py
CREATE SCHEMA
DROP TABLE
CREATE TABLE
COPY 197617
ALTER TABLE
ALTER TABLE
UPDATE 197617

Now you can deduplicate this dataset. This runs dedupe as well as the custom pre-processing and post-processing steps defined in config.yml:

$ superdeduper --config config.yml --db database.yml
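
The exact layout of config.yml is defined by this project, but for orientation: superdeduper wraps the dedupe library, and dedupe (1.x) is configured with field definitions along these lines, which superdeduper presumably derives from config.yml. The field names below are hypothetical, chosen to match the generated people dataset:

import dedupe

# Illustrative field definitions for dedupe 1.x; the field names are made
# up for this example and are not superdeduper's actual config keys.
fields = [
    {'field': 'first_name', 'type': 'String'},
    {'field': 'last_name', 'type': 'String'},
    {'field': 'ssn', 'type': 'Exact', 'has missing': True},
    {'field': 'dob', 'type': 'String', 'has missing': True},
]
deduper = dedupe.Dedupe(fields)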

Custom pre- and post-processing

In addition to running a database-level deduplication with dedupe, this script adds custom pre- and post-processing steps to improve the run time and the results, making it a hybrid between fuzzy matching and record linkage.

  • Pre-processing: Before running dedupe, this script does an exact-match deduplication. Some systems create many identical rows; this can make it challenging for dedupe to create an effective blocking strategy, and it generally makes the fuzzy matching much harder and more time-intensive.

  • Post-processing: After running dedupe, this script does an optional exact-match merge across subsets of columns. For example, in some instances an exact match of just the last name and social security number is sufficient evidence that two clusters are indeed the same identity. (Both steps are sketched after this list.)
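
Here is a rough sketch of what these two steps can look like as SQL run from Python. The schema, table, and column names are hypothetical, and superdeduper's actual implementation will differ in detail:

import psycopg2

# Hypothetical connection; see the database.yml section above.
conn = psycopg2.connect(dbname='dedupe')

# Pre-processing: collapse fully identical rows before fuzzy matching.
PRE_PROCESS = """
CREATE TABLE dedupe.unique_entries AS
SELECT DISTINCT first_name, last_name, ssn, dob
FROM dedupe.entries;
"""

# Post-processing: merge clusters that share an exact last name and SSN.
POST_PROCESS = """
UPDATE dedupe.entries AS e
SET cluster_id = m.merged_cluster_id
FROM (
    SELECT last_name, ssn, MIN(cluster_id) AS merged_cluster_id
    FROM dedupe.entries
    WHERE ssn IS NOT NULL
    GROUP BY last_name, ssn
) AS m
WHERE e.last_name = m.last_name AND e.ssn = m.ssn;
"""

with conn.cursor() as cur:
    cur.execute(PRE_PROCESS)
    # ...dedupe's fuzzy clustering runs between these two steps...
    cur.execute(POST_PROCESS)
conn.commit()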

Further steps

This script was based on, and extends, the example in dedupe-examples. It would be nice to use this common interface across all database types, and potentially even allow reading from flat CSV files.

History

0.1.0 (2016-12-14)

  • First release on PyPI.
