
A simple interface to datamade/dedupe to make probabilistic record linkage easy.

Project description

pgdedupe


A work in progress that provides a standard interface for deduplicating large databases, with custom pre-processing and post-processing steps.

Interface

The package provides a simple command-line program, pgdedupe. Two configuration files specify the deduplication parameters and the database connection settings. To run deduplication on a generated dataset, first create a database.yml file that specifies the following connection parameters:

user:
password:
database:
host:
port:
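
For example, a database.yml for a local PostgreSQL instance might look like the following; every value here is a placeholder, so substitute your own connection details:

user: postgres
password: changeme
database: dedupe_test
host: localhost
port: 5432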

You can now create a sample CSV file with:

$ python generate_fake_dataset.py --csv people.csv
creating people: 100%|█████████████████████| 9500/9500 [00:21<00:00, 445.38it/s]
adding twins: 100%|█████████████████████████| 500/500 [00:00<00:00, 1854.72it/s]
writing csv:  47%|███████████▋             | 4666/10000 [00:42<00:55, 96.28it/s]

Once complete, store this example dataset in a database with:

$ python test/initialize_db.py --db database.yml --csv people.csv
CREATE SCHEMA
DROP TABLE
CREATE TABLE
COPY 197617
ALTER TABLE
ALTER TABLE
UPDATE 197617

Now you can deduplicate this dataset. This will run dedupe as well as the custom pre-processing and post-processing steps as defined in config.yml:

$ pgdedupe --config config.yml --db database.yml
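
The contents of config.yml are not reproduced here. For context, pgdedupe hands its field definitions to the underlying dedupe library; in the dedupe 1.x API of that era, such definitions look roughly like the sketch below. The field names are purely illustrative assumptions, and the actual config.yml format is described in the project documentation.

import dedupe

# Illustrative only: these field names are hypothetical; pgdedupe builds an
# equivalent definition from the fields declared in config.yml.
fields = [
    {'field': 'first_name', 'type': 'String'},
    {'field': 'last_name', 'type': 'String'},
    {'field': 'ssn', 'type': 'Exact', 'has missing': True},
]

deduper = dedupe.Dedupe(fields)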

Custom pre- and post-processing

In addition to running a database-level deduplication with dedupe, this script adds custom pre- and post-processing steps to improve both run time and results, making it a hybrid between fuzzy matching and record linkage.

  • Pre-processing: Before running dedupe, this script does an exact-match deduplication. Some systems create many identical rows; this can make it challenging for dedupe to create an effective blocking strategy, and it generally makes the fuzzy matching much harder and more time-intensive.

  • Post-processing: After running dedupe, this script does an optional exact-match merge across subsets of columns. For example, in some instances an exact match on just the last name and social security number is sufficient evidence that two clusters are indeed the same identity. Both steps are sketched in the example below.
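
The following is a minimal, illustrative sketch of both ideas in plain Python, operating on in-memory rows with hypothetical column names rather than on the database itself:

# Illustrative sketch only: column names and cluster ids are hypothetical;
# pgdedupe does the equivalent work against the Postgres table.
rows = [
    {'id': 1, 'first': 'Ann',  'last': 'Lee', 'ssn': '111-11-1111'},
    {'id': 2, 'first': 'Ann',  'last': 'Lee', 'ssn': '111-11-1111'},  # exact duplicate of row 1
    {'id': 3, 'first': 'Anne', 'last': 'Lee', 'ssn': '111-11-1111'},  # fuzzy duplicate
]

# Pre-processing: collapse exact duplicates so dedupe only sees distinct rows.
def exact_key(row):
    return (row['first'], row['last'], row['ssn'])

distinct_rows = list({exact_key(r): r for r in rows}.values())

# Suppose dedupe then assigns cluster ids to the original rows:
clusters = {1: 'A', 2: 'A', 3: 'B'}

# Post-processing: union clusters whose members agree exactly on a subset of
# columns (here last name + SSN), treating them as a single identity.
merge_groups = {}
for r in rows:
    merge_groups.setdefault((r['last'], r['ssn']), set()).add(clusters[r['id']])

to_merge = [ids for ids in merge_groups.values() if len(ids) > 1]
print(to_merge)  # e.g. [{'A', 'B'}]: clusters A and B refer to the same person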

Further steps

This script was adapted and extended from the example in dedupe-examples. It would be nice to use this common interface across all database types, and potentially even to allow reading from flat CSV files.

History

0.2.1 (2017-05-03)

  • Make command line arguments required, resulting in better error messages.

  • Refactored testing scripts to be more user-friendly.

0.2.0 (2017-04-19)

  • First release on PyPI (as pgdedupe).

0.1.0 (2016-12-14)

  • First release on PyPI (as superdeduper).
