
Project description


Sparkly

Welcome to Sparkly! Sparkly is a TF/IDF-based top-k blocking system for entity matching, built on top of Apache Spark and PyLucene.

Paper and Data

A link to our paper can be found here. Data used in the paper can be found here.

Quick Start: Sparkly in 30 Seconds

There are three main steps to running Sparkly:

  1. Reading Data

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

table_a = spark.read.parquet('./examples/data/abt_buy/table_a.parquet')
table_b = spark.read.parquet('./examples/data/abt_buy/table_b.parquet')

  2. Index Building

config = IndexConfig(id_col='_id')
config.add_field('name', ['3gram'])

index = LuceneIndex('/tmp/example_index/', config)
index.upsert_docs(table_a)

  3. Blocking

query_spec = index.get_full_query_spec()

candidates = Searcher(index).search(table_b, query_spec, id_col='_id', limit=50)
candidates.show()

Installing Dependencies

Python

Sparkly has been tested with Python 3.10 on Ubuntu 22.04.

PyLucene

Unfortunately, PyLucene is not available on PyPI; to install it, see the PyLucene docs. Sparkly has been tested with PyLucene 9.4.1.

Other Requirements

Once PyLucene has been installed, Sparkly can be installed with pip by running the following command in the root directory of this repository.

$ python3 -m pip install .

Tutorials

To get started with Sparkly, we recommend starting with the Jupyter notebook included in the repository, examples/example.ipynb.

Additional examples of how to use Sparkly are provided under the examples/ directory in this repository.

How It Works

Sparkly is built to do blocking for entity matching. Many solutions have been developed for this problem, ranging from basic SQL joins to deep-learning-based approaches. Sparkly takes a top-k approach to blocking: each search record is paired with the k indexed records that have the highest BM25 scores. In SQL terms, this might look something like executing the following query for each search record,

SELECT id, BM25(<QUERY>, name) AS score 
FROM table_a 
ORDER BY score DESC
LIMIT <K>;

where QUERY is derived from the search record.
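To make the idea concrete, here is a toy, pure-Python sketch of top-k BM25 scoring. This is illustrative only; Sparkly delegates scoring to Lucene's BM25 implementation, and the tokenization, parameter defaults, and function name here are assumptions for the sketch.

```python
import math
from collections import Counter

def bm25_topk(query_tokens, docs, k=3, k1=1.2, b=0.75):
    """Score every document against the query with BM25 and return the
    top-k (doc_index, score) pairs. docs is a list of token lists."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency of each distinct query term
    df = {t: sum(1 for d in docs if t in d) for t in set(query_tokens)}
    scores = []
    for i, d in enumerate(docs):
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append((i, s))
    # highest-scoring documents first, keep only the top k
    scores.sort(key=lambda p: -p[1])
    return scores[:k]
```

For example, bm25_topk(['apple'], [['apple', 'pie'], ['apple', 'apple', 'tart'], ['banana']], k=2) ranks the second document first, since it contains the query term more often.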

This kind of search is very common in information retrieval and keyword search applications; in fact, it is exactly what Apache Lucene is designed to do. While this form of search produces high-quality results, it can also be very compute-intensive. To speed up search, we therefore leverage PySpark to distribute the computation. By using PySpark we can easily harness a large number of machines to perform search without having to rely on approximation algorithms.
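The divide-and-conquer idea behind this can be sketched in plain Python: partition the search records, run the top-k search on each partition independently, and concatenate the per-partition results. This is an illustrative sketch only, not Sparkly's implementation (Sparkly distributes work across machines with PySpark, not threads), and the token-overlap score below is a stand-in for a real BM25 query against the index.

```python
from concurrent.futures import ThreadPoolExecutor

def score(record, indexed_docs):
    # stand-in for a BM25 query against the index: token-overlap count
    return [(i, len(set(record) & set(doc))) for i, doc in enumerate(indexed_docs)]

def top_k_for_chunk(chunk, indexed_docs, k):
    # each worker handles one partition of the search table independently
    return [sorted(score(r, indexed_docs), key=lambda p: -p[1])[:k] for r in chunk]

def parallel_topk(search_records, indexed_docs, k=2, workers=2):
    # split the search table into partitions, one task per partition,
    # then concatenate the per-partition results -- no coordination needed
    chunks = [search_records[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda c: top_k_for_chunk(c, indexed_docs, k), chunks)
        return [topk for part in parts for topk in part]
```

Because each search record's top-k result depends only on the (read-only) index, the workload is embarrassingly parallel, which is why exact search scales well here without approximation.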

API Docs

API docs can be found here.

Tips for Installing PyLucene

For tips on installing PyLucene take a look at this readme.

Project details


Download files

  • Source distribution: sparkly_em-0.1.0.tar.gz (758.0 kB)
  • Built distribution: sparkly_em-0.1.0-py3-none-any.whl (38.8 kB, Python 3)

File details

Details for the file sparkly_em-0.1.0.tar.gz.

File metadata

  • Filename: sparkly_em-0.1.0.tar.gz
  • Size: 758.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.7

File hashes

  Algorithm    Hash digest
  SHA256       13ce764fc7943a7c4a21f8f64def34a698f8b1b71901be9912d37cbc946dbef6
  MD5          16713b237395cb12ce161ae18daabc50
  BLAKE2b-256  64905426bf7d7032d9f93d2f1362dde69b1aabab17d11f1877f7041a03be1e23

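If you download a distribution manually, you can check it against the published digests before installing. A minimal sketch using Python's standard hashlib (the local filename below is illustrative):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks
    so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published above before installing, e.g.:
# expected = '13ce764fc7943a7c4a21f8f64def34a698f8b1b71901be9912d37cbc946dbef6'
# assert sha256_of('sparkly_em-0.1.0.tar.gz') == expected
```

pip can also enforce this automatically via its hash-checking mode when installing from a requirements file.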

File details

Details for the file sparkly_em-0.1.0-py3-none-any.whl.

File metadata

  • Filename: sparkly_em-0.1.0-py3-none-any.whl
  • Size: 38.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.7

File hashes

  Algorithm    Hash digest
  SHA256       aaadcbeab5426d64ef2ae67fc5f46112cee218d2aa6d24c66068d72fdbfc1033
  MD5          bac70abd9d7b40e5c856578ec36053e1
  BLAKE2b-256  b1a135a4b686ad921adab06dd263aa1edf1fbd655ce3e83efa8bcde0b8ce1df0

