Efficient fuzzy matching for Polars DataFrames with support for multiple string similarity algorithms
pl-fuzzy-frame-match
High-performance fuzzy matching for Polars DataFrames that intelligently combines exact fuzzy matching with approximate joins for optimal performance on datasets of any size.
🚀 Key Innovation: Hybrid Matching Approach
This library automatically selects the best matching strategy based on your data:
- Small datasets (< 100M comparisons): Uses exact fuzzy matching with full cross-join
- Large datasets (≥ 100M comparisons): Automatically switches to approximate nearest neighbor joins using polars-simed
- Intelligent optimization: Pre-filters candidates using approximate methods, then applies exact fuzzy scoring
This hybrid approach means you get:
- ✅ Best-in-class performance regardless of data size
- ✅ High accuracy with configurable similarity thresholds
- ✅ Memory efficiency through chunked processing
- ✅ No manual optimization needed - the library handles it automatically
Features
- 🚀 Dual-Mode Performance: Combines exact fuzzy matching with approximate joins
- 🎯 Multiple Algorithms: Support for Levenshtein, Jaro, Jaro-Winkler, Hamming, Damerau-Levenshtein, and Indel
- 🔧 Smart Optimization: Automatic query optimization based on data uniqueness and size
- 💾 Memory Efficient: Chunked processing and intelligent caching for massive datasets
- 🔄 Incremental Matching: Support for multi-column fuzzy matching with result filtering
- ⚡ Automatic Strategy Selection: No configuration needed - automatically picks the fastest approach
Installation
pip install pl-fuzzy-frame-match
Or using Poetry:
poetry add pl-fuzzy-frame-match
Performance Benchmarks
Performance comparison on commodity hardware (M3 Mac, 36GB RAM):
| Dataset Size | Cartesian Product | Standard Cross-Join Fuzzy Match | Automatic Selection | Speedup |
|---|---|---|---|---|
| 500 × 400 | 200K | 0.04s | 0.03s | 1.3x |
| 3K × 2K | 6M | 0.39s | 0.39s | 1x |
| 10K × 8K | 80M | 18.67s | 18.79s | 1x |
| 15K × 10K | 150M | 40.82s | 1.45s | 28x |
| 40K × 30K | 1.2B | 363.50s | 4.75s | 76x |
| 400K × 10K | 4B | Skipped* | 34.52s | ∞ |
*Skipped due to prohibitive runtime
Key Observations:
- Small to medium datasets (< 100M comparisons): Automatic selection uses a standard cross join for optimal speed and accuracy
- Large datasets (≥ 100M comparisons): Automatic selection pre-filters candidates with approximate matching, then applies exact fuzzy scoring to the reduced set
- Memory efficiency: Can handle billions of potential comparisons without running out of memory
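The strategy switch above is driven purely by the size of the cartesian product. A minimal sketch of that decision rule, mirroring the benchmark rows (note that `choose_strategy` is a hypothetical helper for illustration, not part of the library's API):

```python
# Sketch: predict which code path a dataset pair would take, based on the
# 100M-comparison threshold described above. `choose_strategy` is a
# hypothetical helper, not the library's API.

THRESHOLD = 100_000_000  # cartesian-product size at which approximate matching kicks in

def choose_strategy(left_rows: int, right_rows: int) -> str:
    """Return which matching path a dataset pair would use."""
    cartesian = left_rows * right_rows
    return "approximate" if cartesian >= THRESHOLD else "exact cross join"

# Mirrors the benchmark rows above:
print(choose_strategy(10_000, 8_000))    # 80M comparisons
print(choose_strategy(15_000, 10_000))   # 150M comparisons
```

This is why the 10K × 8K row (80M) shows no speedup while the 15K × 10K row (150M) shows 28x: the first stays on the exact path, the second crosses the threshold.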
Quick Start
```python
import polars as pl
from pl_fuzzy_frame_match import fuzzy_match_dfs, FuzzyMapping

# Create sample dataframes
left_df = pl.DataFrame({
    "name": ["John Smith", "Jane Doe", "Bob Johnson"],
    "id": [1, 2, 3]
}).lazy()

right_df = pl.DataFrame({
    "customer": ["Jon Smith", "Jane Does", "Robert Johnson"],
    "customer_id": [101, 102, 103]
}).lazy()

# Define fuzzy matching configuration
fuzzy_maps = [
    FuzzyMapping(
        left_col="name",
        right_col="customer",
        threshold_score=80.0,  # 80% similarity threshold
        fuzzy_type="levenshtein"
    )
]

# Perform fuzzy matching
result = fuzzy_match_dfs(
    left_df=left_df,
    right_df=right_df,
    fuzzy_maps=fuzzy_maps,
    logger=your_logger  # Pass your logger instance
)
print(result)
```
Advanced Usage
Multiple Column Matching
```python
# Match on multiple columns with different algorithms
fuzzy_maps = [
    FuzzyMapping(
        left_col="name",
        right_col="customer_name",
        threshold_score=85.0,
        fuzzy_type="jaro_winkler"
    ),
    FuzzyMapping(
        left_col="address",
        right_col="customer_address",
        threshold_score=75.0,
        fuzzy_type="levenshtein"
    )
]

result = fuzzy_match_dfs(left_df, right_df, fuzzy_maps, logger)
```
Supported Algorithms
- levenshtein: Edit distance between two strings
- jaro: Jaro similarity
- jaro_winkler: Jaro-Winkler similarity (good for name matching)
- hamming: Hamming distance (requires equal length strings)
- damerau_levenshtein: Like Levenshtein but includes transpositions
- indel: Insertion/deletion distance
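To make a threshold like 80% concrete, here is a minimal pure-Python Levenshtein with a normalized similarity score. The library itself computes scores via polars-distance; this sketch only illustrates what the numbers mean:

```python
# Minimal pure-Python Levenshtein, only to illustrate what a similarity
# threshold means. The library itself delegates scoring to polars-distance.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 100], matching the threshold_score scale."""
    if not a and not b:
        return 100.0
    return 100.0 * (1 - levenshtein(a, b) / max(len(a), len(b)))

print(similarity("John Smith", "Jon Smith"))  # one deletion over 10 chars -> 90.0
```

So "Jon Smith" clears an 80% Levenshtein threshold against "John Smith", which is why the Quick Start example matches those rows.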
How It Works: The Best of Both Worlds
The library intelligently combines two approaches based on your data size:
For Regular Datasets (< 100M potential matches)
- Preprocessing: Analyzes column uniqueness to optimize join strategy
- Cross Join: Creates all possible combinations
- Exact Scoring: Calculates precise similarity scores using your chosen algorithm
- Filtering: Returns only matches above the threshold
For Large Datasets (≥ 100M potential matches)
- Approximate Candidate Selection: Uses polars-simed to quickly find likely matches
- Chunked Processing: Processes large datasets in memory-efficient chunks
- Reduced Comparisons: Only scores the most promising pairs instead of all combinations
- Final Scoring: Applies exact fuzzy matching to the reduced candidate set
The Magic: Automatic Strategy Selection
```python
# The library automatically determines the best approach:
if cartesian_product_size >= 100_000_000 and has_polars_simed:
    # Use approximate join for initial candidate selection
    # This reduces a 1B comparison problem to ~1M comparisons
    use_approximate_matching()
else:
    # Use traditional cross join for smaller datasets
    use_exact_matching()
```
This means you can use the same API whether matching 1,000 or 100 million records!
Performance Tips
- Large dataset matching: Install polars-simed to enable approximate matching: pip install polars-simed
- Optimal threshold: Start with higher thresholds (80-90%) for better performance
- Column selection: Use columns with high uniqueness for better candidate reduction
- Algorithm choice:
  - jaro_winkler: Best for names and short strings
  - levenshtein: Best for general text and typos
  - damerau_levenshtein: Best when transpositions are common
- Memory management: The library automatically chunks large datasets, but you can monitor memory usage with logging
Requirements
- Python >= 3.9
- Polars >= 1.8.2, < 2.0.0
- polars-distance ~= 0.4.3
- polars-simed >= 0.3.4 (optional, for large datasets)
License
MIT License - see LICENSE file for details
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Acknowledgments
Built on top of the excellent Polars DataFrame library and polars-distance for string similarity calculations.