
Compute the distance between two texts.


TextDistance – a Python library for computing the distance between two or more sequences using many algorithms.


  • 30+ algorithms
  • Pure Python implementation
  • Simple usage
  • Comparison of more than two sequences
  • Some algorithms have more than one implementation in one class
  • Optional numpy usage for maximum speed


Edit based

| Algorithm | Class | Functions |
|-----------|-------|-----------|
| Hamming | Hamming | hamming |
| MLIPNS | Mlipns | mlipns |
| Levenshtein | Levenshtein | levenshtein |
| Damerau-Levenshtein | DamerauLevenshtein | damerau_levenshtein |
| Jaro-Winkler | JaroWinkler | jaro_winkler, jaro |
| Strcmp95 | StrCmp95 | strcmp95 |
| Needleman-Wunsch | NeedlemanWunsch | needleman_wunsch |
| Gotoh | Gotoh | gotoh |
| Smith-Waterman | SmithWaterman | smith_waterman |
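Edit-based distances count the minimum number of edit operations needed to turn one sequence into the other. As a rough illustration (a sketch, not textdistance's actual implementation), here is a minimal Levenshtein distance in pure Python:

```python
def levenshtein(s1, s2):
    """Minimal Levenshtein distance: insertions, deletions, substitutions."""
    if len(s1) < len(s2):
        s1, s2 = s2, s1
    previous = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        current = [i]
        for j, c2 in enumerate(s2, 1):
            insert = current[j - 1] + 1
            delete = previous[j] + 1
            substitute = previous[j - 1] + (c1 != c2)
            current.append(min(insert, delete, substitute))
        previous = current
    return previous[-1]

print(levenshtein('test', 'text'))  # 1
```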

Token based

| Algorithm | Class | Functions |
|-----------|-------|-----------|
| Jaccard index | Jaccard | jaccard |
| Sørensen–Dice coefficient | Sorensen | sorensen, sorensen_dice, dice |
| Tversky index | Tversky | tversky |
| Overlap coefficient | Overlap | overlap |
| Tanimoto distance | Tanimoto | tanimoto |
| Cosine similarity | Cosine | cosine |
| Monge-Elkan | MongeElkan | monge_elkan |
| Bag distance | Bag | bag |
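Token-based measures compare sequences as collections of tokens (characters, q-grams, or words) instead of aligning positions. As a hedged sketch, the Jaccard index over character sets looks like this (textdistance's own version also supports q-grams and multisets):

```python
def jaccard(s1, s2):
    """Jaccard similarity: |intersection| / |union| of the token sets."""
    a, b = set(s1), set(s2)
    if not a and not b:
        return 1.0  # two empty sequences are considered equal
    return len(a & b) / len(a | b)

# 'test' -> {t, e, s}, 'text' -> {t, e, x}: 2 shared of 4 total tokens
print(jaccard('test', 'text'))  # 0.5
```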

Sequence based

| Algorithm | Class | Functions |
|-----------|-------|-----------|
| longest common subsequence similarity | LCSSeq | lcsseq |
| longest common substring similarity | LCSStr | lcsstr |
| Ratcliff-Obershelp similarity | RatcliffObershelp | ratcliff_obershelp |
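Sequence-based measures look for shared subsequences or substrings. For instance, the length of the longest common subsequence can be computed with the standard dynamic program (an illustrative sketch, not the library's code):

```python
def lcs_length(s1, s2):
    """Length of the longest common subsequence (order preserved, gaps allowed)."""
    prev = [0] * (len(s2) + 1)
    for c1 in s1:
        curr = [0]
        for j, c2 in enumerate(s2, 1):
            curr.append(prev[j - 1] + 1 if c1 == c2 else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

# 'test' and 'text' share the subsequence 't', 'e', 't'
print(lcs_length('test', 'text'))  # 3
```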

Compression based

Normalized compression distance with different compression algorithms.

Classic compression algorithms:

| Algorithm | Class | Function |
|-----------|-------|----------|
| Arithmetic coding | ArithNCD | arith_ncd |
| RLE | RLENCD | rle_ncd |

Normal compression algorithms:

| Algorithm | Class | Function |
|-----------|-------|----------|
| Square Root | SqrtNCD | sqrt_ncd |
| Entropy | EntropyNCD | entropy_ncd |

Work in progress algorithms that compare two strings as array of bits:

| Algorithm | Class | Function |
|-----------|-------|----------|
| BZ2 | BZ2NCD | bz2_ncd |
| ZLib | ZLIBNCD | zlib_ncd |

See blog post for more details about NCD.
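The idea behind NCD: if two strings share structure, compressing their concatenation costs little more than compressing the longer one alone. A hedged sketch using the standard-library zlib (it mirrors what zlib_ncd computes, but is not the library's exact code):

```python
import zlib

def ncd_zlib(x: bytes, y: bytes) -> float:
    """Normalized compression distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar strings compress well together, so their NCD is lower.
a = b'the quick brown fox jumps over the lazy dog'
b = b'completely different bytes with no shared phrases'
print(ncd_zlib(a, a) < ncd_zlib(a, b))  # True
```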


Phonetic

| Algorithm | Class | Functions |
|-----------|-------|-----------|
| Editex | Editex | editex |


Simple

| Algorithm | Class | Functions |
|-----------|-------|-----------|
| Prefix similarity | Prefix | prefix |
| Postfix similarity | Postfix | postfix |
| Length distance | Length | length |
| Identity similarity | Identity | identity |
| Matrix similarity | Matrix | matrix |



Installation

Only the pure Python implementation:

pip install textdistance

With extra libraries for maximum speed:

pip install "textdistance[extras]"

With all libraries (required for benchmarking and testing):

pip install "textdistance[benchmark]"

With algorithm specific extras:

pip install "textdistance[Hamming]"

Algorithms with available extras: DamerauLevenshtein, Hamming, Jaro, JaroWinkler, Levenshtein.


Dev version

Via pip:

pip install -e git+

Or clone repo and install with some extras:

git clone
pip install -e ".[benchmark]"


Usage

All algorithms have two interfaces:

  1. A class with algorithm-specific parameters for customizing.
  2. A class instance with default parameters for quick and simple usage.

All algorithms have some common methods:

  1. .distance(*sequences) – calculate the distance between sequences.
  2. .similarity(*sequences) – calculate the similarity of sequences.
  3. .maximum(*sequences) – the maximum possible value for distance and similarity. For any sequences: distance + similarity == maximum.
  4. .normalized_distance(*sequences) – the normalized distance between sequences. Returns a float between 0 and 1, where 0 means the sequences are equal and 1 means they are totally different.
  5. .normalized_similarity(*sequences) – the normalized similarity of sequences. Returns a float between 0 and 1, where 0 means totally different and 1 means equal.
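These methods are tied together: distance measures difference, similarity measures sameness, and they always sum to .maximum(...). A toy Hamming-style class (hypothetical, not textdistance's implementation) demonstrating that contract:

```python
class ToyHamming:
    """Illustrates the textdistance method contract on a Hamming-style measure."""

    def maximum(self, s1, s2):
        return max(len(s1), len(s2))

    def distance(self, s1, s2):
        # Mismatched positions plus the length difference.
        mismatches = sum(c1 != c2 for c1, c2 in zip(s1, s2))
        return mismatches + abs(len(s1) - len(s2))

    def similarity(self, s1, s2):
        return self.maximum(s1, s2) - self.distance(s1, s2)

    def normalized_distance(self, s1, s2):
        m = self.maximum(s1, s2)
        return self.distance(s1, s2) / m if m else 0.0

    def normalized_similarity(self, s1, s2):
        return 1 - self.normalized_distance(s1, s2)

h = ToyHamming()
print(h.distance('test', 'text'), h.similarity('test', 'text'), h.maximum('test', 'text'))
# 1 3 4
```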

Most common init arguments:

  1. qval – q-value for splitting sequences into q-grams. Possible values:
    • 1 (default) – compare sequences by characters.
    • 2 or more – transform sequences into q-grams.
    • None – split sequences by words.
  2. as_set – for token-based algorithms:
    • True – t and ttt are equal.
    • False (default) – t and ttt are different.
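The effect of qval can be pictured as a preprocessing step that turns each sequence into tokens before comparison. A hypothetical helper (not part of the textdistance API) showing the three modes:

```python
def split_qgrams(seq, qval=1):
    """Split a sequence the way qval does: chars, q-grams, or words."""
    if qval is None:
        return seq.split()          # qval=None: word tokens
    if qval == 1:
        return list(seq)            # qval=1: individual characters
    return [seq[i:i + qval] for i in range(len(seq) - qval + 1)]

print(split_qgrams('test', 2))      # ['te', 'es', 'st']
print(split_qgrams('to be', None))  # ['to', 'be']
```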


For example, Hamming distance:

import textdistance

textdistance.hamming('test', 'text')
# 1

textdistance.hamming.distance('test', 'text')
# 1

textdistance.hamming.similarity('test', 'text')
# 3

textdistance.hamming.normalized_distance('test', 'text')
# 0.25

textdistance.hamming.normalized_similarity('test', 'text')
# 0.75

textdistance.Hamming(qval=2).distance('test', 'text')
# 2

All other algorithms have the same interface.

Extra libraries

For the main algorithms, textdistance tries to call known external libraries (fastest first) if they are available (installed on your system) and applicable (the implementation can compare the given type of sequences). Install textdistance with extras to enable this feature.

You can disable this by passing external=False argument on init:

import textdistance
hamming = textdistance.Hamming(external=False)
hamming('text', 'testit')
# 3
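The external-library dispatch can be pictured as a try-import with a pure-Python fallback. A simplified sketch of the pattern (textdistance's real mechanism is more elaborate and driven by benchmark results):

```python
def _hamming_pure(s1, s2):
    # Pure-Python fallback: mismatched positions plus the length difference.
    return sum(c1 != c2 for c1, c2 in zip(s1, s2)) + abs(len(s1) - len(s2))

try:
    import Levenshtein as _ext  # fast C implementation, if installed

    def hamming(s1, s2):
        if len(s1) == len(s2):  # the C function requires equal lengths
            return _ext.hamming(s1, s2)
        return _hamming_pure(s1, s2)
except ImportError:
    hamming = _hamming_pure

print(hamming('text', 'testit'))  # 3
```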

Supported libraries:

  1. abydos
  2. Distance
  3. jellyfish
  4. py_stringmatching
  5. pylev
  6. python-Levenshtein
  7. pyxDamerauLevenshtein


Algorithms with external library support:

  1. DamerauLevenshtein
  2. Hamming
  3. Jaro
  4. JaroWinkler
  5. Levenshtein


Benchmarks

Without extras installed:

| algorithm | library | function | time |
|-----------|---------|----------|------|
| DamerauLevenshtein | jellyfish | damerau_levenshtein_distance | 0.00965294 |
| DamerauLevenshtein | pyxdameraulevenshtein | damerau_levenshtein_distance | 0.151378 |
| DamerauLevenshtein | pylev | damerau_levenshtein | 0.766461 |
| DamerauLevenshtein | textdistance | DamerauLevenshtein | 4.13463 |
| DamerauLevenshtein | abydos | damerau_levenshtein | 4.3831 |
| Hamming | Levenshtein | hamming | 0.0014428 |
| Hamming | jellyfish | hamming_distance | 0.00240262 |
| Hamming | distance | hamming | 0.036253 |
| Hamming | abydos | hamming | 0.0383933 |
| Hamming | textdistance | Hamming | 0.176781 |
| Jaro | Levenshtein | jaro | 0.00313561 |
| Jaro | jellyfish | jaro_distance | 0.0051885 |
| Jaro | py_stringmatching | jaro | 0.180628 |
| Jaro | textdistance | Jaro | 0.278917 |
| JaroWinkler | Levenshtein | jaro_winkler | 0.00319735 |
| JaroWinkler | jellyfish | jaro_winkler | 0.00540443 |
| JaroWinkler | textdistance | JaroWinkler | 0.289626 |
| Levenshtein | Levenshtein | distance | 0.00414404 |
| Levenshtein | jellyfish | levenshtein_distance | 0.00601647 |
| Levenshtein | py_stringmatching | levenshtein | 0.252901 |
| Levenshtein | pylev | levenshtein | 0.569182 |
| Levenshtein | distance | levenshtein | 1.15726 |
| Levenshtein | abydos | levenshtein | 3.68451 |
| Levenshtein | textdistance | Levenshtein | 8.63674 |

Total: 24 libs.

Yeah, so slow. Use TextDistance in production only with extras installed.

TextDistance uses these benchmark results to optimize the algorithms and tries to call the fastest external library first (when possible).

You can run benchmark manually on your system:

pip install "textdistance[benchmark]"
python3 -m textdistance.benchmark

TextDistance shows a benchmark results table for your system and saves the library priorities into a libraries.json file in TextDistance's folder. This file is then used by textdistance to call the fastest algorithm implementation. A default libraries.json is already included in the package.


Test

You can run tests via tox:

sudo pip3 install tox
tox

