
Measure the readability of a given text using surface characteristics

Project Description

A collection of functions that measure the readability of a given body of text using surface characteristics. These measures are essentially linear regressions over simple counts of words, syllables, and sentences.

The functionality is modeled after the UNIX style(1) command. Compared to the implementation that is part of GNU diction, this version supports UTF-8 encoded text, but expects sentence-segmented and tokenized text. Syllabification and word type recognition are based on simple heuristics and provide only a rough measure.

NB: all readability formulas were developed for English, so the scales of the outcomes are only meaningful for English texts.
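As an illustration of how such formulas work, here is a sketch of two of them, the Flesch-Kincaid grade level and the Automated Readability Index, using their standard published coefficients (the package's own implementation may differ in details such as syllable counting). Plugging in the counts from the Lord Jim example run below reproduces the reported scores:

```python
# Standard Flesch-Kincaid grade level and Automated Readability Index (ARI),
# computed from raw surface counts. These are the published formulas, not
# necessarily the package's exact implementation.

def kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level (Kincaid et al., 1975)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def ari(characters, words, sentences):
    """Automated Readability Index."""
    return 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43

# Counts reported for "Lord Jim" in the example output below:
print(round(kincaid_grade(131668, 8823, 164207), 2))  # 4.95
print(round(ari(552074, 131668, 8823), 2))            # 5.78
```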


$ pip install readability


$ readability --help
Simple readability measures.

Usage: readability [--lang=<x>] [FILE]
or: readability [--lang=<x>] --csv FILES...

By default, input is read from standard input.
Text should be encoded with UTF-8,
one sentence per line, tokens space-separated.

  -L, --lang=<x>   Set language (available: de, nl, en).
  --csv            Produce a table in comma separated value format on
                   standard output given one or more filenames.
  --tokenizer=<x>  Specify a tokenizer including options that will be given
                   each text on stdin and should return tokenized output on
                   stdout. Not applicable when reading from stdin.

For proper results, the text should be tokenized.
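As a rough illustration of the expected input format (one sentence per line, tokens space-separated) — and emphatically not a substitute for a real tokenizer such as ucto — a naive regex-based preparation step might look like this:

```python
import re

def naive_prepare(text):
    """Very rough sentence segmentation and tokenization: one sentence
    per line, tokens separated by spaces. Splits sentences on terminal
    punctuation and separates punctuation marks from words; a proper
    tokenizer such as ucto handles abbreviations, quotes, etc."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    return '\n'.join(
        ' '.join(re.findall(r"\w+|[^\w\s]", s)) for s in sentences)

print(naive_prepare("He waited. Nothing happened!"))
# He waited .
# Nothing happened !
```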

Example using ucto:

$ ucto -L en -n -s '' "CONRAD, Joseph - Lord Jim.txt" | readability
readability grades:
    Kincaid:                     4.95
    ARI:                         5.78
    Coleman-Liau:                6.87
    FleschReadingEase:          86.18
    GunningFogIndex:             9.4
    LIX:                        30.97
    SMOGIndex:                   9.2
    RIX:                         2.39
sentence info:
    characters_per_word:         4.19
    syll_per_word:               1.25
    words_per_sentence:         14.92
    sentences_per_paragraph:        12.6
    characters:             552074
    syllables:              164207
    words:                  131668
    sentences:                8823
    paragraphs:                700
    long_words:              21122
    complex_words:           11306
word usage:
    tobeverb:                 3909
    auxverb:                  1632
    conjunction:              4413
    pronoun:                 18104
    preposition:             19271
    nominalization:           1216
sentence beginnings:
    pronoun:                  2593
    interrogative:             215
    article:                   632
    subordination:             124
    conjunction:               240
    preposition:               404
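The aggregate measures can be recomputed from the counts in this output. As a sketch, the standard definitions of LIX and RIX (where "long words" are words of more than six characters) reproduce the scores above:

```python
# LIX (Björnsson) and RIX (Anderson), computed from the counts reported
# in the "sentence info" section above. Standard published definitions;
# the package may differ in edge cases such as what counts as a word.

def lix(words, sentences, long_words):
    return words / sentences + 100 * long_words / words

def rix(long_words, sentences):
    return long_words / sentences

print(round(lix(131668, 8823, 21122), 2))  # 30.97
print(round(rix(21122, 8823), 2))          # 2.39
```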

The option --csv collects readability measures for a number of texts in a table. To tokenize documents on-the-fly when using this option, use the --tokenizer option. Example with the “tokenize” tool:

$ readability --csv --tokenizer='tokenizer -L en-u8 -P -S -E "" -N' */*.txt >readabilitymeasures.csv
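The resulting CSV can then be processed with ordinary tools. A minimal sketch using Python's csv module, with a hypothetical two-row sample standing in for readabilitymeasures.csv (the real file's columns depend on the measures the tool emits):

```python
import csv
import io

# Hypothetical sample; column names here are illustrative only.
sample = "filename,Kincaid,ARI\na.txt,4.95,5.78\nb.txt,7.10,8.02\n"

with io.StringIO(sample) as f:           # real use: open("readabilitymeasures.csv")
    rows = list(csv.DictReader(f))

# Sort texts from easiest to hardest by grade level:
for row in sorted(rows, key=lambda r: float(r["Kincaid"])):
    print(row["filename"], row["Kincaid"])
# a.txt 4.95
# b.txt 7.10
```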


The following readability metrics are included:

- Kincaid (Flesch-Kincaid grade level)
- ARI (automated readability index)
- Coleman-Liau
- Flesch reading ease
- Gunning Fog index
- LIX
- SMOG index
- RIX


For better readability measures, make sure the text is properly sentence-segmented and tokenized, for example with a tokenizer such as ucto.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Filename, size: readability-0.2.tar.gz (10.8 kB)
File type: Source
Python version: None
Upload date: Aug 11, 2015
