
A toy Markov chain implementation.

Project description

Vokram is a toy Markov chain library that is most likely implemented incorrectly and extremely inefficiently.

Installation

Use pip to install:

pip install vokram

Usage

Command Line Usage

Pipe a body of text into vokram and it will generate some (hopefully) plausible sentences synthesized from that text:

$ cat the_art_of_war.txt | vokram
Spies cannot be obtained inductively from experience, nor by any danger.

You can control the maximum number of words in the output and the n-gram size used when building the Markov model. All command line options are given below:

$ vokram --help

Outputs:

usage: vokram [-h] [-w NUM_WORDS] [-n NGRAM_SIZE]

Generates plausible new sentences from a corpus provided on STDIN.

optional arguments:
  -h, --help            show this help message and exit
  -w NUM_WORDS, --num-words NUM_WORDS
                        Maximum number of words in the resulting sentence.
  -n NGRAM_SIZE, --ngram-size NGRAM_SIZE
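
For example, to cap the output at 20 words and build the model on 3-grams (the generated sentence will vary from run to run, since the model is sampled randomly):

$ cat the_art_of_war.txt | vokram -w 20 -n 3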

Library Usage

Vokram can also be used as a plain old Python library:

>>> import vokram
>>> corpus = open('the_art_of_war.txt')
>>> model = vokram.build_word_model(corpus, 2)
>>> vokram.markov_words(model, 25)
"Hence it is not supreme excellence; supreme excellence consists in breaking the enemy's few."
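
The numeric arguments presumably mirror the CLI's -n and -w options (n-gram size and maximum word count); under that assumption, building a trigram model and asking for a longer sentence would look like this:

>>> model = vokram.build_word_model(open('the_art_of_war.txt'), 3)  # assumed: n-gram size of 3
>>> vokram.markov_words(model, 50)  # assumed: cap output at 50 words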

Credits

Vokram was made with inspiration from this simple and approachable Python implementation and explanation.

