A toy Markov chain implementation.
Vokram is a toy Markov chain library that is most likely implemented incorrectly and extremely inefficiently.
Use pip to install:

```
pip install vokram
```
## Command Line Usage
Pipe a body of text into vokram and it will generate some (hopefully) plausible sentences synthesized from that body of text:
```
$ cat the_art_of_war.txt | vokram
Spies cannot be obtained inductively from experience, nor by any danger.
```
You can control the maximum number of words in the output and the n-gram size used when building the Markov model. All command line options are given below:
```
$ vokram --help
usage: vokram [-h] [-w NUM_WORDS] [-n NGRAM_SIZE]

Generates plausible new sentences from a corpus provided on STDIN.

optional arguments:
  -h, --help            show this help message and exit
  -w NUM_WORDS, --num-words NUM_WORDS
                        Maximum number of words in the resulting sentence.
  -n NGRAM_SIZE, --ngram-size NGRAM_SIZE
```
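As rough intuition for the `--ngram-size` option: an n-gram Markov model maps each sequence of n consecutive words to the words observed to follow it in the corpus. The sketch below is plain Python, independent of vokram's actual implementation; the corpus and function names are made up for illustration:

```python
from collections import defaultdict

def build_model(words, n):
    """Map each n-word tuple to the list of words seen following it."""
    model = defaultdict(list)
    for i in range(len(words) - n):
        key = tuple(words[i:i + n])
        model[key].append(words[i + n])
    return model

words = "the quick brown fox jumps over the quick red fox".split()
model = build_model(words, 2)
print(model[("the", "quick")])  # → ['brown', 'red']
```

A larger n-gram size makes each state more specific, so the output follows the source text more closely; a smaller size produces more surprising (and more garbled) sentences.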
## Library Usage

Vokram can also be used as a plain old Python library:
```
>>> import vokram
>>> corpus = open('the_art_of_war.txt')
>>> model = vokram.build_word_model(corpus, 2)
>>> vokram.markov_words(model, 25)
"Hence it is not supreme excellence; supreme excellence consists in breaking the enemy's few."
```
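For a sense of how generation from such a model might work, here is a hedged sketch (not vokram's actual code): start from a random n-gram, repeatedly sample one of the words observed to follow the current n-gram, and slide the window forward. All names and the toy corpus below are invented for illustration:

```python
import random
from collections import defaultdict

def build_model(words, n):
    """Map each n-word tuple to the list of words seen following it."""
    model = defaultdict(list)
    for i in range(len(words) - n):
        model[tuple(words[i:i + n])].append(words[i + n])
    return model

def generate(model, n, max_words, seed=None):
    """Random-walk the model, emitting at most max_words words."""
    rng = random.Random(seed)
    state = rng.choice(list(model))   # start at a random n-gram
    out = list(state)
    while len(out) < max_words:
        candidates = model.get(state)
        if not candidates:            # dead end: no observed successor
            break
        out.append(rng.choice(candidates))
        state = tuple(out[-n:])       # slide the window forward
    return " ".join(out)

corpus = "the quick brown fox jumps over the lazy dog".split()
model = build_model(corpus, 2)
print(generate(model, 2, 8, seed=42))
```

A real implementation also has to deal with sentence boundaries and punctuation, which is where most of the "(hopefully) plausible" quality comes from.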
Vokram was inspired by this simple and approachable Python implementation and explanation.