
Markov chain generator with rudimentary prompt response

Project description

Conversational Markov

LLMs don't inherently sound like they're talking to you. A base LLM simply continues the text it's given; it only sounds conversational when it's set up to always complete one side of a conversation.

Technically, nothing stops you from making a Markov chain generator do this, too. Train it on prompts and responses delimited by a sentinel token; then, at inference time, make the starting state any given prompt followed by the sentinel, and it will autocomplete something that reads like a fitting response.
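The idea above can be sketched in a few lines of plain Python. This is an illustrative toy, not this package's actual code: the sentinel token, the tokenization, and the tiny training corpus are all assumptions made up for the example.

```python
import random
from collections import defaultdict

SENTINEL = "<SEP>"  # hypothetical sentinel separating prompt from response

def train(pairs, order=2):
    """Build a word-level Markov model from (prompt, response) pairs,
    joining each pair with the sentinel token."""
    model = defaultdict(list)
    for prompt, response in pairs:
        tokens = prompt.split() + [SENTINEL] + response.split() + ["<END>"]
        for i in range(len(tokens) - order):
            state = tuple(tokens[i:i + order])
            model[state].append(tokens[i + order])
    return model

def respond(model, prompt, order=2, max_words=20):
    """Seed the chain with the tail of the prompt plus the sentinel,
    then walk the chain until <END> or the word limit."""
    state = tuple((prompt.split() + [SENTINEL])[-order:])
    out = []
    for _ in range(max_words):
        choices = model.get(state)
        if not choices:
            break
        word = random.choice(choices)
        if word == "<END>":
            break
        out.append(word)
        state = state[1:] + (word,)
    return " ".join(out)

# Toy corpus, invented for illustration
pairs = [("how are you", "i am fine"), ("how are you", "doing great")]
model = train(pairs)
print(respond(model, "how are you"))
```

Because both training pairs share the same prompt, the seeded state ("you", "\<SEP\>") has two possible continuations, so the generator picks one of the two responses at random.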

This project explores that.

Practically, though, there are reasons Markov chain generators are not typically used this way: the state size grows linearly with every extra word you want to be able to prompt the MCG with, and the model size correspondingly grows exponentially. With just a few words and a decent-sized corpus, you'll run out of memory trying to load the whole model.

This project is a naïve example of a Markov chain generator set up to respond to prompts, using an off-the-shelf library. It uses a state size of 3, enough to allow it to process just the first and last word of a prompt plus the sentinel token.
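Concretely, compressing a prompt into a 3-token state might look like the sketch below. This is a guess at the idea described above, not the package's actual code; the sentinel token and function name are invented for illustration.

```python
SENTINEL = "<SEP>"  # hypothetical sentinel token

def seed_state(prompt: str) -> tuple:
    """Compress an arbitrary-length prompt into a 3-token starting
    state: first word, last word, sentinel. Hypothetical helper."""
    words = prompt.split()
    return (words[0], words[-1], SENTINEL)

print(seed_state("what is your favorite color"))
# -> ('what', 'color', '<SEP>')
```

Discarding everything but the first and last word loses most of the prompt, but it keeps the state size, and therefore the model size, fixed no matter how long the prompt is.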

Download files

Download the file for your platform.

Source Distribution

conversational_markov-0.1.3.tar.gz (3.7 kB)

Uploaded Source

Built Distribution


conversational_markov-0.1.3-py3-none-any.whl (4.0 kB)

Uploaded Python 3

File details

Details for the file conversational_markov-0.1.3.tar.gz.

File metadata

  • Download URL: conversational_markov-0.1.3.tar.gz
  • Upload date:
  • Size: 3.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for conversational_markov-0.1.3.tar.gz

  • SHA256: aaf68a39d87875bb68229afea1a48dcd6ef328b062048509cf9bb6698e86c57b
  • MD5: 6f6a5c96072eab342cb809af3a903b3c
  • BLAKE2b-256: b5040a622c4aca40e61d8924aafb9e4d4a6e658634635190df357310c789849d


Provenance

The following attestation bundles were made for conversational_markov-0.1.3.tar.gz:

Publisher: python-publish.yml on garlic-os/conversational-markov

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file conversational_markov-0.1.3-py3-none-any.whl.

File metadata

File hashes

Hashes for conversational_markov-0.1.3-py3-none-any.whl

  • SHA256: a8e5f0825901966b6e51678716af8b5016efb65f2c798242d459ad22876afbab
  • MD5: c0aee25f14d83201c71a3a303652662a
  • BLAKE2b-256: 13ddc1d0e719fbd06390c000885595ad7ebd6688de9fe3ba11c22ea4d8238040


Provenance

The following attestation bundles were made for conversational_markov-0.1.3-py3-none-any.whl:

Publisher: python-publish.yml on garlic-os/conversational-markov

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
