
Markov chain generator with rudimentary prompt response

Project description

Conversational Markov

An LLM in its most basic state doesn't sound like it's talking to you; it simply continues the text it's given. LLMs sound like they're talking to you when they're set up to always complete one side of a conversation.

Technically, there's nothing stopping you from making a Markov chain generator do this, too. Train it on prompts and responses separated by a sentinel token; then, during inference, set the starting state to a given prompt followed by the sentinel, and the chain will complete it with something that reads like a fitting response.
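The idea can be sketched with a tiny from-scratch chain. Everything here is illustrative: the sentinel token, the toy corpus, and the state size of 2 are assumptions for the sketch, not this project's actual library or settings.

```python
import random

SENTINEL = "<SEP>"  # illustrative delimiter; any token that never occurs in the corpus works
END = "<END>"       # marks the end of a response

# Toy prompt/response corpus (made up for this sketch).
pairs = [
    ("how are you", "i am fine thanks"),
    ("what is your name", "my name is markov"),
]

# Train: build a state-size-2 table mapping (token, token) -> possible next tokens.
transitions = {}
for prompt, response in pairs:
    tokens = prompt.split() + [SENTINEL] + response.split() + [END]
    for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
        transitions.setdefault((a, b), []).append(c)

def respond(prompt, seed=0):
    """Seed the chain at (last prompt word, sentinel) and walk until END."""
    rng = random.Random(seed)
    state = (prompt.split()[-1], SENTINEL)
    out = []
    while state in transitions:
        nxt = rng.choice(transitions[state])
        if nxt == END:
            break
        out.append(nxt)
        state = (state[1], nxt)
    return " ".join(out)

print(respond("how are you"))        # -> i am fine thanks
print(respond("what is your name"))  # -> my name is markov
```

Because the starting state contains only the last prompt word plus the sentinel, the chain conditions on just that much of the prompt; a prompt whose last word never appeared before the sentinel in training yields nothing at all.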

This project explores that.
Now, practically, there are reasons Markov chain generators are not typically used this way: the state size grows linearly with each extra word of prompt you want the generator to condition on, and the number of possible states, and with it the model size, grows exponentially with that state size. With just a few words of context and a decent-sized corpus, you'll run out of memory trying to load the whole thing.
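To make that growth concrete, a back-of-the-envelope sketch; the vocabulary size here is an illustrative assumption, not a figure from this project:

```python
# Illustrative only: an order-k Markov model over a vocabulary of V tokens
# can have up to V**k distinct states.
V = 10_000  # assumed vocabulary size
for k in (1, 2, 3):
    print(f"state size {k}: up to {V ** k:,} possible states")
```

In practice the transition table is sparse, since only states actually seen in the corpus are stored, but it still grows quickly enough that conditioning on a few extra prompt words becomes impractical.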

This project is a naïve example of a Markov chain generator set up to respond to prompts, built on an off-the-shelf library. It uses a state size of 3: enough to condition on just the first and last word of a prompt plus the sentinel token.

Download files

Download the file for your platform.

Source Distribution

conversational_markov-0.1.1.tar.gz (3.8 kB)

Uploaded Source

Built Distribution


conversational_markov-0.1.1-py3-none-any.whl (4.3 kB)

Uploaded Python 3

File details

Details for the file conversational_markov-0.1.1.tar.gz.

File metadata

  • Download URL: conversational_markov-0.1.1.tar.gz
  • Upload date:
  • Size: 3.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for conversational_markov-0.1.1.tar.gz
Algorithm Hash digest
SHA256 a9753597f0fb99aa45b20b2cbaf653e20f19bfa6e03f27a0de9c947b6a186fbe
MD5 27b02d6a38b720c4098711f0b207b265
BLAKE2b-256 48688150a97060a4d5386bfe92e8719454e9f43b27dffd4c8e26eb9e68b242a4


Provenance

The following attestation bundles were made for conversational_markov-0.1.1.tar.gz:

Publisher: python-publish.yml on garlic-os/conversational-markov

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file conversational_markov-0.1.1-py3-none-any.whl.


File hashes

Hashes for conversational_markov-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 a34815b1f40401f5c49936973ae05d09f444c52a381cc26d370ab9bdafad2a0a
MD5 c43f5d2e5ade218eec76700f6f20b799
BLAKE2b-256 28673b4dd6c73a427a6741d8d12566f8a840d6412e9ecfbb6bcf2ea7f251b195


Provenance

The following attestation bundles were made for conversational_markov-0.1.1-py3-none-any.whl:

Publisher: python-publish.yml on garlic-os/conversational-markov

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
