
Reason this release was yanked:

Pre-release development version


Prapti: converse with LLMs directly in markdown files

Hello! Welcome to Prapti. We're just getting started and we'd love for you to join the conversation. Please use GitHub Discussions for general queries and suggestions, and Issues for bug reports. Our Pull Requests are open.

Prapti is a tool for prompting Large Language Models (LLMs) with an editable history. You work within a markdown file where you can edit your prompts and the LLM's responses. Press a hot-key to get the next response from the LLM. Once received, the response is automatically appended to the file.

Features

  • Work in your favourite editor
  • Markdown files are the native conversation format (headings with special text delimit message boundaries)
  • Easily edit the whole conversation history/context window, including previous LLM outputs
  • Inline configuration syntax: easily change LLM parameters and switch language models ("responders") with each round of the conversation
  • Extensible with plugins (responders, commands and processing hooks)
  • Support for OpenAI and GPT4All (experimental) back-ends; add your favourite LLM back-end by implementing a Responder plugin

Installation and Setup

Prapti requires Python 3.10 or newer.
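If you're unsure which Python your system runs, a small convenience snippet (not part of prapti) can check the interpreter version before you install:

```python
import sys

def check_python_version(minimum=(3, 10)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum

# Prapti requires Python 3.10 or newer.
ok = check_python_version()
print("Python version OK" if ok else "prapti requires Python 3.10+")
```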

Installation involves the following required steps:

  1. Install the prapti command line tool
  2. Set up your OpenAI API key (or configure a local LLM: see docs)
  3. Check that the prapti tool runs manually in your terminal
  4. Set up a keybinding to run prapti in your editor

1. Install the prapti command line tool

In your terminal, run:

pip install git+https://github.com/prapti-ai/prapti

(or pip3 or py -3 -m pip depending on your system).

We recommend running prapti in a Python virtual environment such as venv.

2. Set up your OpenAI API key (or configure a local LLM)

export OPENAI_API_KEY=your-key-goes-here

(or setx on Windows, depending on your system). There are other ways to manage your environment variables; for details see OpenAI's Best Practices for API Key Safety.

To select a non-default organisation, use the OPENAI_ORGANIZATION environment variable.

To use an alternate OpenAI API key and/or organization specifically with prapti you can set PRAPTI_OPENAI_API_KEY and/or PRAPTI_OPENAI_ORGANIZATION. If set, prapti will use these environment variables in preference to OPENAI_API_KEY and OPENAI_ORGANIZATION.
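The precedence described above can be pictured with a small sketch. This mirrors the documented behaviour; the function name is illustrative, not prapti's internal API:

```python
import os

def resolve_openai_credentials(env=os.environ):
    """Pick the prapti-specific variables when set, falling back to the
    standard OpenAI ones. Returns (api_key, organization); either may be None."""
    api_key = env.get("PRAPTI_OPENAI_API_KEY") or env.get("OPENAI_API_KEY")
    organization = env.get("PRAPTI_OPENAI_ORGANIZATION") or env.get("OPENAI_ORGANIZATION")
    return api_key, organization
```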

See the documentation for local LLM configuration.

3. Check that the prapti tool runs manually in your terminal

First, create a test markdown file with a .md extension, or work with an existing markdown file such as chat-example.md.

Edit the file with a trailing user prompt like this:

### @user:

Write your prompt here.

Then run prapti to generate a new assistant response:

prapti chat-example.md
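To make the file-based workflow concrete, here is a minimal sketch of one round: parse the heading-delimited messages, generate a reply, and append it to the file text. The parsing and the `### @assistant:` heading mirror the format shown above; the function names and the stub reply are illustrative, not prapti's internals:

```python
import re

# Message boundaries are headings of the form '### @role:' on their own line.
HEADING = re.compile(r"^### @(\w+):\s*$", re.MULTILINE)

def parse_messages(markdown: str):
    """Split a conversation file into (role, content) pairs at '### @role:' headings."""
    parts = HEADING.split(markdown)
    # parts = [preamble, role1, content1, role2, content2, ...]
    return [(role, content.strip()) for role, content in zip(parts[1::2], parts[2::2])]

def append_response(markdown: str, reply: str) -> str:
    """Append an assistant response under a new heading, as prapti appends to the file."""
    return markdown.rstrip() + f"\n\n### @assistant:\n\n{reply}\n"

conversation = "### @user:\n\nWrite your prompt here.\n"
messages = parse_messages(conversation)  # the messages sent to the LLM
updated = append_response(conversation, "A stub reply.")
```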

4. Set up a keybinding to run prapti in your editor

Bind the key combination Ctrl-Enter to save the file and then run prapti (only when editing markdown files).

Below are the instructions for VSCode. If you use another editor please contribute instructions.

VSCode instructions

NOTE: This key binding runs prapti in the active VSCode terminal window. So make sure you have the terminal open with the prapti command available.

First use the Command Palette (Cmd-Shift-P on Mac, Ctrl-Shift-P on Windows) to run:

Preferences: Open Keyboard Shortcuts (JSON)

then add the following binding to the opened keybindings.json file.

{
    "key": "ctrl+enter",
    "command": "runCommands",
    "args":{
        "commands":[
            "workbench.action.files.save",
            "cursorBottom",
            {
                "command": "workbench.action.terminal.sendSequence",
                "args": { "text": "prapti ${file}\u000D" }
            },
            "cursorBottom"
        ]
    },
    "when": "editorLangId == markdown"
},

Now, when editing your markdown file you should be able to press Ctrl-Enter to get a response from the LLM. You can watch the terminal window for progress. Be patient: GPT-4 can take 30 seconds or more to generate a full response.

See our documentation for setting up syntax highlighting and chat-aware markdown display.

License

This project is MIT licensed.

