
A REPL for ChatGPT

Project description

LLM REPL

What is this?

The goal of this project is to create a simple, interactive REPL (Read-Eval-Print-Loop) that allows users to interact with a variety of Large Language Models (LLMs). The project is mainly built on top of two Python libraries: langchain, which provides a convenient and flexible interface for working with LLMs, and rich, which provides a user-friendly interface for the REPL. The REPL can also be launched in headless mode and interacted with over a WebSocket.

Currently, the project is in development and only supports interaction with ChatGPT, but it has been structured to make it easy to extend to other LLMs, including custom ones (by extending BaseLLM in ./src/llm_repl/llms/__init__.py).
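
As a rough illustration, a custom backend could look like the sketch below. The actual BaseLLM interface is not documented here, so the attribute and method names (name, process) are assumptions rather than the project's real API:

```python
# Hypothetical custom backend; the real BaseLLM interface in
# ./src/llm_repl/llms/__init__.py may differ from this sketch.
from llm_repl.llms import BaseLLM


class EchoLLM(BaseLLM):
    """Toy model that simply echoes the prompt back."""

    name = "echo"  # assumed: how the REPL would identify this model

    def process(self, prompt: str) -> str:
        # A real backend would call an actual model API here.
        return f"You said: {prompt}"
```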

ChatGPT can be used through the models gpt-3.5-turbo and gpt-4 (for users with access to the GPT-4 API beta).

Features

The REPL supports the following features:

Streaming Mode

The REPL doesn't wait for the model to finish generating the output; it starts printing tokens as soon as they become available.
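
Under the hood, this is the kind of token streaming that langchain exposes. A minimal standalone sketch using langchain's classic API follows; how llm-repl actually wires the stream into its rich interface is an assumption and not shown here:

```python
# Standalone streaming sketch with langchain's classic API; llm-repl's
# own wiring (into rich) is an assumption and not reproduced here.
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    streaming=True,  # emit tokens as they are generated
    callbacks=[StreamingStdOutCallbackHandler()],  # print each token to stdout
)
llm([HumanMessage(content="Tell me a joke")])  # output appears incrementally
```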


Conversation Memory

The REPL supports conversation memory. This means that the model will remember the previous conversation and will use it to generate the next response.
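
A minimal sketch of how this works with langchain's ConversationBufferMemory; whether llm-repl uses this exact memory class is an assumption:

```python
# Conversation memory sketch with langchain; the memory class llm-repl
# actually uses is an assumption.
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

chain = ConversationChain(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
    memory=ConversationBufferMemory(),  # keeps the running transcript
)
chain.predict(input="My name is Ada.")
print(chain.predict(input="What is my name?"))  # can now answer "Ada"
```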


Pretty Printing

The REPL supports Markdown rendering of both the input and the output.

PS: In this initial version of the REPL, full Markdown syntax is only rendered when running the tool in non-streaming mode. In streaming mode, only code sections are pretty printed.
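
For reference, this is what Markdown rendering looks like with rich, the library the REPL's interface is built on; the snippet is illustrative and not the project's actual rendering code:

```python
# Illustrative Markdown rendering with rich; not llm-repl's own code.
from rich.console import Console
from rich.markdown import Markdown

console = Console()
console.print(Markdown("# Heading\n\nSome *emphasis*, a list:\n\n- one\n- two"))
```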


Headless Mode

The REPL can be run in headless mode, meaning it can be interacted with over a WebSocket. This is useful for integrating the REPL with other applications or UIs.

To launch the REPL in headless mode, run the following command:

```
llm-repl --repl websocket --port <PORT>
```
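
A hypothetical client could then drive the REPL with the third-party websockets package. The port, URL, and plain-text message framing below are assumptions, not the project's documented protocol:

```python
# Hypothetical websocket client for the headless REPL; the URL and the
# plain-text request/reply framing are assumptions.
import asyncio

import websockets


async def main() -> None:
    async with websockets.connect("ws://localhost:8765") as ws:  # assumed port
        await ws.send("Summarize the REPL's features.")  # send a prompt
        reply = await ws.recv()  # how streamed replies are framed is assumed
        print(reply)


asyncio.run(main())
```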

Model Switching on the Fly

COMING SOON...

Conversation History

COMING SOON...

Installation

```
pip install llm-repl
```

Usage

First export your OpenAI API key as an environment variable:

```
export OPENAI_API_KEY=<OPENAI_KEY>
```

Then run the REPL:

```
llm-repl
```

Or if you want to use a specific model:

```
llm-repl --llm chatgpt4
```

Run inside Docker

```
docker run -it --rm -e OPENAI_API_KEY=<OPENAI_KEY> phate/llm-repl
```

Or if you want to source the environment variables from a file, first create a file called .env with the following content:

```
OPENAI_API_KEY=<OPENAI_KEY>
```

And then run the following command:

```
docker run -it --rm --env-file .env phate/llm-repl
```

Development

To install the REPL in development mode, first clone the repository and then install the package with:

```
pip install -e ".[DEV]"
```

Before contributing, please make sure to run the following command:

```
pre-commit install
```



Download files

Download the file for your platform.

Source Distribution

llm-repl-0.0.2.tar.gz (13.2 kB)


Built Distribution

llm_repl-0.0.2-py3-none-any.whl (15.6 kB)


File details

Details for the file llm-repl-0.0.2.tar.gz.

File metadata

  • Download URL: llm-repl-0.0.2.tar.gz
  • Upload date:
  • Size: 13.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.6

File hashes

Hashes for llm-repl-0.0.2.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 07ffb1027424e6653338e974485c849f7b9c28d1d604c62612a1b953930a5717 |
| MD5 | 7a2d7d0b0fadf60e3589f40aadf342c2 |
| BLAKE2b-256 | e478b801396164a8f2500b262f99f6a913349297904c0cc3b776455d66a3ed64 |


File details

Details for the file llm_repl-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: llm_repl-0.0.2-py3-none-any.whl
  • Upload date:
  • Size: 15.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.6

File hashes

Hashes for llm_repl-0.0.2-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 2efb3d103a16bceb479b13d6bb1301ef76df51cd04c71b12993be5025146e53c |
| MD5 | b68a463f9aa6324efdc3abe1c9966f6a |
| BLAKE2b-256 | a1b7bf8ed1d2c96209546cdf028fccf8c69ed15dfb7ceab869f41556eef265c4 |

