A REPL for ChatGPT

LLM REPL

What is this?

The goal of this project is to create a simple, interactive REPL (Read-Eval-Print Loop) that allows users to interact with a variety of Large Language Models (LLMs). The project is mainly built on top of two Python libraries: langchain, which provides a convenient and flexible interface for working with LLMs, and rich, which provides a user-friendly interface for the REPL. The REPL can also be launched in headless mode and interacted with over a WebSocket.

Currently, the project is in development and only supports interaction with ChatGPT, but it has been structured to make it easy to extend to any LLM, including custom ones (by extending BaseLLM in ./src/llm_repl/llms/__init__.py).

ChatGPT can be used with the models gpt-3.5-turbo and gpt-4 (for users with access to the GPT-4 API beta).
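As a rough illustration of that extension point, a custom backend might look like the sketch below. This is hypothetical: the name attribute and process method assumed here are guesses, so check the real BaseLLM contract in ./src/llm_repl/llms/__init__.py before copying it.

# Hypothetical sketch of a custom LLM backend for llm-repl.
# The BaseLLM interface assumed here is a guess, not taken from
# the project source.
from llm_repl.llms import BaseLLM

class EchoLLM(BaseLLM):
    """A toy backend that simply echoes the prompt back."""

    name = "echo"  # assumed identifier, e.g. for: llm-repl --llm echo

    def process(self, prompt: str) -> str:
        # A real backend would call a model here (e.g. through langchain).
        return f"You said: {prompt}"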

Features

The REPL supports the following features:

Streaming Mode

The REPL doesn't wait for the model to finish generating the output; it starts printing it as soon as it becomes available.
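llm-repl's own streaming code isn't reproduced here, but since the project is built on langchain, the mechanism is conceptually similar to langchain's streaming callbacks. A minimal, standalone sketch using the classic langchain API (requires OPENAI_API_KEY to be set):

# Token-by-token streaming with classic langchain; this illustrates the
# mechanism, not llm-repl's actual implementation.
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(
    streaming=True,  # stream tokens as they are generated
    callbacks=[StreamingStdOutCallbackHandler()],  # print each token immediately
)
chat([HumanMessage(content="Explain what a REPL is in one sentence.")])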

Conversation Memory

The REPL supports conversation memory. This means that the model will remember the previous conversation and will use it to generate the next response.
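As background rather than the project's actual code: since the REPL is built on langchain, its memory feature is conceptually similar to langchain's conversation memory, which works along these lines.

# Conversation memory with classic langchain; the REPL's internals may
# differ, this only shows the underlying idea.
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

chain = ConversationChain(
    llm=ChatOpenAI(temperature=0),
    memory=ConversationBufferMemory(),  # keeps the full chat history
)
chain.predict(input="My name is Ada.")
chain.predict(input="What is my name?")  # answerable thanks to the stored history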

Pretty Printing

The REPL supports Markdown rendering of both the input and the output.

PS: In this initial version of the REPL, the full Markdown syntax is only supported when running the tool in non-streaming mode. In streaming mode, only code blocks are pretty-printed.
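The rendering itself comes from rich. As a minimal illustration of what the REPL does with model output (plain rich usage, nothing specific to llm-repl):

# Rendering Markdown in the terminal with rich, the library the REPL
# uses for pretty printing.
from rich.console import Console
from rich.markdown import Markdown

console = Console()
console.print(Markdown("# Hello\n\nSome *emphasis* and a list:\n\n- one\n- two"))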

Headless Mode

The REPL can be run in headless mode, in which it is controlled over a WebSocket instead of the terminal. This is useful for integrating the REPL with other applications or UIs.

To launch the REPL in headless mode, run the following command:

llm-repl --repl websocket --port <PORT>
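A client can then connect with any WebSocket library. The sketch below uses the third-party websockets package and assumes plain-text prompts and replies on the wire; the server's actual message format may differ, so treat that as an assumption.

# Hypothetical client for headless mode, built on the websockets library.
# The wire format (plain-text prompt in, plain-text reply out) is an
# assumption; adjust it to whatever the server actually speaks.
import asyncio
import websockets

async def main() -> None:
    # Use the same port passed to: llm-repl --repl websocket --port <PORT>
    async with websockets.connect("ws://localhost:8765") as ws:
        await ws.send("What is a REPL?")
        print(await ws.recv())

asyncio.run(main())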

Model Switching on the Fly

COMING SOON...

Conversation History

COMING SOON...

Installation

pip install llm-repl

Usage

First export your OpenAI API key as an environment variable:

export OPENAI_API_KEY=<OPENAI_KEY>

Then run the REPL:

llm-repl

Or if you want to use a specific model:

llm-repl --llm chatgpt4

Run inside Docker

docker run -it --rm -e OPENAI_API_KEY=<OPENAI_KEY> phate/llm-repl

Or if you want to source the environment variables from a file, first create a file called .env with the following content:

OPENAI_API_KEY=<OPENAI_KEY>

And then run the following command:

docker run -it --rm --env-file .env phate/llm-repl

Development

To install the REPL in development mode, first clone the repository.

Then install the package in development mode:

pip install -e ".[DEV]"

Before contributing, please make sure to run the following command:

pre-commit install
