A REPL for ChatGPT
LLM REPL
What is this?
The goal of this project is to provide a simple, interactive REPL (Read-Eval-Print Loop) that allows users to interact with a variety of Large Language Models (LLMs). The project is built mainly on top of two Python libraries: langchain, which provides a convenient and flexible interface for working with LLMs, and rich, which provides a user-friendly interface for the REPL.
Currently, the project is in development and only supports interaction with ChatGPT, but it has been structured to make it easy to extend to any LLM, including custom ones (by extending BaseLLM in ./src/llm_repl/llms/__init__.py).
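As an illustration, adding a custom backend boils down to subclassing the project's base class and implementing its abstract methods. The sketch below is hypothetical: the `BaseLLM` defined here is a minimal stand-in, and the `process` method name is an assumption, not the real interface from ./src/llm_repl/llms/__init__.py.

```python
from abc import ABC, abstractmethod

# Hypothetical stand-in for the project's BaseLLM; the real interface
# lives in ./src/llm_repl/llms/__init__.py and may differ.
class BaseLLM(ABC):
    @abstractmethod
    def process(self, user_input: str) -> str:
        """Send a prompt to the model and return its reply."""

class EchoLLM(BaseLLM):
    """A trivial custom backend that just echoes the prompt back."""
    name = "echo"

    def process(self, user_input: str) -> str:
        return f"echo: {user_input}"

llm = EchoLLM()
print(llm.process("hello"))  # → echo: hello
```

The REPL would then only need to know the new backend's name to route user input to it.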
ChatGPT can be accessed through the models gpt-3.5-turbo and gpt-4 (for users who have GPT-4 API beta access).
Features
The REPL supports the following features:
Streaming Mode
The REPL won't wait for the model to finish generating the output; it will start printing the output as soon as it is available.
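Conceptually, streaming means consuming tokens from a generator and printing each one the moment it arrives, instead of joining them into a full reply first. A stdlib-only sketch of the idea, where `fake_stream` stands in for the model's token stream:

```python
import sys
from typing import Iterator

def fake_stream() -> Iterator[str]:
    """Stand-in for an LLM token stream."""
    for token in ["Hello", ",", " ", "world", "!"]:
        yield token

def run_streaming(stream: Iterator[str]) -> str:
    """Print tokens as soon as they arrive; return the full reply."""
    chunks = []
    for token in stream:
        sys.stdout.write(token)   # printed immediately, no waiting
        sys.stdout.flush()
        chunks.append(token)
    sys.stdout.write("\n")
    return "".join(chunks)

reply = run_streaming(fake_stream())  # reply == "Hello, world!"
```

With a real backend, langchain delivers the tokens via a streaming callback, but the print-as-you-go loop is the same.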
Conversation Memory
The REPL supports conversation memory. This means that the model will remember the previous conversation and will use it to generate the next response.
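One common way this works is to replay the full message history to the model on every turn. A minimal sketch, where `ask_model` is a hypothetical stand-in for the real LLM call:

```python
# Minimal sketch of conversation memory: the accumulated history is
# passed to the model on every turn, so it can refer back to earlier
# messages. `ask_model` is a stand-in, not the project's real API.
def ask_model(messages):
    # Pretend the model answers with how many messages it can see.
    return f"I remember {len(messages)} messages."

class Conversation:
    def __init__(self):
        self.history = []  # list of (role, text) pairs

    def say(self, text):
        self.history.append(("user", text))
        reply = ask_model(self.history)
        self.history.append(("assistant", reply))
        return reply

chat = Conversation()
chat.say("My name is Ada.")
print(chat.say("What is my name?"))  # the model sees 3 prior messages
```

In the real REPL, langchain's memory classes play the role of the `history` list here.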
Pretty Printing
The REPL supports Markdown rendering both of the input and the output.
PS: In this initial version of the REPL, the full Markdown syntax is only supported when running the tool in non-streaming mode. In streaming mode, only code sections will be pretty printed.
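To illustrate why code sections can still be pretty printed while streaming: the stream only needs to be split on ``` fences, buffering fenced content until the fence closes so it can be rendered (e.g. with rich) as one block. This is a sketch of the technique, not the project's actual implementation:

```python
# Split a stream of lines into plain-text and code blocks. Text outside
# ``` fences can be emitted as-is; text inside is buffered until the
# closing fence, at which point it can be pretty printed as a unit.
def split_stream(lines):
    blocks = []            # (kind, text) pairs: "text" or "code"
    in_code, buf = False, []
    for line in lines:
        if line.strip().startswith("```"):
            blocks.append(("code" if in_code else "text", "\n".join(buf)))
            in_code, buf = not in_code, []
        else:
            buf.append(line)
    if buf:
        blocks.append(("text", "\n".join(buf)))
    return [b for b in blocks if b[1]]

out = split_stream(["Here:", "```", "print('hi')", "```", "done"])
# out == [("text", "Here:"), ("code", "print('hi')"), ("text", "done")]
```

Full Markdown (headings, tables, nested lists) is harder to render incrementally, which is why it is limited to non-streaming mode.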
Model Switching
The REPL supports switching between different models. At the moment, the only supported LLMs are chatgpt and chatgpt4.
COMING SOON...
Installation
pip install llm-repl
Usage
First export your OpenAI API key as an environment variable:
export OPENAI_API_KEY=<OPENAI_KEY>
Then run the REPL:
llm-repl
Or if you want to use a specific model:
llm-repl --llm chatgpt4
Run inside Docker
docker run -it --rm -e OPENAI_API_KEY=<OPENAI_KEY> phate/llm-repl
Or if you want to source the environment variables from a file, first create a file called .env
with the following content:
OPENAI_API_KEY=<OPENAI_KEY>
And then run the following command:
docker run -it --rm --env-file .env phate/llm-repl
Development
To install the REPL in development mode, install the package with the development extras:
pip install -e ".[DEV]"
Before contributing, please make sure to run the following command:
pre-commit install
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file llm-repl-0.0.1.tar.gz.
File metadata
- Download URL: llm-repl-0.0.1.tar.gz
- Upload date:
- Size: 9.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | 78b2a0057fbb75ab7c35e2d7103a0f61a8204af1cb56b0d9049f6ee840934969
MD5 | a84057670690fea7792fb3bd23c761f9
BLAKE2b-256 | 69e647f6fa1800f5d5845b00a725cbcd9e7d3ca909e3dce7ad5f5c5ed66e1dce
File details
Details for the file llm_repl-0.0.1-py3-none-any.whl.
File metadata
- Download URL: llm_repl-0.0.1-py3-none-any.whl
- Upload date:
- Size: 10.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | 827953c85ff927fb547156bac438c45085dd47b5d23138745c03a7fe4acae0ab
MD5 | 5d129504fa8e64deaaae4e777cbdda40
BLAKE2b-256 | 2d1d64e7449b9aaec2d265a75920fd7061241347ffeacdd0fd308f270f924c5f