
A GPT conversational TUI tool that runs within the terminal.


GPTUI


GPTUI is a GPT conversational TUI (Textual User Interface) tool that runs within the terminal. It uses the Textual framework for its TUI and the plugin framework provided by Semantic Kernel. GPTUI offers a lightweight Kernel to power AI applications. The top-level TUI application is decoupled from the underlying Kernel, allowing you to easily replace the TUI interface or expand its functionalities. At present, only OpenAI's GPT models are supported; other LLM interfaces will be added later.

  [gptui demo]

TUI Features

  • Create and manage conversations with GPT.
  • Display context tokens in real-time.
  • View and adjust GPT conversation parameters at any time, such as temperature, top_p, presence_penalty, etc.
  • A dedicated channel to display internal process calls.
  • Offers a file channel through which you can upload to or download from GPT.
  • Optional plugin features (customizable, continuously being added and refined; some plugin prompts are still under development), including:
    • Internet search.
    • Open interpreter.
    • Reminders.
    • Recollecting memories from vectorized conversation history.

Compatibility

GPTUI runs in a command line environment and is compatible with Linux, macOS, Android, and of course Windows (I haven't tested it yet!). Using the functionality provided by textual-web, you can also run GPTUI in the browser and share it with remote friends.

⚙️ GPTUI Kernel

GPTUI offers a lightweight Kernel for building AI applications, allowing you to easily expand GPTUI's capabilities or construct your own AI application.

[gptui framework diagram]

The kernel relies on jobs and handlers to perform specific functions. To add new functionality, all you need to do is write or combine your own jobs and handlers. The manager and kernel of GPTUI are entirely independent of the client application, so you can effortlessly relocate the manager or kernel for use elsewhere. The application layer of GPTUI (the client) uses the CVM architecture, where the model layer provides foundational, reusable modules for interaction with LLMs, independent of specific view and controller implementations. If you wish to build your own AI application, you can start here, fully utilizing the kernel, manager, and models. To alter or expand UI functionality, typically only modifications to the controllers and views are needed.
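
The exact kernel interfaces are covered in the development documentation; the sketch below is purely illustrative of the job/handler idea and is not the actual gptui API. In this pattern, a job describes a unit of work and a handler dispatches it to the function that performs it:

from dataclasses import dataclass
from typing import Callable, Dict

# Illustrative only: these classes mimic the job/handler pattern described
# above and are NOT the real gptui kernel API (see the development docs).
@dataclass
class Job:
    name: str
    payload: dict

class Handler:
    def __init__(self) -> None:
        self._registry: Dict[str, Callable[[Job], None]] = {}

    def register(self, job_name: str, func: Callable[[Job], None]) -> None:
        # Map a job name to the function that performs it.
        self._registry[job_name] = func

    def handle(self, job: Job) -> None:
        # Dispatch a job to its registered function.
        self._registry[job.name](job)

handler = Handler()
handler.register("echo", lambda job: print(job.payload["text"]))
handler.handle(Job(name="echo", payload={"text": "hello from a job"}))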

See the Development Documentation for details.

Installation

Normal use requires a stable network connection to OpenAI. If you encounter any issues, please refer to troubleshooting.

Install with pip

pip install gptui

Configure your API keys before running.

To run:

gptui

Specify config file:

gptui --config <your_config_file_path>

The program locates its configuration file through the following steps:

  1. Read the configuration file given by --config. If not specified, proceed to the next step.
  2. Search for ~/.gptui/.config.yml in the user directory. If not found, move to the next step.
  3. Copy the default configuration file gptui/config.yml to ~/.gptui/.config.yml and use it.
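
For illustration only (this is not the program's actual source code), the lookup order above amounts roughly to the following sketch, which assumes the default config.yml ships inside the installed gptui package:

import shutil
from pathlib import Path
from typing import Optional

# Illustrative sketch of the configuration lookup order described above;
# not the actual gptui implementation.
def resolve_config(cli_path: Optional[str]) -> Path:
    if cli_path:                                 # 1. --config was given
        return Path(cli_path)
    user_config = Path.home() / ".gptui" / ".config.yml"
    if user_config.exists():                     # 2. user-level config exists
        return user_config
    default = Path("gptui") / "config.yml"       # 3. packaged default (path assumed)
    user_config.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(default, user_config)
    return user_config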

Install from source

git clone https://github.com/happyapplehorse/gptui.git
cd gptui
pip install .

API configuration is required before running.

To run:

gptui
# Or you can also use
# python -m gptui

You can also run the startup script directly (this lets you modify the source code and run it immediately). First, install the dependencies:

pip install -r requirements.txt

Then, run the startup script:

python main.py

When running the program with python main.py or python -m gptui, gptui/config.yml is used as the configuration file.

On Linux or macOS systems, if you want to use the voice and TTS (text-to-speech) functionalities, you'll need to install pyaudio and espeak separately (only this method is provided for now, and the performance is not very good).
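
For example, on a Debian or Ubuntu system the extra dependencies can usually be installed as shown below; the package names are assumptions and may differ on your distribution, and on macOS the Homebrew equivalents are shown:

# Debian/Ubuntu (assumed package names)
sudo apt install espeak portaudio19-dev
pip install pyaudio

# macOS with Homebrew (assumed package names)
brew install espeak portaudio
pip install pyaudio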

Configuration

Config API keys

Configure the corresponding API Keys in ~/.gptui/.env_gptui. Refer to the .env_gptui.example file. When using the "WebServe" plugin, GOOGLE_KEY and GOOGLE_CX need to be provided, which can be obtained free of charge from Google.
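
A minimal ~/.gptui/.env_gptui might look like the sketch below. GOOGLE_KEY and GOOGLE_CX are the names mentioned above; the OpenAI key name shown is an assumption, so check .env_gptui.example for the exact variable names:

# Sketch of ~/.gptui/.env_gptui; see .env_gptui.example for the exact keys.
OPENAI_API_KEY="sk-..."            # assumed name for the OpenAI API key
GOOGLE_KEY="your-google-api-key"   # needed by the WebServe plugin
GOOGLE_CX="your-google-cx-id"      # needed by the WebServe plugin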

Config File

See ./config.yml for a config file example that lists all configurable options. Depending on the platform you are using, it is best to configure the following option:

  • os: system platform

Otherwise, some features, such as code copying and voice-related functions, may not work properly.
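
For instance, on a Linux machine the relevant line in the config file might look like the sketch below; treat the value string as an assumption and check ./config.yml for the accepted values:

# Excerpt of ~/.gptui/.config.yml (illustrative; see ./config.yml for accepted values)
os: linux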

Quick Start

Interface Layout

[gptui interface layout]

  • chat area: Display area for chat content.
  • status area: Program status display area, displaying response animations and notifications.
  • input area: Chat content input area.
  • auxiliary area: Auxiliary information area, displaying "internal communication" between the program and the LLM, including function call information, etc.
  • control area: The program's control area, where you can view and set the state of the program, such as changing OpenAI chat parameters.
  • chat tabs: Conversation Tab Bar.
  • conversation control: Conversation control buttons. From top to bottom they are:
    • +: New conversation
    • >: Save conversation
    • <: Load conversation
    • -: Delete conversation
    • x: Delete conversation file
    • n: Disposable conversation
    • : Upload file
  • panel selector: Panel selection area. From top to bottom they are:
    • C: Conversation file records
    • D: System file tree
    • A: Auxiliary information panel
    • T: File pipeline panel
    • P: Plugin selection panel
  • switches: Direct control switches. From left to right they are:
    • R: Program state auto save and restore switch
    • V: Voice switch
    • S: Read reply by voice
    • F: Fold files in chat
    • |Exit|: Exit program
  • dashboard: Context window size for chat.
  • others:
    • <: Previous chat
    • >: Next chat
    • 1: Number of chats
    • : Running status (see Running status below)
    • : Fold right non-chat area
    • ?: Help documentation

Running status

: Ready.
: Task running.

Dynamic Commands

Switch to S in the control area, enter the command, and press Enter. The following commands are currently supported:

  • Set chat parameters
    • Command: set_chat_parameters()
    • Parameters: OpenAI chat parameters in dictionary form; refer to OpenAI Chat.
    • Example: set_chat_parameters({"model": "gpt-4", "stream": True})
  • Set max sending tokens ratio
    • Command: set_max_sending_tokens_ratio()
    • Parameters: The ratio of the number of sent tokens to the total token window, as a float. The remaining token count is used as the limit on the number of tokens GPT returns.
    • Example: set_max_sending_tokens_ratio(0.5)
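
For example, with an 8,192-token context window, set_max_sending_tokens_ratio(0.5) allows up to 4,096 tokens to be sent as context and reserves the remaining 4,096 tokens for GPT's reply.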

Hotkeys

GPTUI provides hotkeys for commonly used features; see Help. In addition, you can press ESC or ctrl+[ to bring up the hotkey menu (these shortcuts are not completely consistent with the direct hotkeys!).

Documentation

For detailed instructions, see here; for the in-program help documentation, see here; for further development, see here.

Contribution

Some of GPTUI's plugin features rely on prompts, and you can help me continue to improve these prompts. I'd also like to have appropriate animation cues during certain state changes. If you have any creative ideas, I'd appreciate your help in implementing them. P.S.: Each contributor can leave a quote in the program.

License

GPTUI is built upon many outstanding open-source components and is released under the MIT License. You are free to use it.


