Command-line interface using GPT4All bindings
A simple GNU Readline-based application for interaction with chat-oriented AI models using GPT4All Python bindings.
Install
The following installation options are available:
Pip
$ pip install git+https://github.com/sergei-mironov/gpt4all-cli.git
Note: `pip install gpt4all-cli` might also work, but the git+https method brings the most recent version.
Nix
$ git clone --depth=1 https://github.com/sergei-mironov/gpt4all-cli && cd gpt4all-cli
# Optionally, change the 'nixpkgs' input of the flake.nix to a more suitable version
$ nix profile install ".#python-gpt4all-cli"
Usage
usage: gpt4all-cli [-h] [--model-dir MODEL_DIR] [--model [STR1:]STR2]
[--num-threads NUM_THREADS] [--model-apikey STR]
[--model-temperature MODEL_TEMPERATURE] [--device DEVICE]
[--readline-key-send READLINE_KEY_SEND]
[--readline-prompt READLINE_PROMPT]
[--readline-history FILE] [--verbose NUM] [--revision]
Command-line arguments
options:
-h, --help show this help message and exit
--model-dir MODEL_DIR
Model directory to prepend to model file names
--model [STR1:]STR2, -m [STR1:]STR2
Model to use. STR1 is 'gpt4all' (the default) or
'openai'. STR2 is the model name
--num-threads NUM_THREADS, -t NUM_THREADS
Number of threads to use
--model-apikey STR Model provider-specific API key
--model-temperature MODEL_TEMPERATURE
Temperature parameter of the model
--device DEVICE, -d DEVICE
Device to use for chatbot, e.g. gpu, amd, nvidia,
intel. Defaults to CPU
--readline-key-send READLINE_KEY_SEND
Terminal code to treat as Ctrl+Enter (default: \C-k)
--readline-prompt READLINE_PROMPT
Input prompt (default: >>>)
--readline-history FILE
History file name (default is '_gpt4all_cli_history';
set empty to disable)
--verbose NUM Set the verbosity level 0-no,1-full
--revision Print the revision
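For illustration, a typical invocation might combine several of these options. This is only a sketch: the model directory and file name below are the ones used in the example session further down, and any locally downloaded GGUF file can be substituted.

$ gpt4all-cli \
    --model-dir ~/.local/share/nomic.ai/GPT4All \
    --model "gpt4all:Meta-Llama-3-8B-Instruct.Q4_0.gguf" \
    --num-threads 4 \
    --device gpu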
The console accepts a language defined by the following grammar:
start: (command | escape | text)? (command | escape | text)*
escape.3: /\\./
command.2: /\/ask|\/exit|\/help|\/reset/ | \
/\/model/ / +/ string | \
/\/apikey/ / +/ string | \
/\/nthreads/ / +/ (number | def) | \
/\/verbose/ / +/ (number | def) | \
/\/temp/ / +/ (float | def ) | \
/\/echo/ | /\/echo/ / /
string: /"[^\"]+"/ | /""/
number: /[0-9]+/
float: /[0-9]+\.[0-9]*/
def: "default"
text: /(.(?!\/|\\))*./s
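In other words, besides plain text the input may contain slash-commands. The /temp, /nthreads and /verbose commands (which appear to mirror the --model-temperature, --num-threads and --verbose options) take either a value or the word default, and a backslash escapes a character that would otherwise start a command. A hypothetical exchange, with model output omitted:

>>> /temp 0.7
>>> /nthreads default
>>> /verbose 1
>>> The token \/ask here is escaped, so it is treated as text rather than a command.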
Example session
$ gpt4all-cli
Type /help or a question followed by the /ask command (or by pressing `C-k` key).
>>> /model "~/.local/share/nomic.ai/GPT4All/Meta-Llama-3-8B-Instruct.Q4_0.gguf"
>>> Hi!
>>> /ask
Hello! I'm happy to help you. What's on your mind?^C
>>> What's your name?
>>> /ask
I don't really have a personal name, but you can call me "Assistant"
Vim integration
Gpt4all-cli is supported by the Litrepl text processor.