
Command-line interface for a number of AI models

Project description

A (yet another) GNU Readline-based application for interacting with chat-oriented AI models.

Features

This application is designed with a focus on minimizing code size. As a Unix-style program, it can be used either as an interactive terminal with command completion or as a shebang script runner.

Supported model providers:

  • OpenAI via REST API. Tested with the gpt-4o text model and the dall-e-2 and dall-e-3 image models.
  • GPT4All via Python bindings

The scripting language allows basic processing involving buffer variables and file manipulations. For advanced scripting, we suggest using text session management tools such as Expect or Litrepl (by the same author).
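For instance, a script can be run via the shebang mechanism. The snippet below is an illustrative sketch only; the key-file path and output file name are placeholders:

```text
#!/usr/bin/env aicli
/model openai:"gpt-4o"
/set model apikey file:"/path/to/your/openai/apikey"
Summarize the history of Unix in one paragraph.
/ask
/cp buffer:out file:summary.txt
```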

Install

The following installation options are available:

Stable release

You can install the stable release of the project using Pip, the default package manager for Python.

$ pip install sm_aicli

Latest or development version

Latest version using Pip

To install the latest version of sm_aicli directly from the GitHub repository, you can use Pip with the Git URL.

$ pip install git+https://github.com/sergei-mironov/aicli.git

Latest version using Nix

To install the latest version of aicli using Nix, first clone the repository. Nix will automatically fetch and manage all necessary dependencies.

$ git clone --depth=1 https://github.com/sergei-mironov/aicli && cd aicli
# Optionally, change the 'nixpkgs' input of the flake.nix to a more suitable version
$ nix profile install ".#python-aicli"

Development shell

Set up a development environment using Nix to work on the project. Clone the repository and activate the development shell with the following commands:

$ git clone --depth=1 https://github.com/sergei-mironov/aicli && cd aicli
$ nix develop

Quick start

Below is a simple OpenAI terminal session. Commands start with /, and everything after a # is treated as a comment and ignored. Other text is collected into a buffer and sent to the model by the /ask command. Please replace YOUR_API_KEY with your actual API key.

$ aicli
>>> /model openai:"gpt-4o"
>>> /set model apikey verbatim:YOUR_API_KEY # <--- Your OpenAI API key goes here
# Other option here is:
# /set model apikey file:"/path/to/your/openai/apikey"
>>> Tell me about monkeys
>>> /ask

Monkeys are fascinating primates that belong to two main groups: New World monkeys and
Old World monkeys. Here's a brief overview of ...

The last model answer is recorded into the out buffer. Let's print it again and save it to a file using the /cp command:

>>> /cat buffer:out
..
>>> /cp buffer:out file:monkey.txt

For the full list of commands, check the grammar reference below. Also, the ./ai folder of this repo contains example scripts.

Reference

Command-line reference

usage: aicli [-h] [--model-dir MODEL_DIR] [--image-dir IMAGE_DIR]
             [--model [STR1:]STR2] [--num-threads NUM_THREADS]
             [--model-apikey STR] [--model-temperature MODEL_TEMPERATURE]
             [--device DEVICE] [--readline-key-send READLINE_KEY_SEND]
             [--readline-prompt READLINE_PROMPT] [--readline-history FILE]
             [--verbose NUM] [--revision] [--version] [--rc RC] [-K]
             [filenames ...]

Command-line arguments

positional arguments:
  filenames             List of filenames to process

options:
  -h, --help            show this help message and exit
  --model-dir MODEL_DIR
                        Model directory to prepend to model file names
  --image-dir IMAGE_DIR
                        Directory in which to store images
  --model [STR1:]STR2, -m [STR1:]STR2
                        Model to use. STR1 is 'gpt4all' (the default) or
                        'openai'. STR2 is the model name
  --num-threads NUM_THREADS, -t NUM_THREADS
                        Number of threads to use
  --model-apikey STR    Model provider-specific API key
  --model-temperature MODEL_TEMPERATURE
                        Temperature parameter of the model
  --device DEVICE, -d DEVICE
                        Device to use for chatbot, e.g. gpu, amd, nvidia,
                        intel. Defaults to CPU
  --readline-key-send READLINE_KEY_SEND
                        Terminal code to treat as Ctrl+Enter (default: \C-k)
  --readline-prompt READLINE_PROMPT, -p READLINE_PROMPT
                        Input prompt (default: >>>)
  --readline-history FILE
                        History file name (default is '_sm_aicli_history'; set
                        empty to disable)
  --verbose NUM         Set the verbosity level 0-no,1-full
  --revision            Print the revision
  --version             Print the version
  --rc RC               List of config file names (','-separated, use empty or
                        'none' to disable)
  -K, --keep-running    Open interactive shell after processing all positional
                        arguments

Commands overview

Command  Arguments      Description
/append  REF REF        Append a file, a buffer or a constant to a file or to a buffer.
/ask                    Send the collected buffer contents to the current model.
/cat     REF            Print a file or a buffer to STDOUT.
/cd      REF            Change the current directory to the specified path.
/clear   REF            Clear the specified buffer.
/cp      REF REF        Copy a file, a buffer or a constant into a file or into a buffer.
/dbg                    Run the Python debugger.
/echo                   Echo the following line to STDOUT.
/exit                   Exit the application.
/help                   Print help.
/model   PROVIDER:NAME  Set the current model. The model is allocated on first use.
/paste   BOOL           Enable or disable paste mode.
/read    WHERE          Read the content of the 'IN' buffer into a special variable.
/reset                  Reset the conversation and all the models.
/set     WHAT           Set a terminal or model option; see the Grammar reference for the full list.
/shell   REF            Run a system shell command.
/version                Print the version.

Grammar reference

The console accepts a language defined by the following grammar:

start: (command | escape | text)? (command | escape | text)*
text: TEXT
escape: ESCAPE
# Commands start with `/`. Use `\/` to process next `/` as a regular text.
# The commands are:
command.1: /\/version/ | \
           /\/dbg/ | \
           /\/reset/ | \
           /\/echo/ | \
           /\/ask/ | \
           /\/help/ | \
           /\/exit/ | \
           /\/model/ / +/ model_ref | \
           /\/read/ / +/ /model/ / +/ /prompt/ | \
           /\/set/ / +/ (/model/ / +/ (/apikey/ / +/ ref | \
                                       (/t/ | /temp/) / +/ (FLOAT | DEF) | \
                                       (/nt/ | /nthreads/) / +/ (NUMBER | DEF) | \
                                       /imgsz/ / +/ string | \
                                       /verbosity/ / +/ (NUMBER | DEF)) | \
                             (/term/ | /terminal/) / +/ (/modality/ / +/ MODALITY | \
                                                         /rawbin/ / +/ BOOL | \
                                                         /prompt/ / +/ string | \
                                                         /width/ / +/ (NUMBER | DEF))) | \
           /\/cp/ / +/ ref / +/ ref | \
           /\/append/ / +/ ref / +/ ref | \
           /\/cat/ / +/ ref | \
           /\/clear/ / +/ ref | \
           /\/shell/ / +/ ref | \
           /\/cd/ / +/ ref | \
           /\/paste/ / +/ BOOL

# Strings can start and end with a double-quote. Unquoted strings should not contain spaces.
string:  "\"" "\"" | "\"" STRING_QUOTED "\"" | STRING_UNQUOTED

# Model references are strings with the provider prefix
model_ref: (PROVIDER ":")? string

# References mention locations which could be either a file (`file:path/to/file`), a binary file
# (`bfile:path/to/file`), a named memory buffer (`buffer:name`) or a read-only string constant
# (`verbatim:ABC`).
ref: (SCHEMA ":")? string -> ref | \
     /file/ (/\(/ | /\(/ / +/) ref (/\)/ | / +/ /\)/) -> ref_file

# Base token types
ESCAPE.5: /\\./
SCHEMA.4: /verbatim/|/file/|/bfile/|/buffer/
PROVIDER.4: /openai/|/gpt4all/|/dummy/
STRING_QUOTED.3: /[^"]+/
STRING_UNQUOTED.3: /[^"\(\)][^ \(\)\n]*/
TEXT.0: /([^#](?!\/))*[^\/#]/s
NUMBER: /[0-9]+/
FLOAT: /[0-9]+\.[0-9]*/
DEF: "default"
BOOL: /true/|/false/|/yes/|/no/|/on/|/off/|/1/|/0/
MODALITY: /img/ | /text/
%ignore /#[^\n]*/
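For illustration, here are a few commands exercising the ref forms defined above; the buffer and file names are invented for the example:

```text
>>> /cp verbatim:"Hello, world" buffer:greeting  # constant into a named buffer
>>> /append buffer:greeting file:log.txt         # buffer into a text file
>>> /cat bfile:picture.png                       # print a binary file
```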

By default, the application tries to read configuration files, starting from the / directory down to the current directory. The contents of _aicli, .aicli, _sm_aicli and .sm_aicli files are interpreted as commands.
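The search order described above can be sketched as follows; the helper name is an assumption, and only candidate files that actually exist would be read:

```python
# Sketch of the rc-file search order: collect candidate configuration
# files from the filesystem root down to the current directory.
from pathlib import PurePosixPath

RC_NAMES = ("_aicli", ".aicli", "_sm_aicli", ".sm_aicli")

def rc_candidates(cwd):
    p = PurePosixPath(cwd)
    dirs = list(reversed([p, *p.parents]))    # '/' first, cwd last
    return [d / name for d in dirs for name in RC_NAMES]

cands = rc_candidates("/home/user")
```

For example, `rc_candidates("/home/user")` starts with `/_aicli` and ends with `/home/user/.sm_aicli`.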

Architecture

Conversation | Utterance | Actor | Intention | Stream

In this project, we aim to keep the codebase as compact as possible. All data types are defined in a single file, types.py, while the rest of the project is dedicated to implementing algorithms. The Conversation abstraction plays a central role.

The main loop of the program manages Actors, who add utterances to the stack of existing ones. The entire design emulates Free Monad evaluation, with Utterance representing the Free Monad itself. Most of the monad constructors are represented as flags within the Intention part of the Utterance. By using these flags, an actor can request the introduction of additional actors into the conversation.
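A hypothetical, heavily simplified sketch of this loop is shown below. The class and field names are illustrative; the real definitions in types.py and the real scheduling logic may differ:

```python
# Toy model of the actor loop: actors append Utterances to a Conversation,
# and Intention flags let one actor hand the turn to another.
from dataclasses import dataclass, field

@dataclass
class Intention:
    actor_next: str = ""      # flag: which actor should speak next, if any

@dataclass
class Utterance:
    actor: str                # who produced this utterance
    contents: str
    intention: Intention = field(default_factory=Intention)

@dataclass
class Conversation:
    utterances: list = field(default_factory=list)

def main_loop(actors, conversation, current, steps):
    """Let actors take turns appending utterances to the conversation."""
    for _ in range(steps):
        utterance = actors[current](conversation)  # actor sees the conversation
        conversation.utterances.append(utterance)
        current = utterance.intention.actor_next or current
    return conversation

# Toy actors: a user that asks, and a model that echoes the last utterance.
def user(conv):
    return Utterance("user", "Tell me about monkeys", Intention(actor_next="model"))

def model(conv):
    last = conv.utterances[-1]
    return Utterance("model", "You said: " + last.contents, Intention(actor_next="user"))

conv = main_loop({"user": user, "model": model}, Conversation(), "user", 2)
```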

The user-facing terminal actor utilizes the same API to generate utterances during the interpretation of input language. The language parser is generated by the Lark library from a predefined grammar.

Each actor receives a read-only view of the Conversation, identifies the related Utterance, and then takes responsibility for decoding it into the appropriate third-party format, computing the response, and encoding it back into the Utterance. A popular choice is the {'role':'system'|'assistant'|'user', 'content': str} structure used by the OpenAI API.
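As an illustration of such an encoding step, the sketch below maps (actor, text) pairs onto the OpenAI-style message structure quoted above; the function name and role mapping are assumptions, not the project's actual code:

```python
# Illustrative only: convert (actor, text) pairs into the OpenAI-style
# {'role': ..., 'content': ...} message list.
def encode_for_openai(utterances):
    role_map = {"user": "user", "model": "assistant", "system": "system"}
    return [{"role": role_map[actor], "content": text}
            for actor, text in utterances]

messages = encode_for_openai([
    ("system", "You are a helpful assistant."),
    ("user", "Tell me about monkeys"),
])
```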

Vim integration

Aicli is supported by the Litrepl text processor.


Roadmap

  • Core functionality:

    • OpenAI graphic API models
    • Anthropic API
    • OpenAI tooling API subset
    • Advanced scripting: functions
  • Usability:

    • Command completion in terminal.
    • /shell running a system shell command.
    • /set terminal width INT for limiting text width for better readability.
    • /set terminal prompt STR for setting readline command-line prompt.
    • /edit for running an editor.
    • /set model alias REF for setting a short name for a model.
    • Encode actor errors into the conversation.
    • Session save/load.

Project details


Download files


Source Distribution

sm_aicli-2.1.1.tar.gz (30.9 kB)

Uploaded Source

Built Distribution

sm_aicli-2.1.1-py3-none-any.whl (26.7 kB)

Uploaded Python 3

File details

Details for the file sm_aicli-2.1.1.tar.gz.

File metadata

  • Download URL: sm_aicli-2.1.1.tar.gz
  • Upload date:
  • Size: 30.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.11.9

File hashes

Hashes for sm_aicli-2.1.1.tar.gz
Algorithm Hash digest
SHA256 a3ec3f3dfa690a211b5becd2b32cd4aa3d48272c0dbcc6d226fbf824c2bac7b3
MD5 0d1075c86ca6f39460b6ca10ce720b53
BLAKE2b-256 b09c008e3aaf9f844ef2a6a5c22ec082d5060c1938e31fdd2e340ce737f5d6b5


File details

Details for the file sm_aicli-2.1.1-py3-none-any.whl.

File metadata

  • Download URL: sm_aicli-2.1.1-py3-none-any.whl
  • Upload date:
  • Size: 26.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.11.9

File hashes

Hashes for sm_aicli-2.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 cbc997783e8aeb5e190b84a1c541535e00e555ae784f2adcf4e358a632e09e19
MD5 bc145bf14a02fc03baf66ce527bb0290
BLAKE2b-256 a8928dc305c7ed196e0324e9b4ccf478b9906e1bf60ce8c80dd851ca5acaa4f2

