A CLI tool for accessing OpenAI and Mistral AI LLMs
Project description
GPTCLI
GPTCLI is a CLI client written entirely in Python for accessing the LLM of your choice without the need for Web or Desktop apps. All data is encrypted at rest using AES-256-GCM.
How to run it
Summary:
- PyPI:
  - Install with `pip install dbc-gptcli`.
  - Run `gptcli [mistral|openai] [chat|se|ocr]`.
- Docker:
  - Pull with `docker pull deathbychocolate/gptcli:latest`.
  - Start a container with `docker run --rm -it --entrypoint /bin/bash deathbychocolate/gptcli:latest`.
  - Run `gptcli [mistral|openai] [chat|se|ocr]`.

For more info on usage, check the built-in help docs with `gptcli -h` or `gptcli [mistral|openai] [chat|se|ocr] -h`.
Command tree
gptcli [--no-cache] [--loglevel LEVEL]
├── all
│ ├── encrypt # Encrypt all cleartext files
│ ├── decrypt # Decrypt all encrypted files
│ ├── rekey # Re-encrypt with a new passphrase
│ └── nuke # Permanently delete all gptcli data
├── mistral
│ ├── chat # Multi-turn conversation
│ ├── se # Single exchange
│ ├── ocr # Document to Markdown conversion
│ └── search
│ ├── chat # Full-text search over chat history
│ └── ocr # Full-text search over OCR history
└── openai
├── chat # Multi-turn conversation
├── se # Single exchange
└── search
└── chat # Full-text search over chat history
How to get an API key
You need valid API keys to communicate with the AI models.
For OpenAI:
- Create an OpenAI account here: https://chat.openai.com/
- Generate an OpenAI API key here: https://platform.openai.com/api-keys
For Mistral AI:
- Create a Mistral AI account here: https://chat.mistral.ai/chat
- Generate a Mistral AI API key here: https://console.mistral.ai/api-keys
How it works
The project uses the APIs of the LLM providers to perform chat completions: message objects are converted to JSON payloads and sent over HTTPS POST requests.
GPTCLI facilitates access to two LLM providers, Mistral AI and OpenAI. Each provider offers modes to communicate with the LLM of your choosing: Chat, Single-Exchange, and OCR (Mistral AI only).
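As a rough sketch of what such a request looks like, the snippet below builds (but does not send) a chat-completion POST. The endpoint, model name, and header layout follow OpenAI's public chat-completions API; the key is a placeholder, and none of this is GPTCLI's actual internal code:

```python
import json
import urllib.request

# One message object for a single user turn; "gpt-4o-mini" is an
# illustrative model name, not necessarily what GPTCLI defaults to.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
}

request = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <YOUR_API_KEY>",  # placeholder key
    },
    method="POST",
)

# urllib.request.urlopen(request) would actually send it; we stop here.
print(request.method, request.full_url)
```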
Modes
Chat
Chat mode allows the user to have a ChatGPT-style conversation by creating a MESSAGE-REPLY thread. For example, you say hello:
You can also have multiline conversations:
You can load the last conversation you had with the LLM provider (OpenAI, Mistral):
And if you want to know about in-chat commands, you can view them by asking for help:
Chat mode also automatically:
- Stores chats locally as one-line `json` files via the `--store` and `--no-store` flags.
- Uses previously sent messages as context via the `--context` and `--no-context` flags.
- Loads the provider's API key; you may override this behaviour by providing a different key with the `--key` flag.
Single-Exchange (SE)
Single-Exchange is functionally similar to Chat mode, but it allows only one exchange of messages (one message sent from the client and one response from the server) before exiting. This encourages loading all context and instructions into a single message. It is also better suited to automating multiple API calls with different payloads and flags. This mode will show you output similar to the following:
This mode does not store chats locally; users are expected to implement their own solution via piping or similar.
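Because each single exchange is one self-contained process, batching can be scripted with a small driver. The sketch below only assembles the invocations; passing the prompt as a trailing argument is an assumption about the CLI's interface, so check `gptcli openai se -h` for the real shape:

```python
import subprocess  # noqa: F401  (used in the commented-out execution line)

prompts = [
    "Summarize RFC 2119 in one sentence.",
    "What does the acronym OCR stand for?",
]

# One `gptcli openai se` invocation per prompt.
commands = [["gptcli", "openai", "se", prompt] for prompt in prompts]

for argv in commands:
    print(" ".join(argv))
    # subprocess.run(argv, capture_output=True, text=True) would execute it
    # and let you capture stdout for your own storage or piping.
```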
OCR (Optical Character Recognition)
OCR mode converts documents (PDFs, images) into Markdown text. Currently available for Mistral AI only. It accepts local filepaths and/or URLs as arguments, or a batch of documents via --filelist. By default, results are saved as Markdown files in the current directory.
OCR mode also automatically:
- Stores OCR results locally via the `--store` and `--no-store` flags.
- Saves converted Markdown files to the current directory; you may change this with `--output-dir` or disable it with `--no-output-dir`.
- Supports batch processing from a file of paths/URLs via the `--filelist` flag.
- Displays the Markdown result to stdout via the `--display` flag.
- Displays the most recent OCR session from storage via the `--display-last` flag.
- Excludes images from the OCR response via the `--no-images` flag, returning only Markdown text.
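Under the hood this maps onto Mistral's OCR API. The payload below is a sketch of the kind of request involved; the model name and field layout are assumptions based on Mistral's public API documentation, not GPTCLI internals:

```python
import json

# One OCR request for a document reachable by URL; local files would
# instead be uploaded or inlined (e.g. as base64) first.
payload = {
    "model": "mistral-ocr-latest",  # assumed model name
    "document": {
        "type": "document_url",
        "document_url": "https://example.com/report.pdf",  # hypothetical file
    },
    # Roughly mirrors what the --no-images flag would request.
    "include_image_base64": False,
}

print(json.dumps(payload, indent=2))
```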
Search (Full-Text Search)
Search mode provides an interactive TUI for full-text search over locally stored history. It is available for both providers under chat, and for Mistral AI also under ocr.
gptcli mistral search chat # Search Mistral chat history
gptcli openai search chat # Search OpenAI chat history
gptcli mistral search ocr # Search Mistral OCR history
Type to filter results in real time. Chat search navigation and actions:
| Key | Action |
|---|---|
| ↑ / ↓ | Move selection |
| PgUp / PgDn | Page through results |
| Enter | Load session into chat |
| Ctrl+P | Print session to stdout |
| Ctrl+U | Clear the search query |
| Esc | Quit |
OCR search navigation and actions:
| Key | Action |
|---|---|
| ↑ / ↓ | Move selection |
| PgUp / PgDn | Page through results |
| Enter | Print result to stdout |
| Ctrl+W | Write result to a file |
| Ctrl+U | Clear the search query |
| Esc | Quit |
For OCR search, the output directory for Ctrl+W can be set with --output-dir (defaults to .).
Encryption
GPTCLI encrypts all data at rest using AES-256-GCM with scrypt key derivation. On first run, you are prompted to create a passphrase (16 characters minimum). The derived encryption key is cached for 12 hours using a wrapping key in volatile storage, so you don't need to re-enter your passphrase on every invocation.
You can manage encryption across all providers with:
- `gptcli all encrypt`: Encrypt all cleartext files.
- `gptcli all decrypt`: Decrypt all encrypted files.
- `gptcli all rekey`: Re-encrypt all files with a new passphrase.
Use the --no-cache flag to disable key caching and prompt for the passphrase every time.
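For illustration, the key-derivation step can be sketched with the Python standard library. The scrypt cost parameters below are common defaults assumed for the sketch, not GPTCLI's actual values; the 32-byte output is the key size AES-256-GCM requires:

```python
import hashlib
import os

passphrase = b"correct horse battery staple!!!!"  # >= 16 characters
salt = os.urandom(16)  # stored alongside the ciphertext; not secret

# Derive a 32-byte (256-bit) key; n/r/p are assumed cost parameters.
key = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)
assert len(key) == 32  # exactly what an AES-256-GCM key needs

# The same passphrase + salt always yields the same key, which is what
# allows previously encrypted files to be decrypted in a later session.
key_again = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)
assert key == key_again
```

Caching this derived key (wrapped, in volatile storage) for 12 hours is what spares you from re-typing the passphrase on every invocation.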
Features
Implemented
- Send text-based messages to the Mistral AI API.
- Send text-based messages to the OpenAI API.
- Store API keys locally.
- Allow context retention for chats with all providers.
- Allow streaming of text-based messages for all providers.
- Allow storage of chats locally for all providers.
- Allow loading of chats from local storage as context for all providers.
- Add in-chat commands.
- Add multiline option for chat mode.
- Add spinner animation for chat mode.
- Add OCR as a new mode.
- Send OCR queries for images and PDF documents to Mistral AI API.
- Allow storage of OCR results locally.
- Add encryption at rest for all locally stored data.
- Add key caching with automatic expiry.
- Add passphrase rekeying.
- Add FTS for chats in storage.
- Add FTS for OCR results in storage.
- Add nuke command to permanently delete all gptcli data.
In Development
- Send OCR queries for images and PDF documents to the OpenAI API.
- Add role-based messages for Mistral AI: `user`, `system`, `assistant`, `developer`, `tool`, `function`.
- Add role-based messages for OpenAI: `user`, `system`, `assistant`, `developer`, `tool`, `function`.
Lexicon
| Abbreviation | Definition |
|---|---|
| OCR | Optical Character Recognition |
| SE | Single-Exchange |
| FTS | Full-Text Search |
How GPTCLI is different from other clients
- GPTCLI does not use any software developed by OpenAI or Mistral AI, except for counting tokens.
- GPTCLI prioritizes features that make the CLI useful and easy to use.
- GPTCLI aims to eventually have all the features of its WebApp counterparts in the terminal.
Download files
Source Distribution
Built Distribution
File details
Details for the file dbc_gptcli-0.26.0.tar.gz.
File metadata
- Download URL: dbc_gptcli-0.26.0.tar.gz
- Upload date:
- Size: 117.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 70fb78b0d90a4457e8a1b0d33863a1c1e881a63d6aaf758a9e0c068eed89e7e4 |
| MD5 | 2d5814aa9c6b97039a38702aeab98aaa |
| BLAKE2b-256 | 2a8f5675967df1e81e0d8253dad59147f52499ff6b42eeca53be6b836bdb8f15 |
Provenance
The following attestation bundles were made for dbc_gptcli-0.26.0.tar.gz:
Publisher: release.yml on deathbychocolate/gptcli
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: dbc_gptcli-0.26.0.tar.gz
- Subject digest: 70fb78b0d90a4457e8a1b0d33863a1c1e881a63d6aaf758a9e0c068eed89e7e4
- Sigstore transparency entry: 1110877749
- Sigstore integration time:
- Permalink: deathbychocolate/gptcli@a58c65dda34569dc77b25dfb9e850eee327c1ffd
- Branch / Tag: refs/heads/main
- Owner: https://github.com/deathbychocolate
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@a58c65dda34569dc77b25dfb9e850eee327c1ffd
- Trigger Event: push
File details
Details for the file dbc_gptcli-0.26.0-py3-none-any.whl.
File metadata
- Download URL: dbc_gptcli-0.26.0-py3-none-any.whl
- Upload date:
- Size: 133.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5ce175e08c2a1ab0c8772e4859cbc402f351480af1b7c1330fdca9ac2d6789ab |
| MD5 | 0f0efc2925daf5f6fc7b48f3568dadf8 |
| BLAKE2b-256 | 17c0ddbd9c87e62e4b15217e8cd0e67d88e5d465de724000ee47f8ea3a6a0366 |
Provenance
The following attestation bundles were made for dbc_gptcli-0.26.0-py3-none-any.whl:
Publisher: release.yml on deathbychocolate/gptcli
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: dbc_gptcli-0.26.0-py3-none-any.whl
- Subject digest: 5ce175e08c2a1ab0c8772e4859cbc402f351480af1b7c1330fdca9ac2d6789ab
- Sigstore transparency entry: 1110877830
- Sigstore integration time:
- Permalink: deathbychocolate/gptcli@a58c65dda34569dc77b25dfb9e850eee327c1ffd
- Branch / Tag: refs/heads/main
- Owner: https://github.com/deathbychocolate
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@a58c65dda34569dc77b25dfb9e850eee327c1ffd
- Trigger Event: push