A command-line interface for language models
llme, a CLI assistant for OpenAI-compatible chat servers
A simple, single-file command-line chat client compatible with the OpenAI API.
(or "I just want to quickly test my model hosted with llama.cpp but don't want to spin up openwebui")
Features
- OpenAI API Compatible: Works with any self-hosted LLM platform that supports OpenAI chat completions API.
- Extremely simple: Single file, no installation required (but installation is still available).
- Command-line interface: Run it from the terminal.
- Tools included: Ask it to act on your file system and edit files (yolo).
The basic idea is that LLMs are trained on code and OS configuration and have already (machine) learned to pick the probable tools to use and actions to take. Therefore, there is no need to teach them to use made-up functions and tools with bad JSON schemas. Just give them a shell and a Python interpreter, and let you (only) live (once).
Use it as a helpful (dummy) assistant to inspect configuration and source code, run commands, and edit files.
Installation
Quick-start a local LLM server if you don't have one already
Example with llama.cpp if you use homebrew. Look at https://github.com/ggerganov/llama.cpp for other options
brew install llama.cpp
llama-server -hf unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF --ctx-size 0 --jinja
Example with ollama. Look at https://ollama.com/download for other options
curl -fsSL https://ollama.com/install.sh | sh
ollama pull qwen3-coder:30b
Qwen3-Coder-30B is a nice model. Smaller models can also work. See the benchmark for a comparison.
llme
Choose your preferred installation or execution method.
Install from PyPI (possibly an old version)
pipx install llme-cli
llme --help
Install from GitHub directly (latest dev version)
pipx install -f git+https://github.com/privat/llme.git
llme --help
Clone then install in development mode
git clone https://github.com/privat/llme.git
pipx install -e ./llme
llme --help
Clone and run from source (no installation)
git clone https://github.com/privat/llme.git
pip install -r llme/requirements.txt
./llme/llme/main.py --help
Usage
Run an interactive chat session
llme --base-url "http://localhost:8080/v1" # for default llama-server (llama.cpp)
llme --base-url "http://localhost:11434/v1" # for default ollama server
or, if you want to use a specific model
llme --base-url "http://localhost:8080/v1" --model "unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF"
Ctrl-C to interrupt a response (or exit).
Set up a config (optional, but recommended):
Edit ~/.config/llme/config.toml
Look at config.toml for an example.
More about options and configs below.
From now on, I assume there is a config file...
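A minimal config might look like this (a sketch only; the valid keys are the bracketed names shown in --help below, e.g. base_url for --base-url, so adjust the values to your server):
# ~/.config/llme/config.toml
base_url = "http://localhost:8080/v1"  # llama-server default; use http://localhost:11434/v1 for ollama
model = "qwen3-coder:30b"              # optional: a default model is picked from the server if unset
temperature = 0.7                      # optional sampling temperature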
Run one-shot queries
Each prompt is run in order in the same chat session.
llme "What is the capital of France?" \
"What the content of the current directory?" \
"What is the current operating system?" \
"What is the factorial of 153?" \
"What is the weather at Tokyo right now?"
You can also pipe the query:
echo "What is the capital of France?" | llme
Note that interactive sessions are often better because, if needed, the model starts loading when the command launches, so it loads while you type your prompt.
They also avoid issues with escaping " or '.
Tools included
The LLM has direct access to your shell (and files) and a python interpreter. The user is asked for confirmation before executing any command. Beware, some LLMs might be very persistent and persuasive in running dangerous commands. Do not trust the LLM blindly!
If you choose not to execute a command, it will be skipped, and you can provide an explanation to the LLM or ask for a better command.
Some LLMs might insist on not using a tool, ask the user to do it manually, or just simulate the action. Better prompt engineering might help. Proposals to improve the default system prompt are always welcome.
Inspect content of files or stdin
ps aux | llme "Which process consumes the most memory?"
You can also use file paths as assets to a prompt:
llme "how many regular users and regular groups are there in these files?" /etc/passwd /etc/group
Note: the file content and the path will be given to the LLM.
Inspect images (for multimodal models)
Same as for files, but with images — duh, images are files!
llme "What is in this image?" < image.png
You can still use paths:
llme "What is in this image?" image.png
Run yolo
Note: no warranty, yada yada, etc. llme can just kill your OS and cats. Do not run the following command without understanding what it does.
sudo llme --batch --yolo "Distupgrade the system. You are root! Do as you wish."
Options (and config)
$ llme --help
usage: llme [options...] [prompts...]
OpenAI-compatible chat CLI.
positional arguments:
prompts An initial list of prompts
options:
-h, --help show this help message and exit
-u, --base-url BASE_URL
API base URL [base_url]
-m, --model MODEL Model name [model]
--list-models List available models then exit
--api-key API_KEY The API key [api_key]
-b, --batch Run non-interactively. Implicit if stdin is not a tty
[batch]
-p, --plain No colors or tty fanciness. Implicit if stdout is not
a tty [plain]
--bulk Disable stream-mode. Not that useful but it helps
debugging APIs [bulk]
-o, --chat-output CHAT_OUTPUT
Export the full raw conversation in json
-i, --chat-input CHAT_INPUT
Continue a previous (exported) conversation
--export-metrics EXPORT_METRICS
Export metrics, usage, etc. in json
-s, --system SYSTEM_PROMPT
System prompt [system_prompt]
--temperature TEMPERATURE
Temperature of predictions [temperature]
--tool-mode {markdown,native}
How tools and functions are given to the LLM
[tool_mode]
-c, --config CONFIG Custom configuration files
--list-tools List available tools then exit
--dump-config Print the effective config and quit
--plugin PLUGINS Add additional tool (python file or directory)
[plugins]
-v, --verbose Increase verbosity level (can be used multiple times)
-Y, --yolo UNSAFE: Do not ask for confirmation before running
tools. Combine with --batch to reach the singularity.
--version Display version information and quit
Note: Run a fresh --help in case I forgot to update this README.
All options with names in brackets can be set in the config file (base_url for --base-url).
They can also be set by environment variables (LLME_BASE_URL for --base-url).
For each option, the precedence order is the following:
- The explicit option on the command line (highest precedence)
- The explicit config files (given by --config), in reverse order (last wins)
- The environment variables (LLME_SOMETHING)
- The user configuration file (~/.config/llme/config.toml)
- The system configuration file provided by the package (lowest precedence)
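For example, following the LLME_SOMETHING naming (a sketch; LLME_MODEL is assumed from the pattern above):
LLME_MODEL="qwen3-coder:30b" llme "hello"                     # the environment variable overrides the config file
LLME_MODEL="qwen3-coder:30b" llme -m "other-model" "hello"    # but -m on the command line wins over both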
Slash Commands
Special commands can be executed during the chat.
They start with a / and can be used whenever a prompt is expected (interactively or on the command line).
The command /help shows the available slash commands.
$ llme /help /quit
/models list available models
/tools list available tools
/metrics list current metrics
/retry cancel and regenerate the last assistant message
/undo cancel the last user message (and the response)
/edit run EDITOR on the chat (save,editor,load)
/save FILE save chat
/load FILE load chat
/config list configuration options
/set OPT=VAL change a config option
/quit exit the program
/help show this help
Note: Run a fresh /help in case I forgot to update this README.
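Slash commands also compose with regular prompts on the command line, for example (a sketch, assuming /set takes the same option names as the config file):
llme "/set temperature=0" "What is the factorial of 153?" "/save factorial.json" /quit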
Library, plugin system, and custom tools
Important: the API is far from stable.
llme is usable as a library, so you can reuse its features.
For now, the main advantage of importing llme is to add new custom tools usable by the LLM.
You can turn a Python function into a tool with the @llme.tool decorator.
Look at weather_plugin.py for an example.
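To give the idea, a hypothetical plugin might look like the sketch below (only the @llme.tool name comes from above; everything else is an assumption, so check weather_plugin.py for the real conventions):
# hypothetical disk_plugin.py; the exact conventions may differ, see examples/weather_plugin.py
import subprocess

import llme


@llme.tool
def disk_usage(path: str) -> str:
    """Return the disk usage of a directory in human-readable form."""
    # (assumption) the function name, signature, and docstring presumably describe the tool to the LLM
    return subprocess.run(["du", "-sh", path], capture_output=True, text=True).stdout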
Usage:
Run the weather plugin as a standalone program (it disables all LLM tools except the weather one).
./examples/weather_plugin.py 'Will it rain tomorrow in Paris?'
Use llme with the --plugin option to add one or more plugins and bring in all their tools.
llme --plugin examples/weather_plugin.py 'Will it rain tomorrow in Paris?'
Or whole directories!
llme --plugin examples 'Will it rain tomorrow in Paris?'
Development
I do not like Python, nor LLMs, but I needed something simple to test things quickly and play around. My goal is to keep this simple and minimal: it should fit into a single file and still be manageable.
PRs are welcome!
TODO
- OpenAI API features
- API token (untested)
- list models
- stream mode
- bulk mode (non stream mode)
- thinking mode
- multimodal
- attached files
- attached images
- ?
- Tools
- markdown tools
- native tools
- run shell command
- run Python code
- user-defined tools
- sandboxing
- whitelist/blacklist
- User interface & features
- readline
- better prompt & history
- braille spinner
- model warmup
- save/load conversation
- export metrics/usage/statistics
- slash commands
- undo/retry/edit
- better tool reporting
- Customization and models
- config files
- config with env vars
- type check / conversion
- plugin system
- better tool selection
- temperature
- other hyper parameters
- handle non-conforming thinking & tools
- detect model features (is that even possible?)
- bench system & reporting
- Code quality
- docstring and comments
- small code base
- small methods
- logging
- test suites
- better separation of CLI and LLM
- better libification
- Misc
- README
- TODO list :p
- build file
- PyPI package
- plugin example
- ?
OpenAI API
The two HTTP routes used by llme are:
- $base_url/models (https://platform.openai.com/docs/api-reference/models) for --list-models (and to get a default model when --model is empty)
- $base_url/chat/completions (https://platform.openai.com/docs/api-reference/chat) for the main job. Streaming (https://platform.openai.com/docs/api-reference/chat-streaming) is used by default; it can be disabled with --bulk, mainly for debugging weird APIs.
Images are uploaded as content parts, for multimodal models.
Tools are integrated either with --tool-mode=native for the native function-calling API (https://platform.openai.com/docs/guides/function-calling), or with --tool-mode=markdown, a custom approach intended for models that do not support it (or perform poorly with it).
Custom tools can also be provided; see the --plugin option.
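For reference, the main request sent to $base_url/chat/completions is a plain chat completion, roughly like the following curl call (a generic sketch of the OpenAI-compatible route, not llme's exact payload):
curl "$base_url/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $api_key" \
  -d '{"model": "qwen3-coder:30b", "stream": true, "messages": [{"role": "user", "content": "Hello"}]}'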
Issues
- The various OpenAI-compatible servers and models implement different subsets of the API. Compatibility is being worked on, and there are fewer random 4xx or 5xx responses. Major local LLM servers and models were tested. See the benchmark.
- Models are really sensitive to prompts and system prompts, but you can create a custom config file for each.
- Models are really sensitive to how the messages are structured, unfortunately that is currently hardcoded in the program. I do not want to hard-code many tweaks and workarounds. :(
Thanks
- openwebui for inspiration, but it is too complex and web-oriented.
- gptme for another inspiration, but it is also too complex and targets mostly non-local LLMs.
- openai-cli for a simpler approach that I built on top of.
- llama.cpp, nexa-sdk and others for your great work.