Terminal client for OpenAI chat completions
Project description
bsy-clippy
bsy-clippy is a lightweight Python client for the OpenAI Chat Completions API (and compatible deployments).
It supports both batch (stdin) mode for one-shot prompts and interactive mode for chatting directly in the terminal.
You can also load system prompts from a file to guide the LLM’s behavior.
Features
- Speaks to the OpenAI Chat Completions API (or any compatible base URL).
- Loads credentials from `.env` (`OPENAI_API_KEY`) using `python-dotenv`.
- Reads defaults (profile, base URL, IP/port overrides, model) from `bsy-clippy.yaml`.
- Toggle endpoints by editing `api.profile` in `bsy-clippy.yaml` or passing `--profile` on the CLI.
- Defaults to:
  - Base URL: `http://172.20.0.100:11434/v1` (profile `ollama`)
  - Model: `qwen3:1.7b`
  - Mode: `stream` (see `--mode` to switch)
  - Bundled system prompt file that can be overridden with `--system-file`
- Configurable parameters (a combined example follows this feature list):
  - `-b` / `--base-url` → explicit API endpoint
  - `-i` / `--ip` and `-p` / `--port` → override host/port when targeting compatible servers
  - `-M` / `--model` → model name
  - `-m` / `--mode` → output mode (`stream` or `batch`)
  - `-t` / `--temperature` → sampling temperature (default: `0.7`)
  - `-s` / `--system-file` → path to a text file with system instructions
  - `-u` / `--user-prompt` → extra user instructions prepended before the data payload
  - `-r` / `--memory-lines` → number of conversation lines to remember in interactive mode
  - `-c` / `--chat-after-stdin` → process stdin once, then drop into interactive chat
- Two modes of operation:
- Batch mode → waits until the answer is complete, then prints only the final result.
- Stream mode (default) → shows response in real-time, tokens appear as they are generated.
- Colored terminal output:
- Yellow = streaming tokens (the model’s “thinking” in progress).
- Default terminal color = final assembled answer.
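Several of these flags can be combined in a single invocation; the prompt and flag values below are illustrative only:

```bash
# One-shot prompt with an explicit model, batch output, a low temperature,
# and an extra user instruction prepended to the piped data:
echo "Summarize this" | bsy-clippy -M qwen3:1.7b -m batch -t 0.2 -u "Be concise:"
```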
Installation
pipx (recommended)
```bash
pipx install .
```

After updating the source, reinstall with `pipx reinstall bsy-clippy`.
pip / virtual environments
```bash
pip install .
```
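For an isolated environment, the standard `venv` workflow applies (nothing here is specific to bsy-clippy):

```bash
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
pip install .
```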
Configuration
API credentials (.env)
Create a `.env` file next to where you run `bsy-clippy` and add your key:

```
OPENAI_API_KEY=sk-...
```

The CLI loads this automatically via `python-dotenv`; environment variables from your shell work too.
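For example, exporting the key in your shell before launching has the same effect:

```bash
# Assumes the same OPENAI_API_KEY variable the .env file would provide:
export OPENAI_API_KEY=sk-...
bsy-clippy
```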
YAML defaults (bsy-clippy.yaml)
`bsy-clippy.yaml` selects which profile to use and what settings belong to it. The packaged example ships with an Ollama profile enabled and an OpenAI profile commented out for reference:
```yaml
api:
  profile: ollama
  profiles:
    ollama:
      base_url: http://172.20.0.100:11434/v1
      model: qwen3:1.7b
    # openai:
    #   base_url: https://api.openai.com/v1
    #   model: gpt-4o-mini
```
Change `profile` (or pass `--profile openai`) to switch endpoints, or add more entries under `profiles` for additional deployments.
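Switching at runtime is then a single flag (this assumes the `openai` profile above has been uncommented):

```bash
bsy-clippy --profile openai
```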
Usage
System prompt file
By default, bsy-clippy loads a bundled prompt (`Be very brief. Be very short.`).
You can change this with `--system-file` or disable it via `--no-default-system`.
Example `bsy-clippy.txt`:

```text
You are a helpful assistant specialized in cybersecurity.
Always explain your reasoning clearly, and avoid unnecessary markdown formatting.
```
These lines will be sent to the LLM before every user prompt.
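Either behavior is selected on the command line; the file path matches the example above:

```bash
bsy-clippy --system-file bsy-clippy.txt   # load a custom system prompt
bsy-clippy --no-default-system            # skip the bundled default prompt
```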
User prompt parameter
Use `--user-prompt "Classify the following log:"` when piping data so the model receives, in order:

1. the system prompt (if any)
2. the user prompt text
3. the data from stdin or interactive input
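A concrete pipeline (the log file name is only an example):

```bash
cat auth.log | bsy-clippy --user-prompt "Classify the following log:"
```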
Interactive memory
Set `--memory-lines 6` (or `-r 6`) to keep the last six conversation lines (user + assistant) while chatting.
Only the final assistant reply (not the thinking traces) is stored and sent back on the next turn.
Chat after stdin
Use `-c` / `--chat-after-stdin` to process piped data first and then remain in interactive mode with the response (and any configured memory) available:

```bash
cat sample.txt | bsy-clippy -u "Summarize this report" -r 6 -c
```
After the initial answer prints, you can continue the conversation while the tool remembers the piped data and the model’s reply.
Interactive mode (default = stream)
Run without piping input:
```bash
bsy-clippy
```
A streaming session looks like this:

```text
You: Hello!
LLM (thinking): <think>
Reasoning step by step...
</think>
Hello! How can I assist you today? 😊
```
Prefer a single print at the end? Switch to batch mode:
```bash
bsy-clippy --mode batch
```
Batch output:
```text
You: Hello!
Hello! How can I assist you today? 😊
```
Batch mode (stdin)
Pipe input directly:
echo "Tell me a joke" | bsy-clippy
Output:
```text
Why don’t scientists trust atoms? Because they make up everything!
```
Forcing modes
```bash
bsy-clippy --mode batch    # wait for the complete answer, print once
bsy-clippy --mode stream   # print tokens as they arrive (default)
```
Adjusting temperature
```bash
bsy-clippy --temperature 0.2   # lower = more deterministic output
bsy-clippy --temperature 1.2   # higher = more varied output
```
Custom server and model
```bash
bsy-clippy --base-url http://127.0.0.1:11434/v1 --model llama2
```
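The `-i` / `--ip` and `-p` / `--port` flags from the feature list offer an alternative way to target a compatible server; this sketch assumes they compose into the same local endpoint as above:

```bash
bsy-clippy --ip 127.0.0.1 --port 11434 --model llama2
```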
Requirements
See `requirements.txt`.
Project details
File details
Details for the file `bsy_clippy-0.2.2.tar.gz`.
File metadata
- Download URL: bsy_clippy-0.2.2.tar.gz
- Upload date:
- Size: 18.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `6af432705479aa1b44fbd9e3f22101cf3ac6f1a42a27e7c252421160d0ecdef3` |
| MD5 | `b615390c792e50afe3f152e165482948` |
| BLAKE2b-256 | `3db64b6b79d655c1f6e5d3318256d17c79ef4bdbd89663238465598806b2bebe` |
File details
Details for the file `bsy_clippy-0.2.2-py3-none-any.whl`.
File metadata
- Download URL: bsy_clippy-0.2.2-py3-none-any.whl
- Upload date:
- Size: 17.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `411ea5948da0b4ef059c6e47c039dc976eedab0d62cee47c81104f239a87eba7` |
| MD5 | `e7cc91b297648a9decdd398dce9f3d06` |
| BLAKE2b-256 | `96b6253b3b87199d536c2f20acc6edd43196a6f1a00867cd620c694e2ce1beb6` |