# k-ai

A next-generation, highly configurable, and user-friendly command-line interface for interacting with Large Language Models.

k-ai is a terminal-first LLM chat system with persistent sessions, runtime transparency, live config mutation, internal tools, and a Python package API. It is designed around one principle: the chat loop, the slash commands, and the programmatic API should all act on the same session/config/runtime model.
## Core Model

```
┌─────────────────────────────────────┐
│          Built-in Defaults          │
│    src/k_ai/defaults/defaults.d/    │
└─────────────────┬───────────────────┘
                  │ merge
┌─────────────────▼───────────────────┐
│            ConfigManager            │
│  override file + live edits + CLI   │
└─────────────────┬───────────────────┘
                  │
┌─────────────────▼───────────────────────────────────┐
│                    ChatSession                      │
│   prompt loop · tools · digest · compaction · UI    │
└───────────────┬───────────────────────┬─────────────┘
                │                       │
┌───────────────▼──────────┐    ┌───────▼────────┐
│       SessionStore       │    │   MemoryStore  │
│ ~/.k-ai/sessions/*.jsonl │    │ ~/.k-ai/MEMORY │
└──────────────────────────┘    └────────────────┘
```
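The merge step in the diagram layers the override file and live edits on top of the built-in defaults. As a rough sketch (this `deep_merge` helper is illustrative, not k-ai's actual implementation), the layering behaves like a recursive dict merge where later layers win:

```python
# Illustrative recursive merge: later layers override earlier ones,
# nested dicts are merged key by key rather than replaced wholesale.
def deep_merge(base: dict, override: dict) -> dict:
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

# Hypothetical layers mirroring the diagram: defaults, then the override
# file, then CLI/live edits.
defaults = {"models": {"provider": "openai", "temperature": 0.7}}
override_file = {"models": {"temperature": 0.2}}
cli_edits = {"models": {"model": "gpt-4o"}}

merged = deep_merge(deep_merge(defaults, override_file), cli_edits)
print(merged["models"])  # → {'provider': 'openai', 'temperature': 0.2, 'model': 'gpt-4o'}
```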
## Features

- Persistent chat sessions with `summary`, `themes`, and `session_type`.
- Rich runtime transparency: provider, model, auth mode, token source, context window, compaction threshold, limits.
- Human-in-the-loop tool approvals with per-tool governance.
- Full config management from chat, slash commands, or Python.
- Sandboxed Python and shell tools.
- QMD-backed history/document retrieval restricted to the `k-ai` session collection when appropriate.
- Robust interruption handling for prompt input, generation, and tool execution.
- Split default config fragments with cached loading for better maintainability and lower parse overhead.
## Problem-First Docs

Long-form architecture docs now live in the standalone docs site:

- Live docs site: kpihx.github.io/k-ai-docs
- Docs source repo: github.com/KpihX/k-ai-docs
- Local docs entrypoint: `docs/README.md`
They are written in the same spirit as tutos_live:
- problem first
- real examples
- ASCII diagrams
- request payload examples
- session / memory / tool-governance workflows
## Quick Start

```shell
git clone https://github.com/kpihx/k-ai.git
cd k-ai
make install
k-ai chat
```
Installation profiles:

- editable defaults: `install/install.yaml`
- installer docs: `install/README.md`

Installer behavior highlights:

- interactive by default, with explicit choices shown for each meaningful case
- prefers `uv` when available
- if `uv` is missing, proposes installing it
- if `uv` is declined, falls back to an isolated `k-ai` bootstrap virtualenv instead of polluting the system Python
- asks which live capability families should start enabled: `exa`, `python`, `shell`, `qmd`
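As a simplified sketch (the real logic lives in `scripts/install.sh` and is interactive), the `uv` preference and fallback amount to:

```shell
# Sketch only: prefer uv when present, otherwise fall back to an isolated
# bootstrap virtualenv. Not the actual scripts/install.sh logic.
if command -v uv >/dev/null 2>&1; then
  echo "installing with uv"
else
  echo "bootstrapping an isolated k-ai virtualenv"
fi
```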
You can keep the default interactive install, explicitly target the default profile, or point to your own:

```shell
./scripts/install.sh
./scripts/install.sh -p
./scripts/install.sh -p defaults
./scripts/install.sh --path /path/to/my-install.yaml
```
Development:

```shell
uv sync --dev
uv run pytest -q
uv run k-ai chat
```
Published package identity:

- PyPI distribution name: `kpihx-ai`
- import module: `k_ai`
- installed CLI command: `k-ai`
If you install from PyPI instead of from source:

```shell
uv tool install kpihx-ai
# or
pipx install kpihx-ai
```
## Installation and Removal

Install:

```shell
make install
# or directly:
./scripts/install.sh
# or with an explicit install profile:
./scripts/install.sh -p defaults
./scripts/install.sh --path ./install/install.yaml
```

Purge runtime state:

```shell
make purge
# or directly:
./scripts/purge.sh
```

Make targets:

```shell
make install
make purge
make check
make test
make build
make publish
make push
make push-docs
make release
```
## CLI Usage

### Interactive chat

```shell
k-ai chat
k-ai chat --provider mistral
k-ai chat --provider openai --model gpt-4o
k-ai chat --config ~/.k-ai/config.yaml
k-ai chat --temperature 0.2 --max-tokens 4096
```
### Config CLI

Show the full built-in default template:

```shell
k-ai config show
```

List built-in config fragments:

```shell
k-ai config sections
```

Show only selected built-in fragments:

```shell
k-ai config show --section ui
k-ai config show --section models --section governance
```

Export the full default config:

```shell
k-ai config get -o my-config.yaml
```

Export only one or several sections to build a minimal override file:

```shell
k-ai config get -o prompts.yaml --section ui
k-ai config get -o providers-and-tools.yaml --section models --section governance
```

Open the active config or one built-in fragment in your editor:

```shell
k-ai config edit all
k-ai config edit ui
k-ai config edit governance
```

The same works from inside chat:

```shell
/config edit governance
```
Editor resolution order: `config.editor`, then `K_AI_EDITOR`, then `VISUAL`, then `EDITOR`, then `nano`.
Tool proposal transparency:

- `cli.show_tool_rationale: true` keeps a justification panel visible before each tool.
- If the model emits no explanation, k-ai derives a fallback rationale from the tool description and main input.
OAuth note:

- `oauth.gemini` is implemented through a Google token JSON file.
- `token_path` should point to a persisted token containing at least `access_token`.
- If the token is expired, `refresh_token`, `client_id`, and `client_secret` are used to refresh it automatically.
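As a hedged illustration, a token file of the kind `token_path` expects might look like the following. Only the fields named above are required by the description; all values are placeholders, and any extra fields (such as `expiry`) are assumptions about typical Google token files, not a k-ai requirement:

```json
{
  "access_token": "ya29.PLACEHOLDER",
  "refresh_token": "1//PLACEHOLDER",
  "client_id": "1234567890-example.apps.googleusercontent.com",
  "client_secret": "PLACEHOLDER-SECRET",
  "expiry": "2025-01-01T00:00:00Z"
}
```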
Run diagnostics:

```shell
k-ai doctor
k-ai doctor --reset config
k-ai doctor --reset all
```
## Slash Commands

Session lifecycle:

- `/sessions [recent|oldest] [classic|meta]`
- `/load <id> [last_n]`
- `/extract <id> [offset] [limit]`
- `/digest [id]`
- `/compact`
- `/delete <id>`
- `/new [classic|meta]`

Runtime/config:

- `/status`
- `/tokens`
- `/settings [prefix]`
- `/set <key> <value>`
- `/model [name]`
- `/provider [name] [model]`
- `/tools capabilities`
- `/tools enable|disable <exa|python|shell|qmd>`
- `/config show [key]`
- `/config show section:<name> [section:<name> ...]`
- `/config get [path] [section ...]`
- `/config save [path]`
- `/config sections`

Tools and memory:

- Live capability switching only applies to mutable families (`exa`, `python`, `shell`, `qmd`).
- Protected admin approval rules remain YAML-only by design.
- `/tools show [ask|auto|default|session|global|protected]`
- `/tools ask|auto <target> [session|global] [tool|category|risk]`
- `/tools reset <target> [session|global] [tool|category|risk]`
- `/memory list|add|remove`
- `/qmd query|search|get|ls|status|update|embed|cleanup`
Everything above can also be triggered by the model through internal tools when appropriate.
## Config Layout

Built-in defaults are split into four fragments:

```
src/k_ai/defaults/defaults.d/
├── 00-models.yaml
├── 10-ui-prompts.yaml
├── 20-sessions-memory.yaml
└── 30-runtime-governance.yaml
```

Section names exposed in the CLI: `models`, `ui`, `sessions`, `governance`.
Recommended override strategy:

1. Export only the sections you want to change.
2. Edit that smaller YAML file.
3. Pass it with `--config` or save it as `~/.k-ai/config.yaml`.
4. Keep runtime-only experiments in chat via `/set` or the config tools.
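A minimal override file produced by steps 1 and 2 might look like this. The keys shown are hypothetical illustrations of the pattern; export your own sections with `k-ai config get` to see the real key names:

```yaml
# Hypothetical minimal override: only the keys you actually change.
# Everything omitted here keeps its built-in default.
models:
  provider: mistral
  temperature: 0.2
```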
## Package Usage

### Defaults only

```python
import asyncio

from k_ai import ConfigManager, ChatSession

cm = ConfigManager()
session = ChatSession(cm)
asyncio.run(session.send("Bonjour"))
```
### Custom override file

```python
from k_ai import ConfigManager, ChatSession

cm = ConfigManager(override_path="~/.k-ai/config.yaml")
session = ChatSession(cm, provider="mistral")
```

You can also keep several smaller override files and choose one at startup:

```python
cm = ConfigManager(override_path="~/profiles/k-ai-prompts.yaml")
```
### Inline overrides

```python
cm = ConfigManager(
    override_path="~/.k-ai/config.yaml",
    temperature=0.2,
    max_tokens=4096,
)
```
### Export only one built-in section

```python
from k_ai import ConfigManager

yaml_text = ConfigManager.get_default_yaml(sections=["ui"])
print(yaml_text)
```
### List built-in sections

```python
from k_ai import ConfigManager

for section in ConfigManager.list_default_sections():
    print(section["name"], section["file"])
```
### Agentic programmatic call with tools

```python
import asyncio

from k_ai import ConfigManager, ChatSession, ToolCall

cm = ConfigManager()
session = ChatSession(cm)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

async def executor(tc: ToolCall) -> str:
    if tc.function_name == "get_weather":
        return f"22°C in {tc.arguments['location']}"
    raise ValueError(tc.function_name)

result = asyncio.run(session.send_with_tools("Weather in Paris?", tools, executor))
print(result)
```
## Runtime Transparency

The terminal runtime panel exposes:

- current provider / model / auth mode
- context usage and remaining capacity
- compaction threshold
- cumulative tokens
- token source: `provider` or `estimated`
- render mode
- tool result display/history limits
- config persistence path
- current session id / type

This is UI-only telemetry; it does not consume model tokens.
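When the token source is `estimated`, counts are computed locally rather than reported by the provider. A common stand-in heuristic (an assumption here, not k-ai's actual estimator) is roughly four characters per token:

```python
# Illustrative chars/4 token estimate for when the provider does not report
# usage. This heuristic is an assumption, not k-ai's real estimator.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

print(estimate_tokens("Weather in Paris?"))  # → 4
```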
## Robustness Notes

- `Ctrl+C` at the prompt: the first press cancels input, the second press exits.
- `Ctrl+C` during generation or tool execution: returns control to the prompt.
- Boot greeting failures do not create a session.
- Programmatic `send()` / `send_with_tools()` now roll back the whole turn on LLM failure instead of leaving partial persisted turns.
- Digest/compaction/exit summarization are best-effort; if the provider fails, the session remains usable and the main conversation state is preserved.
- Tool approval overrides are validated strictly against the built-in tool catalog, so malformed config fails fast instead of silently drifting.
## Runtime State on Disk

```
~/.k-ai/
├── config.yaml
├── MEMORY.json
├── sandbox/
└── sessions/
    ├── index.json
    └── <session-id>.jsonl
```
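Each `<session-id>.jsonl` file is JSON Lines: one JSON object per line. A minimal reader could look like the following sketch; the field names in the demo data are illustrative, not k-ai's actual transcript schema:

```python
import json
import pathlib
import tempfile

# Read a JSON Lines transcript: one JSON object per non-empty line.
def read_session(path):
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# Demo with a synthetic transcript (field names are illustrative).
tmp = pathlib.Path(tempfile.mkdtemp()) / "demo.jsonl"
tmp.write_text('{"role": "user", "content": "hi"}\n{"role": "assistant", "content": "hello"}\n')
events = read_session(tmp)
print(len(events))  # → 2
```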
## License

MIT