Your very own Assistant. Because you deserve it.
What is Assistant?
Assistant is an AI-powered shell that blends a full Xonsh terminal with a multi-agent LLM system. Talk to it in natural language, run shell commands, browse the web, read and edit files — all from the same prompt.
Under the hood it uses a multi-agent orchestrator: a main agent receives your input and can delegate tasks to specialised agents (system operations, web research, etc.). Every agent, model, and provider is declared in a single TOML configuration file.
Key Features
| Feature | Description |
|---|---|
| 🤖 Multi-Agent Architecture | Orchestrator + specialised agents (system, web researcher, etc.) |
| 🔌 Multi-Provider Support | Ollama, vLLM, OpenAI — or any OpenAI-compatible API |
| 🛠️ Rich Tool Set | Shell execution, web search (SearXNG), read/edit/write files, read web pages |
| 🧙 Setup Wizard | Interactive wizard generates the TOML config on first launch |
| 🗂️ TOML Configuration | One file (~/.assistant/config.toml) to manage providers, models and agents |
| 💬 Natural Language Shell | Type plain English (or French, etc.) alongside regular commands |
| 🧠 Context Management | Automatic conversation compaction with sliding window and summarisation |
| 🔍 Web Search | Integrated SearXNG web search with a dedicated research agent |
| 📁 File Operations | Agents can read, write and edit files directly |
| 🖥️ TUI Passthrough | Full-screen apps (vim, htop, less, etc.) run without hijacking |
| 🗣️ Voice (optional) | TTS via say and STT via listen |
Quick Start
1. Install
pip install assistant
Other installation methods
From source (latest development):
pip install -U git+https://gitlab.com/waser-technologies/technologies/assistant.git@v2
From a local clone:
git clone https://gitlab.com/waser-technologies/technologies/assistant.git
cd assistant
pip install -U .
Arch Linux (AUR, via an AUR helper — pacman alone cannot install AUR packages):
yay -S python-assistant
2. Run the Configuration Wizard
On the first launch the wizard starts automatically. You can also re-run it at any time:
assistant config
The wizard asks you to choose:
- Provider — Ollama (local, recommended), vLLM (local), or OpenAI (API).
- Endpoint — API base URL (e.g. http://localhost:11434/v1).
- API Key — leave the default for local providers, or enter your API key.
- Store in keyring — optionally store the API key securely in your system keyring.
- Model — model identifier (e.g. qwen3.5:latest, gpt-4o).
- Context length — how many tokens the model supports.
- Reasoning — enable/disable thinking traces.
- Web search — enable SearXNG and configure its URL.
The configuration is saved to ~/.assistant/config.toml.
3. Start Assistant
assistant
That's it — you're in. Type commands, ask questions, or mix both.
Requirements
- Python 3.9+
- An LLM backend — Ollama, vLLM, OpenAI, or any OpenAI-compatible API
- (optional) SearXNG instance for web search
- (optional) say for text-to-speech
- (optional) listen for speech-to-text
Note: Unlike v1, there is no local model to download and host yourself. Assistant connects to whichever LLM endpoint you configure. The RAM/VRAM requirements depend entirely on your chosen model and provider.
Usage
CLI options
assistant --help
assistant --version
assistant config # (re)run the configuration wizard
assistant -c "what time is it" # single query, then exit
assistant -n # skip the interactive intro message (warmup)
assistant Hello Assistant # one-shot query
assistant # interactive mode
Interactive Mode
waser@host ~
❯ Hi Assistant.
Hello! How can I assist you today?
waser@host ~
❯ What is the current working directory?
ℹ Executing: echo $PWD
/home/waser
You are currently in your home directory.
waser@host ~
❯ How many moons does Saturn have?
ℹ Searching the Web for: How many moons does Saturn have?
Saturn has 146 confirmed moons as of 2024.
Mix Shell and Natural Language
You can run any valid shell command or Python expression; when Assistant does not recognise your input as a command, it treats it as a question:
❯ ls -la
❯ echo $USER
❯ Can you count the files in this directory?
ℹ Executing: ls -1 | wc -l
20
There are 20 entries in the current directory.
Multi-Agent Hand-offs
The Orchestrator can hand off tasks to specialised agents:
- System agent — runs shell commands, manages files, executes Python code in the shared xonsh session.
- Web researcher — searches the web via SearXNG, reads web pages, and produces reports with citations.
Hand-offs happen automatically based on the query context.
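Conceptually, a hand-off is routing: the orchestrator picks the agent whose declared tools cover the task. The sketch below is purely illustrative — the agent names and tool sets come from the example configuration on this page, but the keyword-free lookup heuristic is invented here; in Assistant the routing decision is made by the LLM itself:

```python
# Toy hand-off router. Agent/tool names mirror the example config;
# real routing is decided by the orchestrator's LLM, not a lookup table.
AGENT_TOOLS = {
    "system": {"xonsh", "read_file", "write_file", "edit_file"},
    "web": {"web_search", "read_webpage"},
}

def route(required_tool: str) -> str:
    """Return the first agent declaring the required tool,
    falling back to the orchestrator itself."""
    for agent, tools in AGENT_TOOLS.items():
        if required_tool in tools:
            return agent
    return "orchestrator"

print(route("web_search"))  # → web
print(route("xonsh"))       # → system
```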
Exit the Session
Type any exit-like command and Assistant will understand:
❯ exit
❯ Q
❯ :q
❯ quit
❯ stop()
❯ terminate
❯ This conversation is over.
Configuration
The configuration lives in ~/.assistant/config.toml. Here is an example:
[provider.ollama]
endpoint = "http://localhost:11434/v1"
credential = "keyring:ollama"
[model.qwen]
id = "qwen3.5:latest"
name = "Qwen 3.5 (9B)"
provider = "ollama"
context_length = 128000
[agent.orchestrator]
name = "Orchestrator"
model = "qwen"
description = "Main orchestrator; handles general conversations and delegates tasks."
tools = ["xonsh", "web_search", "read_webpage", "read_file", "write_file", "edit_file"]
handoffs = ["system", "web"]
[agent.system]
name = "System"
model = "qwen"
description = "System shell agent for OS interaction."
tools = ["xonsh", "read_file", "write_file", "edit_file"]
[agent.web]
name = "Web Researcher"
model = "qwen"
description = "Web research agent with search and page reading."
tools = ["web_search", "read_webpage"]
[web_search]
enabled = true
searxng_url = "http://localhost:8888"
[execution]
max_turns = 50
temperature = 0.1
max_command_output_length = 25000
think = false
[memory]
enabled = false
[context]
compaction_threshold = 0.8
sliding_window_first = 2
sliding_window_last = 8
use_summarization = true
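The [context] settings above describe sliding-window compaction: once the conversation grows past compaction_threshold of the context length, the first sliding_window_first and last sliding_window_last messages are kept while the middle is summarised. A rough sketch of how those settings might interact (the summariser is stubbed out; the real implementation may differ):

```python
def compact(messages, first=2, last=8, summarize=None):
    """Keep the first/last messages; collapse the middle into a summary.
    Defaults mirror sliding_window_first / sliding_window_last above."""
    if len(messages) <= first + last:
        return list(messages)  # nothing to compact
    middle = messages[first:-last]
    summary = summarize(middle) if summarize else "[...summary of earlier turns...]"
    return messages[:first] + [summary] + messages[-last:]

msgs = [f"msg{i}" for i in range(15)]
out = compact(msgs)
print(len(out))  # 2 kept + 1 summary + 8 kept = 11
```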
Adding a New Provider
Just add a new section under [provider]:
[provider.openai]
endpoint = "https://api.openai.com/v1"
credential = "keyring:openai"
[model.gpt4o]
id = "gpt-4o"
name = "GPT-4o"
provider = "openai"
context_length = 128000
Then reference model = "gpt4o" in any agent.
Credential Storage
Credentials are resolved in this order:
- Environment variable — ASSISTANT_{PROVIDER}_API_KEY (e.g., ASSISTANT_OPENAI_API_KEY)
- System keyring — if the credential starts with keyring:, the value after keyring: is used as the keyring service name
- OpenAI fallback — for the OpenAI provider, the standard OPENAI_API_KEY env var is also checked
- Plain text — last resort (not recommended)
To store credentials securely in your system keyring, use assistant config and select "Store in keyring" when prompted, or manually:
import keyring
keyring.set_password("assistant", "openai", "sk-...")
Available Tools
| Tool | Description |
|---|---|
| xonsh | Execute shell commands or Python code in the shared xonsh session |
| web_search | Search the web via SearXNG |
| read_webpage | Fetch and extract readable text from a URL |
| read_file | Read file contents with optional line range |
| write_file | Create or overwrite files |
| edit_file | Find-and-replace text in existing files |
| memory_search | Search persistent memory (when memory is enabled) |
| memory_add | Add entries to persistent memory (when memory is enabled) |
Using Voice
Text-To-Speech
Install say and make sure the service is running:
assistant say Hello World and welcome to everyone.
Speech-To-Text
Install listen and enable the listen services.
Use Assistant as Your Default Shell
This is not recommended in beta!
Add the location of the assistant binary to the end of /etc/shells, then set it as your default shell with chsh:
sudo sh -c 'w=$(which assistant); echo $w >> /etc/shells'
chsh -s $(which assistant)
Log out and when you come back, Assistant should be your default shell.
Contributions
Want to improve the project? Check out CONTRIBUTING.md.
Credits
Thanks to all the projects that make this possible.