
Assistant

Your very own Assistant. Because you deserve it.



What is Assistant?

Assistant is an AI-powered shell that blends a full Xonsh terminal with a multi-agent LLM system. Talk to it in natural language, run shell commands, browse the web, read and edit files — all from the same prompt.

Under the hood it uses a multi-agent orchestrator: a main agent receives your input and can delegate tasks to specialised agents (system operations, web research, etc.). Every agent, model, and provider is declared in a single TOML configuration file.

Key Features

  • 🤖 Multi-Agent Architecture — orchestrator plus specialised agents (system, web researcher, etc.)
  • 🔌 Multi-Provider Support — Ollama, vLLM, OpenAI, or any OpenAI-compatible API
  • 🛠️ Rich Tool Set — shell execution, web search (SearXNG), file read/edit/write, web page reading
  • 🧙 Setup Wizard — interactive wizard generates the TOML config on first launch
  • 🗂️ TOML Configuration — one file (~/.assistant/config.toml) to manage providers, models and agents
  • 💬 Natural Language Shell — type plain English (or French, etc.) alongside regular commands
  • 🧠 Context Management — automatic conversation compaction with sliding window and summarisation
  • 🔍 Web Search — integrated SearXNG web search with a dedicated research agent
  • 📁 File Operations — agents can read, write and edit files directly
  • 🖥️ TUI Passthrough — full-screen apps (vim, htop, less, etc.) run without hijacking
  • 🗣️ Voice (optional) — TTS via say and STT via listen

Quick Start

1. Install

pip install assistant
Other installation methods

From source (latest development):

pip install -U git+https://gitlab.com/waser-technologies/technologies/assistant.git@v2

From a local clone:

git clone https://gitlab.com/waser-technologies/technologies/assistant.git
cd assistant
pip install -U .

Arch Linux (AUR, via an AUR helper such as yay, since AUR packages cannot be installed with pacman directly):

yay -S python-assistant

2. Run the Configuration Wizard

On the first launch the wizard starts automatically. You can also re-run it at any time:

assistant config

The wizard asks you to choose:

  1. Provider — Ollama (local), vLLM (local), or OpenAI (API).
  2. Endpoint — API base URL (e.g. http://localhost:11434/v1).
  3. API Key — leave the default for local providers, or enter your API key.
  4. Store in keyring — optionally store the API key securely in your system keyring.
  5. Model — model identifier (e.g. qwen3.5:latest, gpt-4o).
  6. Context length — how many tokens the model supports.
  7. Reasoning — enable/disable thinking traces.
  8. Web search — enable SearXNG and configure its URL.

The configuration is saved to ~/.assistant/config.toml.

3. Start Assistant

assistant

That's it — you're in. Type commands, ask questions, or mix both.


Requirements

  • Python 3.9+
  • An LLM backend — one of:
    • Ollama running locally (recommended)
    • vLLM running locally
    • Any OpenAI-compatible API (OpenAI, Together, Groq, etc.)
  • (optional) SearXNG instance for web search
  • (optional) say for text-to-speech
  • (optional) listen for speech-to-text

Note: Unlike v1, there is no local model to download and host yourself. Assistant connects to whichever LLM endpoint you configure. The RAM/VRAM requirements depend entirely on your chosen model and provider.


Usage

CLI options

assistant --help
assistant --version
assistant config              # (re)run the configuration wizard
assistant -c "what time is it" # single query, then exit
assistant -n                   # skip the interactive intro message (warmup)
assistant Hello Assistant      # one-shot query
assistant                      # interactive mode

Interactive Mode


waser@host ~ 
❯ Hi Assistant.
Hello! How can I assist you today?
waser@host ~ 
❯ What is the current working directory?

ℹ Executing: echo $PWD
/home/waser

You are currently in your home directory.
waser@host ~ 
❯ How many moons does Saturn have?

ℹ Searching the Web for: How many moons does Saturn have?

Saturn has 146 confirmed moons as of 2024.

Mix Shell and Natural Language

You can run any valid shell command or Python expression; when Assistant does not recognise your input as a command, it treats it as a question:

❯ ls -la
❯ echo $USER
❯ Can you count the files in this directory?

ℹ Executing: ls -1 | wc -l
20

There are 20 entries in the current directory.

Multi-Agent Hand-offs

The Orchestrator can hand off tasks to specialised agents:

  • System agent — runs shell commands, manages files, executes Python code in the shared xonsh session.
  • Web researcher — searches the web via SearXNG, reads web pages, and produces reports with citations.

Hand-offs happen automatically based on the query context.
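Conceptually, a hand-off is just the orchestrator choosing a sub-agent for the current query. The sketch below uses naive keyword matching purely for illustration; in Assistant the decision is made by the LLM itself, guided by each agent's declared description:

```python
# Illustration only: Assistant's orchestrator delegates via LLM tool
# calls, not keyword matching. Agent names mirror the config below.
AGENT_HINTS = {
    "system": {"file", "files", "directory", "shell", "run", "install"},
    "web": {"search", "web", "news", "website", "url"},
}

def route(query: str) -> str:
    """Pick a specialised agent for the query, or keep it with the
    orchestrator when no hint matches."""
    words = set(query.lower().split())
    for agent, hints in AGENT_HINTS.items():
        if words & hints:
            return agent
    return "orchestrator"
```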

Exit the Session

Type any exit-like command and Assistant will understand:

❯ exit
❯ Q
❯ :q
❯ quit
❯ stop()
❯ terminate
❯ This conversation is over.

Configuration

The configuration lives in ~/.assistant/config.toml. Here is an example:

[provider.ollama]
endpoint = "http://localhost:11434/v1"
credential = "keyring:ollama"

[model.qwen]
id = "qwen3.5:latest"
name = "Qwen 3.5 (9B)"
provider = "ollama"
context_length = 128000

[agent.orchestrator]
name = "Orchestrator"
model = "qwen"
description = "Main orchestrator; handles general conversations and delegates tasks."
tools = ["xonsh", "web_search", "read_webpage", "read_file", "write_file", "edit_file"]
handoffs = ["system", "web"]

[agent.system]
name = "System"
model = "qwen"
description = "System shell agent for OS interaction."
tools = ["xonsh", "read_file", "write_file", "edit_file"]

[agent.web]
name = "Web Researcher"
model = "qwen"
description = "Web research agent with search and page reading."
tools = ["web_search", "read_webpage"]

[web_search]
enabled = true
searxng_url = "http://localhost:8888"

[execution]
max_turns = 50
temperature = 0.1
max_command_output_length = 25000
think = false

[memory]
enabled = false

[context]
compaction_threshold = 0.8
sliding_window_first = 2
sliding_window_last = 8
use_summarization = true
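The [context] settings above translate to a simple policy: once the conversation uses more than compaction_threshold of the model's context window, keep the first sliding_window_first and last sliding_window_last messages and replace everything in between with a summary. A minimal sketch of that policy (hypothetical function names; the real implementation may differ):

```python
def compact(messages, tokens_used, context_length,
            threshold=0.8, keep_first=2, keep_last=8):
    """Sliding-window compaction: when the conversation exceeds
    `threshold` of the context window, keep the first/last messages
    and replace the middle with a single summary message."""
    if tokens_used <= threshold * context_length:
        return messages  # still fits, nothing to do
    middle = messages[keep_first:-keep_last]
    if not middle:
        return messages  # too short to compact
    summary = {"role": "system",
               "content": "Summary of earlier turns: " + summarise(middle)}
    return messages[:keep_first] + [summary] + messages[-keep_last:]

def summarise(msgs):
    # Stand-in: a real implementation would ask the configured LLM.
    return "; ".join(m["content"][:40] for m in msgs)
```

With the defaults above, a 15-message conversation at 110k of 128k tokens would compact to 11 messages (2 kept + 1 summary + 8 kept).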

Adding a New Provider

Just add a new section under [provider]:

[provider.openai]
endpoint = "https://api.openai.com/v1"
credential = "keyring:openai"

[model.gpt4o]
id = "gpt-4o"
name = "GPT-4o"
provider = "openai"
context_length = 128000

Then reference model = "gpt4o" in any agent.

Credential Storage

Credentials are resolved in this order:

  1. Environment variable — ASSISTANT_{PROVIDER}_API_KEY (e.g., ASSISTANT_OPENAI_API_KEY)
  2. System keyring — if credential starts with keyring:, the value after keyring: is used as the keyring service name
  3. OpenAI fallback — for the OpenAI provider, the standard OPENAI_API_KEY environment variable is also checked
  4. Plain text — last resort (not recommended)
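That lookup order can be sketched as follows (a hypothetical stand-alone resolver; Assistant's internal one may differ in its details):

```python
import os
from typing import Optional

def resolve_credential(provider: str, credential: Optional[str]) -> Optional[str]:
    """Resolve an API key in the documented order:
    env var -> keyring -> OpenAI fallback -> plain text."""
    # 1. ASSISTANT_{PROVIDER}_API_KEY environment variable
    env_val = os.environ.get(f"ASSISTANT_{provider.upper()}_API_KEY")
    if env_val:
        return env_val
    # 2. System keyring, when the config value is "keyring:<service>"
    if credential and credential.startswith("keyring:"):
        import keyring  # optional dependency
        stored = keyring.get_password("assistant", credential[len("keyring:"):])
        if stored:
            return stored
    # 3. Standard OpenAI fallback
    if provider == "openai" and os.environ.get("OPENAI_API_KEY"):
        return os.environ["OPENAI_API_KEY"]
    # 4. Plain-text value from config.toml (not recommended)
    return credential
```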

To store credentials securely in your system keyring, use assistant config and select "Store in keyring" when prompted, or manually:

import keyring
keyring.set_password("assistant", "openai", "sk-...")

Available Tools

  • xonsh — execute shell commands or Python code in the shared xonsh session
  • web_search — search the web via SearXNG
  • read_webpage — fetch and extract readable text from a URL
  • read_file — read file contents, with an optional line range
  • write_file — create or overwrite files
  • edit_file — find-and-replace text in existing files
  • memory_search — search persistent memory (when memory is enabled)
  • memory_add — add entries to persistent memory (when memory is enabled)
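To illustrate the semantics of edit_file, here is a rough sketch of a find-and-replace edit (assumed behaviour; the actual tool may handle encoding, backups, and multiple matches differently):

```python
from pathlib import Path

def edit_file(path: str, find: str, replace: str) -> bool:
    """Replace `find` with `replace` in an existing file.
    Returns False when the target text is not present."""
    p = Path(path)
    text = p.read_text()
    if find not in text:
        return False
    p.write_text(text.replace(find, replace))
    return True
```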

Using Voice

Text-To-Speech

Install say and make sure the service is running:

assistant say Hello World and welcome to everyone.

Speech-To-Text

Install listen and enable the listen services.

Use Assistant as Your Default Shell

This is not recommended in beta!

Add the location of the assistant binary to the end of /etc/shells, then set Assistant as your default shell using chsh:

sudo sh -c 'w=$(which assistant); echo $w >> /etc/shells'
chsh -s $(which assistant)

Log out and back in; Assistant should now be your default shell.

Contributions

Want to improve the project? Check out CONTRIBUTING.md.


Credits

Thanks to all the projects that make this possible:

  • Xonsh — the best snail in the jungle
  • Rich and Halo — beautiful terminal output
  • And many, many more.


