A lightweight, Python-based CLI tool that integrates LLMs directly into your terminal

whai - Terminal Assistant

What is it

whai is a lightweight and fast AI terminal assistant that integrates directly into your native shell. The philosophy of whai is to never interrupt your workflow. You use your terminal as you normally would. It is not a sub-shell or a separate REPL; it is a single, fast binary that you call on-demand. When you get stuck, need a command, or encounter an error, you simply call whai for immediate help.

Core Features

  • Analyze Previous Errors: If a command fails, you don't need to copy-paste. Just call whai (no arguments needed!) or ask whai why did that fail?. It reads the failed command and its full error output from your tmux (terminal multiplexer) history to provide an immediate diagnosis and solution. Note: Command output context is only available when running inside tmux.

  • Persistent Roles (Memory): whai uses simple, file-based "Roles" to provide persistent memory. This is the core of its customization. You define your context once: what machine you are on, what tools are available, your personal preferences, and how you like to work. whai retains this context for all future interactions.

  • Full Session Context: When running inside tmux, whai securely reads your scrollback to understand both the commands you ran and the output they produced. This provides intelligent, multi-step assistance based on the actual state of your terminal.

  • On-Demand Assistance: Get help exactly when you need it, from command generation to complex debugging, right in your active shell:

    > whai check my docker containers logs for errors

    > whai "how do I debug this high resource usage?"

  • Safe by Design: No command is ever executed without your explicit [a]pprove / [r]eject confirmation.

  • Model-Agnostic: Natively supports OpenAI, Gemini, Anthropic, local Ollama models, and more.

Quick Examples

See whai in action:

Example 1: Running a command

$ whai give me the biggest folder here
Info: Model: gpt-5-mini | Role: default
I'll list all top-level directories (including hidden ones), compute their disk usage, sort them, and show the largest few with human-readable sizes.

I'll run this command:
find . -maxdepth 1 -mindepth 1 -type d -print0 | xargs -0 du -sk 2>/dev/null | sort -n | tail -n 5 | awk '{print $1 "\t" $2}' | numfmt --to=iec-i --suffix=B --field=1,1

Breakdown of what each part does:
- find . -maxdepth 1 -mindepth 1 -type d: Find directories at current level only (excludes . and ..)
- -print0: Use null separator to safely handle spaces in directory names
- xargs -0 du -sk: Pass each directory to du to get disk usage in kilobytes, 2>/dev/null suppresses errors
- sort -n: Sort numerically by size
- tail -n 5: Show the 5 largest entries
- awk '{print $1 "\t" $2}': Format output as size (tab) path
- numfmt --to=iec-i --suffix=B --field=1,1: Convert sizes to human-readable format (KiB, MiB, etc.)

╭─────────────────────────────────────────────────────────────────── Proposed command ────────────────────────────────────────────────────────────────────╮
│ find . -maxdepth 1 -mindepth 1 -type d -print0 | xargs -0 du -sk 2>/dev/null | sort -n | tail -n 5 | awk '{print $1 "\t" $2}' | numfmt --to=iec-i       │
│ --suffix=B --field=1,1                                                                                                                                  │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
[a]pprove / [r]eject / [m]odify: a

╭──────────────────────────────────────────────────────────────────────── Output ─────────────────────────────────────────────────────────────────────────╮
│ 440B ./tests                                                                                                                                            │
│ 1.4KiB ./.git                                                                                                                                           │
│ 87KiB ./.venv_tmp                                                                                                                                       │
│ 108KiB ./.venv                                                                                                                                          │
│ 123KiB ./.venv-linux                                                                                                                                    │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Result (interpretation)
- The largest top-level directory here is ./.venv-linux with ~123KiB.

Next steps (options)
- Show the top N largest items anywhere under the current tree (files+dirs):
  - du -ah . 2>/dev/null | sort -rh | head -n 30
- Show only directories, recursively, largest first:
  - du -h --max-depth=3 2>/dev/null | sort -hr | head -n 30

Example 2: Troubleshooting terminal output

$ uv pip install requirements.txt
Using Python 3.11.9 environment at: .venv-linux
  × No solution found when resolving dependencies:
  ╰─▶ Because requirements-txt was not found in the package registry...

$ whai what is the problem
Info: Model: gpt-5-mini | Role: default
What's happening
- You ran: uv pip install requirements.txt
- pip treated "requirements.txt" as the name of a package to download from the package index, not as a file of requirements.
- The resolver tried to find a package called (normalized) "requirements-txt" on PyPI and failed, so it reports the dependencies are unsatisfiable.

How to fix it
- Use the -r (or --requirement) flag so pip reads the file instead of treating it as a package name.

Commands you can run
1) Check the requirements file exists:
ls -l requirements.txt

2) Install from the file (match how you invoked pip):
uv pip install -r requirements.txt

Do you want me to:
- run the ls to confirm the file exists, or
- run the install command for you now?

Example 3: Calling whai without arguments

$ git push origin main
To https://github.com/user/repo.git
 ! [rejected]        main -> main (non-fast-forward)
error: failed to push some refs to 'https://github.com/user/repo.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.

$ whai
Info: Model: gpt-5-mini | Role: default
What's happening
- You tried to push your local branch "main" to the remote, but git rejected it.
- Your local branch is behind the remote branch—someone else pushed changes to the remote that you don't have locally.
- Git won't let you push because it would overwrite those remote changes.

How to fix it
- Pull the remote changes first, then push. This integrates the remote commits with your local ones.

Commands you can run
1) See what commits are on the remote but not locally:
git fetch
git log HEAD..origin/main

2) Pull and merge the remote changes:
git pull origin main

3) After resolving any merge conflicts (if they occur), push again:
git push origin main

Do you want me to run git pull for you now?

Example 4: Asking questions

$ whai "Quick how do I exit Vim??"
Info: Model: gpt-5-mini | Role: default
I'll show the quick ways to exit Vim and what each one does.

1) Get to Normal mode
- Press Esc (or Ctrl-[) until you stop inserting text. If Esc doesn't respond, try Ctrl-C.

2) Common exit commands (type them in Normal mode, then press Enter)
- :q
  - Quit (works only if no changes since last save).
- :q!
  - Quit and discard any unsaved changes.
- :wq
  - Write (save) and quit.
- :x
  - Write (only if changed) and quit (similar to :wq).

3) Shortcuts
- ZZ (Shift+Z twice) — save and quit (same as :wq if there are changes).
- ZQ — quit without saving (same as :q!).

Installation

Option 1: uv (Recommended)

uv tool install whai

Or run it once without installing:

uvx whai "your command"

Option 2: pipx

pipx install whai

Option 3: pip

pip install whai

Option 4: From source

git clone https://github.com/gael-vanderlee/whai.git
cd whai
pip install -e .

Quick Start

1. Configure your API key

On first run, whai launches an interactive configuration wizard. You can re-run it at any time:

whai --interactive-config

Or edit ~/.config/whai/config.toml directly:

[llm]
default_provider = "openai"

[llm.openai]
api_key = "sk-proj-your-key-here"
default_model = "gpt-5-mini"

Using Local Models (LM Studio)

To use a local model with LM Studio:

  1. Enable the server in LM Studio:

    • Open LM Studio
    • Go to the Developer menu
    • Enable the server toggle
  2. Configure whai:

    whai --interactive-config
    
    • Select lm_studio as the provider
    • Enter the API base URL: http://localhost:1234/v1
    • Enter the model name with lm_studio/ prefix (e.g., lm_studio/llama-3-8b-instruct)

    Note: You can also use the openai/ prefix (e.g., openai/llama-3-8b-instruct) as LM Studio provides an OpenAI-compatible API. Both formats work, but lm_studio/ is the recommended prefix for clarity.

  3. Check available models:

    curl http://localhost:1234/v1/models
    

2. Start using whai

whai "your question"

That's it! whai will:

  • Read your terminal context (commands + output if in tmux, commands only otherwise)
  • Send your question to the configured LLM
  • Suggest commands with [a]pprove / [r]eject / [m]odify prompts
  • Execute approved commands and continue the conversation

Tip: Quotes are not required, but use them if your query contains shell special characters like ' or ?

whai show me the biggest file here
whai "what's the biggest file?"
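The reason quotes matter: unquoted characters like ? are glob patterns that your shell may expand before whai ever sees the question. A quick plain-shell demonstration (nothing whai-specific):

```shell
# In a directory containing a file named "ab", an unquoted "a?" is a glob
# that the shell expands to the matching filename; quoting preserves it.
cd "$(mktemp -d)"
touch ab
echo a?      # prints: ab  (the shell expanded the glob before echo ran)
echo 'a?'    # prints: a?  (quoting disables expansion)
```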

Key Features

Roles

Roles allow you to customize whai's behavior and responses. More importantly, they let you save information about your preferences, system, environment, constraints, and workflow so you don't have to repeat yourself in every conversation.

For example, you can create a role that tells whai to respond only in emoji:

$ whai role create emoji  # Write down: "Answer using only emojis"
$ whai can you tell me the plot of the first Shrek movie --role emoji
Info: Model: gpt-5-mini | Role: emoji
👑👸💤🐉🏰
👹🏞️🕳️➡️🏰🐴😂
⚔️🐉🔥💨👸
👹❤️👸💚
🌅💋✨💚💚
🎉🎶🧅

But more practically, roles let you store:

  • Your system information (OS, available tools, paths)
  • Your preferences (shell style, preferred commands, workflows)
  • Environment constraints (what you can/can't do, security policies)
  • Project-specific context (tools in use, conventions, setup)

# Create a new role
whai role create my-workflow

# Use it
whai "help me with this task" -r my-workflow

# List all roles
whai role list

Define it once, use it everywhere. Roles are stored in ~/.config/whai/roles/ as Markdown files with YAML frontmatter, like so:

---
model: gpt-5-mini
# Optional parameter you can add here (uncomment if needed):
# temperature: 0.3               # Only used when supported by the selected model
---
You are a helpful terminal assistant.
Describe behaviors, tone, and constraints here.

The default role is defined in the config.

Context Awareness

whai automatically captures context from:

  • tmux scrollback (recommended): Full commands + output for intelligent debugging (only available when running in tmux)
  • Shell history (fallback): Recent commands only when not in tmux (command output is not available in this mode)
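The capture step can be approximated in plain shell. This is a sketch of the kind of context whai reads, not its actual implementation; the 200-line scrollback depth and the history fallback path are assumptions:

```shell
# Approximate whai's context capture: scrollback from the current tmux pane,
# or recent shell history when tmux is not available.
if [ -n "${TMUX:-}" ]; then
    tmux capture-pane -p -S -200                      # commands *and* their output
else
    tail -n 50 ~/.bash_history 2>/dev/null || true    # commands only, no output
fi
```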

Safety First

  • Every command requires explicit approval
  • Modify commands before execution
  • Commands run in an isolated subprocess (they won't affect your main shell)
  • Press Ctrl+C to interrupt anytime
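The subprocess isolation relies on the same mechanism every shell uses for child processes; this illustration is generic shell, not whai's own code:

```shell
# A command executed in a child process cannot change the parent shell's
# working directory or environment -- which is why an approved command
# can't alter the shell you called whai from.
before=$(pwd)
sh -c 'cd /tmp && export HIJACKED=1'   # runs entirely in a subprocess
[ "$(pwd)" = "$before" ] && [ -z "${HIJACKED:-}" ] && echo "parent shell unchanged"
```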

FAQ

How is this different from X?

whai is integrated into your terminal with full context awareness. It sees your command history and can execute commands. Most terminal assistants either require you to explicitly start a REPL loop, which takes you out of your usual workflow, don't support roles, or don't let you mix natural-language conversation with shell execution. I wanted something that's flexible, understands you, and is always ready to help while leaving you in control.

Does it send my terminal history to the LLM?

Only when you run whai. It captures recent history (the last 50 commands) or tmux scrollback (commands + output) and includes it in the request. If you use a remote API model, it will see your recent terminal history. Note that command output is only available when running inside tmux. You can disable this with the --no-context flag.

Can I use it with local models?

Yes! Configure any LiteLLM-compatible provider, including Ollama for local models. See the configuration section above.
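For Ollama specifically, a config entry might look like the sketch below. It follows the [llm.openai] pattern shown earlier and LiteLLM's ollama/ model-name convention, but the exact table and key names are assumptions; run whai --interactive-config to see the supported fields:

```toml
[llm]
default_provider = "ollama"

[llm.ollama]
api_base = "http://localhost:11434"   # Ollama's default local endpoint
default_model = "ollama/llama3"       # hypothetical model name; use one you have pulled
```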

Acknowledgments

Built with LiteLLM for multi-provider support, Typer for the CLI, and Rich for pretty terminal output.
