NatShell


Natural language shell interface for Linux, macOS, and WSL — a local-first agentic TUI powered by a bundled LLM.

Type requests in plain English and NatShell plans and executes shell commands to fulfill them, using a ReAct-style agent loop with a bundled local model via llama.cpp. It also supports optional remote inference via Ollama or any OpenAI-compatible API.

Install

From PyPI

pip install natshell              # Remote/Ollama mode (no C++ compiler needed)
pip install "natshell[local]"     # Includes llama-cpp-python for local inference

From source (recommended for GPU acceleration)

git clone https://github.com/Barent/natshell.git && cd natshell
bash install.sh

The installer handles everything — Python venv, GPU detection (Vulkan/Metal/CPU), llama.cpp build, model download, and Ollama configuration. No sudo is required. Missing system dependencies (C++ compiler, clipboard tools, Vulkan headers, etc.) are detected, and the installer offers to install them automatically.

Model options during install:

Preset       Model               Size     Best for
Light        Qwen3-4B (Q4_K_M)   ~2.5 GB  Low-RAM systems, fast responses
Standard     Qwen3-8B (Q4_K_M)   ~5 GB    Better reasoning and code quality
Both         4B + 8B             ~7.5 GB  Switch between them at runtime
Remote only  Ollama server       0 GB     Offload to a remote machine

The 8B model is significantly more capable for multi-step tasks, code editing, and complex reasoning. Choose Standard if your system has at least 8 GB RAM (or a GPU with 6+ GB VRAM).

Development setup

git clone https://github.com/Barent/natshell.git && cd natshell
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
pip install llama-cpp-python                # CPU-only
# CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python --no-cache-dir  # Vulkan (Linux)
# CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python --no-cache-dir   # Metal (macOS)
natshell

Usage

natshell                          # Launch with defaults (local model)
natshell --model ./my-model.gguf  # Use a specific GGUF model
natshell --remote http://host:11434/v1 --remote-model qwen3:4b  # Use Ollama/remote API
natshell --download               # Download the default model and exit
natshell --update                 # Self-update from git and reinstall
natshell --config path/to/config.toml  # Custom config file
natshell --verbose                # Enable debug logging
natshell --headless "list files"  # Single-shot non-interactive mode (stdout pipeable)
natshell --headless --danger-fast "deploy" # Headless with auto-approve confirmations
natshell --mcp                    # Start as MCP server (stdin/stdout JSON-RPC)

Features

Agent Loop

NatShell uses a ReAct-style agent loop — the model reasons about your request, calls tools (shell commands, file operations, etc.), observes results, and iterates until the task is complete. Up to 15 tool calls per request by default.
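In outline, the loop looks like the sketch below. This is illustrative Python, not NatShell's actual agent/loop.py; llm, reply, and tools are hypothetical stand-ins, and the real loop adds safety classification and confirmation prompts around each tool call.

MAX_STEPS = 15  # default tool-call budget per request

def react_loop(llm, tools, messages):
    # Illustrative ReAct pattern: reason, act, observe, repeat.
    for _ in range(MAX_STEPS):
        reply = llm(messages)                  # model reasons about the request
        if not reply.tool_calls:               # no tool call: the task is done
            return reply.text
        for call in reply.tool_calls:
            result = tools[call.name](**call.arguments)   # act
            messages.append({"role": "tool",              # observe
                             "name": call.name,
                             "content": result})
    return "Stopped: tool-call budget exhausted."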

Inference Backends

  • Local: Bundled llama.cpp via llama-cpp-python. Two model tiers: Qwen3-4B (~2.5 GB, light) and Qwen3-8B (~5 GB, standard). Selected during install, auto-downloaded on first run.
  • Remote: Any OpenAI-compatible API — Ollama, vLLM, LM Studio, etc.
  • Fallback: If the remote server is unreachable, NatShell automatically falls back to the local model.
  • Runtime switching: Switch models on the fly with /model commands without restarting.
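The fallback follows the familiar try-remote-then-local pattern. A rough sketch, assuming an OpenAI-style /chat/completions endpoint and httpx (which the remote backend uses per the Architecture section); this is not NatShell's actual code:

import httpx

def complete(messages, remote_url, remote_model, local_engine):
    # Try the remote OpenAI-compatible endpoint; fall back to local on failure.
    try:
        r = httpx.post(f"{remote_url}/chat/completions",
                       json={"model": remote_model, "messages": messages},
                       timeout=10.0)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]
    except httpx.HTTPError:
        return local_engine(messages)  # remote unreachable: use the bundled model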

GPU Acceleration

  • Auto-detects GPUs via vulkaninfo, nvidia-smi, and lspci
  • Prefers discrete GPUs over integrated on multi-GPU systems
  • Supports Vulkan (Linux/AMD/NVIDIA), Metal (macOS), and CPU fallback
  • Prints helpful reinstall instructions if GPU support is missing
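Detection amounts to probing for the tools above and inspecting their output. A simplified, Linux-oriented sketch (the real gpu.py also parses lspci output and prefers discrete GPUs):

import shutil, subprocess

def detect_backend() -> str:
    # Rough probe: report Vulkan if a Vulkan-capable device is visible, else CPU.
    if shutil.which("vulkaninfo"):
        out = subprocess.run(["vulkaninfo", "--summary"],
                             capture_output=True, text=True).stdout
        if "deviceName" in out:
            return "vulkan"
    if shutil.which("nvidia-smi"):
        return "vulkan"  # NVIDIA driver present; the Vulkan build applies on Linux
    return "cpu"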

Tools

The agent has access to 9 tools:

  • execute_shell — Run any shell command via bash
  • read_file — Read file contents
  • write_file — Write or append to files (always requires confirmation)
  • edit_file — Targeted search-and-replace edits (always requires confirmation)
  • run_code — Execute code snippets in 10 languages (Python, JS, Bash, Ruby, Perl, PHP, C, C++, Rust, Go)
  • list_directory — List directory contents with sizes and types
  • search_files — Search file contents (grep) or find files by name
  • git_tool — Structured git operations (status, diff, log, branch, commit, stash)
  • natshell_help — Look up NatShell documentation by topic

TUI Commands

Command                Description
/help                  Show available commands
/clear                 Clear chat and model context
/cmd <command>         Execute a shell command directly (bypasses AI, respects safety)
/model                 Show current engine and model info
/model list            List models available on the remote server
/model use <name>      Switch to a remote model
/model switch          Switch local GGUF model (opens command palette)
/model local           Switch back to local model
/model default <name>  Save default remote model to config
/compact               Summarize conversation to free context window space
/plan <description>    Generate a step-by-step plan (PLAN.md) from natural language
/exeplan run PLAN.md   Execute a previously generated plan
/undo                  Undo the last file edit (restores from backup)
/save [name]           Save current conversation to a session file
/load <id>             Load a saved conversation session
/sessions              List all saved sessions
/keys                  Show keyboard shortcuts
/history               Show conversation message count

Keyboard Shortcuts

Key     Action
Ctrl+C  Quit
Ctrl+E  Copy entire chat to clipboard
Ctrl+L  Clear chat
Ctrl+P  Command palette (model switching)
Ctrl+Y  Copy selected text

Backup & Undo

Every file edit creates a timestamped backup in ~/.local/share/natshell/backups/. Use /undo to restore the most recent edit. Backups are pruned to 10 per file by default.
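The mechanism is a copy made before each edit. A hedged sketch of the idea; the file-naming scheme here is an assumption, not NatShell's exact format:

import shutil, time
from pathlib import Path

BACKUP_DIR = Path.home() / ".local/share/natshell/backups"
MAX_PER_FILE = 10  # default prune limit

def backup(path: Path) -> Path:
    # Copy the file aside with a timestamp before editing, pruning old copies.
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = BACKUP_DIR / f"{path.name}.{stamp}"
    shutil.copy2(path, dest)
    for stale in sorted(BACKUP_DIR.glob(f"{path.name}.*"))[:-MAX_PER_FILE]:
        stale.unlink()  # keep only the newest MAX_PER_FILE backups
    return dest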

Session Persistence

Save and restore conversations with /save, /load, and /sessions. Sessions are stored as JSON in ~/.local/share/natshell/sessions/.

Headless Mode

Run NatShell non-interactively with --headless "prompt". Response text goes to stdout (pipeable), everything else to stderr. Use --danger-fast to auto-approve confirmations.

MCP Server

Run NatShell as an MCP (Model Context Protocol) server with --mcp. Exposes all tools via JSON-RPC over stdin/stdout for integration with other AI tools.
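MCP messages are JSON-RPC 2.0 over the process's stdin/stdout. A hypothetical smoke test that asks the server for its tool list (method names follow the MCP spec; a conforming client performs an initialize handshake first, omitted here for brevity):

import json, subprocess

proc = subprocess.Popen(["natshell", "--mcp"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
print(proc.stdout.readline())  # JSON-RPC response listing the 9 tools
proc.terminate()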

Plugin System

Extend NatShell with custom tools by placing Python files in ~/.config/natshell/plugins/. Each plugin defines a register() function that receives the tool registry.
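A minimal plugin might look like the sketch below. The registry.register(...) signature shown is an assumption based on the description above; check tools/registry.py for the actual API.

# ~/.config/natshell/plugins/uptime_plugin.py  (hypothetical example)
import subprocess

def uptime_tool() -> str:
    # Report system uptime and load averages via the uptime command.
    return subprocess.run(["uptime"], capture_output=True, text=True).stdout

def register(registry):
    registry.register(
        name="uptime",
        description="Report system uptime and load averages",
        func=uptime_tool,
    )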

Prompt Caching

System prompt tokens are cached across requests to reduce latency on local inference. Cache is invalidated when the system prompt changes.
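Conceptually, the cache is keyed on the prompt itself. A toy sketch of the invalidation rule (not the actual KV-cache handling inside llama.cpp):

import hashlib

_cache = {"digest": None, "state": None}

def cached_prefix(system_prompt: str, evaluate):
    # Reuse the evaluated system-prompt state until the prompt changes.
    digest = hashlib.sha256(system_prompt.encode()).hexdigest()
    if _cache["digest"] != digest:   # prompt changed: invalidate
        _cache["digest"] = digest
        _cache["state"] = evaluate(system_prompt)
    return _cache["state"]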

Diff Preview

File edits show a unified diff preview in the confirmation dialog, making it easier to review changes before approving.

Safety

Commands are classified into three risk levels by a fast, deterministic regex-based classifier:

  • Safe — auto-executed (ls, cat, df, grep, etc.)
  • Confirm — requires user approval (rm, sudo, apt install, docker rm, iptables, etc.)
  • Blocked — never executed (fork bombs, rm -rf /, destructive dd/mkfs to disks, etc.)
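A stripped-down version of such a classifier is sketched below; the patterns are illustrative, not NatShell's actual (configurable) rule set. Blocked patterns are checked against the whole command, while confirm patterns are checked per chained sub-command, mirroring the splitting behavior described next.

import re

BLOCKED = [r"rm\s+-rf\s+/\s*$", r":\(\)\s*\{\s*:\|:&\s*\};:"]     # rm -rf /, fork bomb
CONFIRM = [r"^\s*sudo\b", r"\brm\b", r"\bapt(-get)?\s+install\b"]
CHAIN = re.compile(r"&&|\|\||;|\||&")

def classify(command: str) -> str:
    # Whole-command check for blocked patterns, then per-part check for confirm.
    if any(re.search(p, command) for p in BLOCKED):
        return "blocked"
    parts = CHAIN.split(command)
    if any(re.search(p, part) for part in parts for p in CONFIRM):
        return "confirm"
    return "safe"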

Additional safety features:

  • Commands chained with &&, ||, ;, &, or | are split and each sub-command is classified independently
  • Subshell expressions ($(...)) and backtick expansions are flagged for confirmation
  • Sensitive file paths (SSH keys, /etc/shadow, .env) require confirmation for read_file
  • Sensitive environment variables (API keys, tokens, credentials) are filtered from subprocesses
  • Sudo passwords are cached for 5 minutes with automatic expiry
  • LLM output is escaped to prevent Rich markup injection in the TUI
  • API keys sent over plaintext HTTP trigger a warning

Safety modes are configurable: confirm (default), warn, or yolo. All patterns are customizable in config.

Configuration

Default configuration is bundled with the package. Copy it to ~/.config/natshell/config.toml to customize:

mkdir -p ~/.config/natshell
python -c "from pathlib import Path; import natshell; p = Path(natshell.__file__).parent / 'config.default.toml'; print(p.read_text())" > ~/.config/natshell/config.toml

Or if installed from source, copy src/natshell/config.default.toml directly.

Sections

  • [model] — GGUF path, HuggingFace repo/file for auto-download, context size (0 = auto-detect from model), GPU layers, device selection
  • [remote] — URL, model name, API key for OpenAI-compatible endpoints
  • [ollama] — Ollama server URL and default model (used by /model list and /model use)
  • [agent] — max steps (15), temperature (0.3), max tokens (2048)
  • [safety] — mode, confirmation regex patterns, blocked regex patterns
  • [backup] — backup directory, max backups per file
  • [mcp] — MCP server safety mode
  • [ui] — theme (dark/light)

Environment Variables

  • NATSHELL_API_KEY — API key for remote inference (alternative to storing in config file)

Cross-Platform Support

Feature          Linux                                  macOS                                        WSL
Shell execution  bash                                   bash                                         bash
GPU              Vulkan                                 Metal                                        Vulkan
Clipboard        wl-copy, xclip, xsel                   pbcopy                                       clip.exe
Package manager  apt, dnf, pacman, zypper, apk, emerge  brew                                         apt
System context   lscpu, free, ip, systemctl             sw_vers, sysctl, vm_stat, ifconfig           lscpu, free, ip
Safety patterns  Linux + generic                        macOS-specific (brew, launchctl, diskutil)   Linux + generic

The clipboard integration auto-detects the best backend and falls back to OSC52 terminal escape sequences for remote/VM sessions.
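OSC52 works by asking the terminal emulator itself to set the clipboard, so it survives SSH and VM boundaries. The escape sequence is standardized; a minimal Python sketch:

import base64, sys

def osc52_copy(text: str) -> None:
    # Ask the terminal to place text on the clipboard via OSC 52.
    payload = base64.b64encode(text.encode()).decode()
    sys.stdout.write(f"\x1b]52;c;{payload}\x07")  # ESC ] 52 ; c ; <base64> BEL
    sys.stdout.flush()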

Architecture

src/natshell/
├── __main__.py              # CLI entry point, model download, engine wiring
├── app.py                   # Textual TUI application
├── backup.py                # Pre-edit backup system with undo support
├── commands.py              # Slash command dispatch (refactored from app.py)
├── config.py                # TOML config loading with env var support
├── config.default.toml      # Bundled default configuration
├── gpu.py                   # GPU detection (vulkaninfo/nvidia-smi/lspci)
├── headless.py              # Non-interactive single-shot CLI mode
├── mcp_server.py            # MCP server (JSON-RPC over stdin/stdout)
├── model_manager.py         # Model discovery, download, and switching
├── platform.py              # Platform detection (Linux/macOS/WSL)
├── plugins.py               # Plugin system for custom tools
├── session.py               # Conversation session persistence
├── agent/
│   ├── loop.py              # ReAct agent loop with safety checks
│   ├── system_prompt.py     # Platform-aware system prompt builder
│   ├── context.py           # System info gathering (CPU, RAM, disk, network, etc.)
│   ├── context_manager.py   # Conversation context window management
│   ├── plan.py              # Plan generation and markdown parsing
│   └── plan_executor.py     # Step-by-step plan execution engine
├── inference/
│   ├── engine.py            # Inference engine protocol + CompletionResult types
│   ├── local.py             # llama-cpp-python backend with GPU support
│   ├── remote.py            # OpenAI-compatible API backend (httpx)
│   └── ollama.py            # Ollama server discovery and model listing
├── safety/
│   └── classifier.py        # Regex-based command risk classifier
├── tools/
│   ├── registry.py          # Tool registration and dispatch
│   ├── execute_shell.py     # Shell execution with sudo, env filtering, truncation
│   ├── read_file.py         # File reading
│   ├── write_file.py        # File writing
│   ├── edit_file.py         # Targeted search-and-replace edits
│   ├── run_code.py          # Code execution in 10 languages
│   ├── list_directory.py    # Directory listing
│   ├── search_files.py      # Text/file search
│   ├── git_tool.py          # Structured git operations
│   ├── limits.py            # Context-aware output truncation limits
│   └── natshell_help.py     # Self-documentation by topic
└── ui/
    ├── widgets.py           # TUI widgets (messages, command blocks, modals)
    ├── commands.py          # Command palette providers
    ├── clipboard.py         # Cross-platform clipboard integration
    ├── escape.py            # Rich markup escaping utilities
    └── styles.tcss          # Textual CSS stylesheet

Development

source .venv/bin/activate
pytest                    # Run tests (669 tests)
ruff check src/ tests/    # Lint

License

MIT


