Agent Mini
Ultra-lightweight personal AI agent — inspired by nanobot, built lean.
- ~3,500 lines of Python (core agent)
- 4 LLM providers: Ollama, Gemini, GitHub Copilot, Local (any OpenAI-compatible)
- 1 chat channel: Telegram
- Built-in tools: shell, files, web search & browse, persistent memory
- Zero-framework: pure httpx + asyncio — no LangChain, no LiteLLM
- Optimized for small models — token-aware context, tool call repair, model-tier tuning
- Free web search — DuckDuckGo scraping, no API key required
- Streaming — real-time token output from all providers
- Session persistence — resume conversations across restarts
- Plugin system — extend with custom tools
- Vision support — send images to multi-modal models
- Token tracking — per-turn and per-session usage stats
Quick Start
1. Install
# From PyPI
pip install agent-mini
# Or with Telegram support
pip install agent-mini[all]
# Or from source (for development)
git clone https://github.com/mohsinkaleem/agent-mini.git
cd agent-mini
uv sync --extra all
2. Initialise
agent-mini init
This creates ~/.agent-mini/config.json — edit it to set your provider and keys.
3. Chat
# Interactive mode
agent-mini chat
# Single message
agent-mini chat -m "What's the weather in London?"
# Resume a previous session
agent-mini chat -s 20260307_143022
4. Gateway (Telegram)
agent-mini gateway
Providers
Set "provider" in config to one of these, then configure its section under "providers":
| Provider | Description | Streaming | Vision |
|---|---|---|---|
| Ollama | Local models via Ollama | ✅ | — |
| Gemini | Google Generative AI | ✅ | ✅ |
| GitHub Copilot | Copilot chat API (OAuth) | ✅ | ✅ |
| Local | Any OpenAI-compatible endpoint | ✅ | ✅ |
All providers support streaming responses and tool calling.
Ollama (default)
# Install & run Ollama: https://ollama.ai
ollama pull llama3.1
{
"provider": "ollama",
"providers": {
"ollama": {
"baseUrl": "http://localhost:11434",
"model": "llama3.1",
"think": false
}
}
}
think controls Ollama thinking mode:
- false (default): thinking off
- true: thinking on
- "low" | "medium" | "high": GPT-OSS thinking levels
References: Ollama thinking docs, Ollama blog post
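For example, assuming a pulled gpt-oss model (the model name here is illustrative), the thinking level can be raised in the Ollama provider section:

```json
{
  "provider": "ollama",
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434",
      "model": "gpt-oss",
      "think": "high"
    }
  }
}
```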
Gemini
Get an API key at aistudio.google.com.
{
"provider": "gemini",
"providers": {
"gemini": {
"apiKey": "AIza...",
"model": "gemini-2.0-flash"
}
}
}
GitHub Copilot
Requires a GitHub account with Copilot access.
# Interactive OAuth login
agent-mini login github_copilot
{
"provider": "github_copilot",
"providers": {
"github_copilot": {
"model": "gpt-4o"
}
}
}
Local (LM Studio / vLLM / llama.cpp)
Any server that implements the OpenAI chat completions API.
{
"provider": "local",
"providers": {
"local": {
"baseUrl": "http://localhost:8080/v1",
"apiKey": "no-key",
"model": "my-model"
}
}
}
Chat Channels
Telegram
- Create a bot via @BotFather → copy the token
- Get your User ID (e.g. send a message to @userinfobot)
- Configure:
{
"channels": {
"telegram": {
"enabled": true,
"token": "YOUR_BOT_TOKEN",
"allowFrom": ["YOUR_USER_ID"],
"streamResponses": true
}
}
}
- Run:
agent-mini gateway
streamResponses: true enables real-time streamed Telegram replies (works with all providers).
Tools
The agent has these built-in tools (all available out of the box — no API keys needed):
| Tool | Description |
|---|---|
| shell_exec | Run any shell command |
| read_file | Read file contents |
| append_file | Append content to a file |
| write_file | Create/overwrite files |
| code_edit | Targeted find-and-replace (safer than write_file) |
| list_directory | Browse the filesystem |
| search_files | Search text/regex across files (uses rg or grep) |
| web_search | Search the web via DuckDuckGo (free, no API key) |
| web_fetch | Fetch & read any web page as plain text |
| memory_store | Save info to persistent memory |
| memory_recall | Fuzzy search persistent memory (TF-IDF) |
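The memory_recall tool is described as TF-IDF fuzzy search; the scoring idea can be sketched in a few lines of stdlib Python (an illustration of the technique, not the project's actual implementation):

```python
import math
from collections import Counter

def tfidf_search(query: str, docs: list[str]) -> list[tuple[float, str]]:
    """Rank docs against a query with a simple TF-IDF dot-product score."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    # Document frequency: in how many docs does each term appear?
    df: Counter = Counter()
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks, doc in zip(tokenized, docs):
        tf = Counter(toks)
        score = sum(
            (tf[t] / len(toks)) * math.log((n + 1) / (1 + df[t]))
            for t in query.lower().split()
            if t in tf
        )
        scores.append((score, doc))
    return sorted(scores, key=lambda s: s[0], reverse=True)

memories = [
    "user prefers dark mode in the terminal",
    "user's favorite language is Python",
    "project deadline is next Friday",
]
best = tfidf_search("favorite Python language", memories)[0][1]
print(best)  # the Python memory ranks first
```

Terms that appear in fewer memories get a higher inverse-document-frequency weight, so distinctive words dominate the ranking.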
Web Search & Browsing
Web search uses DuckDuckGo HTML scraping — free with no API key needed. It works out of the box.
The agent can:
- Search — web_search("Python asyncio tutorial") returns titles, URLs, and snippets
- Read — web_fetch("https://example.com/article") fetches a page and extracts readable text
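The readable-text extraction behind web_fetch can be approximated with Python's built-in html.parser (a simplified sketch, not the project's actual code):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping the contents of script/style tags."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

html = ("<html><head><style>p{color:red}</style></head>"
        "<body><h1>Title</h1><p>Hello <b>world</b>.</p>"
        "<script>var x=1;</script></body></html>")
parser = TextExtractor()
parser.feed(html)
text = " ".join(parser.chunks)
print(text)  # Title Hello world .
```

No browser process is needed: the page body fetched over HTTP is fed straight into the parser.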
Vision
For providers supporting images (Gemini, Copilot, Local with vision models), include image paths or URLs in your message:
Describe what you see in /path/to/screenshot.png
What's in this image? https://example.com/photo.jpg
Images are automatically detected, encoded, and sent to the model's multi-modal API.
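Image detection could plausibly be a regex over common extensions; the pattern and function below are hypothetical illustrations, not the project's actual heuristics:

```python
import re

# Hypothetical: match local paths or URLs ending in an image extension.
IMAGE_RE = re.compile(
    r"(?:https?://\S+|/\S+)\.(?:png|jpe?g|gif|webp)\b", re.IGNORECASE
)

def find_images(message: str) -> list[str]:
    """Return every image-looking path or URL found in the message."""
    return IMAGE_RE.findall(message)

print(find_images("Describe /tmp/shot.png and https://example.com/photo.jpg please"))
```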
Slash Commands
| Command | Description |
|---|---|
| /clear | Reset conversation |
| /model <name> | Switch provider/model (e.g. gemini/gemini-2.0-flash) |
| /tools | List available tools with descriptions |
| /memory [query] | Browse or search stored memories |
| /status | Show current config, token usage |
| /save [file] | Export conversation as Markdown |
| /sessions | List saved sessions |
| /load <id> | Resume a saved session |
| /help | Show all commands |
Multi-Line Input
Start with """ or ''' and end with the same delimiter:
"""
def hello():
print("world")
"""
Or use \ for line continuation.
Sandbox Levels
Control tool access with tools.sandboxLevel in config:
| Level | Description |
|---|---|
| unrestricted | All tools, all paths |
| workspace | All tools, paths restricted to workspace (default) |
| readonly | Read-only — no shell, write_file, append_file, or code_edit |
{
"tools": {
"sandboxLevel": "readonly"
}
}
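The readonly level can be thought of as filtering write-capable tools out of the set offered to the model; this is an illustrative sketch, not the actual enforcement code:

```python
# Hypothetical set of tools a readonly sandbox would withhold.
WRITE_TOOLS = {"shell_exec", "write_file", "append_file", "code_edit"}

def allowed_tools(all_tools: list[str], level: str) -> list[str]:
    """Return the tool names exposed at a given sandbox level."""
    if level == "readonly":
        return [t for t in all_tools if t not in WRITE_TOOLS]
    # "unrestricted" and "workspace" expose every tool; "workspace"
    # additionally confines paths, which would be enforced elsewhere.
    return list(all_tools)

tools = ["shell_exec", "read_file", "write_file", "web_search"]
print(allowed_tools(tools, "readonly"))  # ['read_file', 'web_search']
```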
Command Blocklist
Dangerous shell commands are blocked by default (rm -rf, sudo, mkfs, etc.). Add custom patterns:
{
"tools": {
"blockedCommands": ["\\bcurl\\b", "\\bwget\\b"]
}
}
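Since blockedCommands entries are regex patterns, a command is presumably rejected if any pattern matches it. A minimal sketch of that check, mixing default-style patterns with the custom ones above:

```python
import re

# Illustrative blocklist: default-style dangerous patterns plus custom ones.
BLOCKED = [r"\brm\s+-rf\b", r"\bsudo\b", r"\bmkfs\b", r"\bcurl\b", r"\bwget\b"]

def is_blocked(command: str) -> bool:
    """True if any blocklist pattern matches anywhere in the command."""
    return any(re.search(p, command) for p in BLOCKED)

print(is_blocked("curl https://example.com"))  # True
print(is_blocked("rm -rf /"))                  # True
print(is_blocked("ls -la"))                    # False
```

Word boundaries (\b) keep the patterns from matching substrings of harmless commands (e.g. "curly").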
Sessions
Conversations are automatically saved after each turn. Resume with:
# List saved sessions
agent-mini chat
/sessions
# Resume by ID
agent-mini chat -s 20260307_143022
# Or inside the REPL
/load 20260307_143022
Sessions are stored in ~/.agent-mini/sessions/.
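A session file might look something like this (a hypothetical layout for illustration; the actual schema may differ):

```json
{
  "id": "20260307_143022",
  "created": "2026-03-07T14:30:22Z",
  "messages": [
    {"role": "user", "content": "What's the weather in London?"},
    {"role": "assistant", "content": "..."}
  ]
}
```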
Plugins
Extend the agent with custom tools by placing Python files in ~/.agent-mini/plugins/.
Each plugin file must export:
- TOOL_DEF — an OpenAI function-calling tool definition dict
- handler — an async (or sync) function that takes arguments: dict and returns a string
Example plugin (~/.agent-mini/plugins/timestamp.py):
TOOL_DEF = {
"type": "function",
"function": {
"name": "get_timestamp",
"description": "Get the current UTC timestamp.",
"parameters": {"type": "object", "properties": {}, "required": []},
},
}
async def handler(arguments: dict) -> str:
from datetime import datetime, timezone
return datetime.now(timezone.utc).isoformat()
Plugins are discovered at startup and available alongside built-in tools.
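Discovery of this kind is commonly done with importlib; the loader below is a self-contained sketch of the idea (not the project's actual loader):

```python
import importlib.util
import tempfile
from pathlib import Path

def load_plugins(plugin_dir: Path) -> dict:
    """Import each .py file and collect its (TOOL_DEF, handler) pair."""
    tools = {}
    for path in sorted(plugin_dir.glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        if hasattr(mod, "TOOL_DEF") and hasattr(mod, "handler"):
            name = mod.TOOL_DEF["function"]["name"]
            tools[name] = (mod.TOOL_DEF, mod.handler)
    return tools

# Demo: write a tiny plugin to a temp directory and load it.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "echo.py").write_text(
        'TOOL_DEF = {"type": "function", "function": {"name": "echo", '
        '"description": "Echo args.", "parameters": '
        '{"type": "object", "properties": {}, "required": []}}}\n'
        "def handler(arguments):\n"
        "    return str(arguments)\n"
    )
    tools = load_plugins(Path(d))
    print(sorted(tools))  # ['echo']
```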
Token Tracking
Token usage is displayed after each response (when reported by the provider):
tokens: 1250→ 380← (1630 total)
Use /status to see cumulative session totals.
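When a provider reports no usage, counts are presumably estimated. A common heuristic (an assumption here, not necessarily what token_estimator.py does) is roughly four characters per token for English text:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

print(estimate_tokens("Hello, world!"))  # 13 chars -> 3
```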
Configuration Reference
Full config with all options:
{
"provider": "ollama",
"providers": {
"ollama": {
"baseUrl": "http://localhost:11434",
"model": "llama3.1",
"think": false
},
"gemini": {
"apiKey": "",
"model": "gemini-2.0-flash"
},
"github_copilot": {
"token": "",
"model": "gpt-4o"
},
"local": {
"baseUrl": "http://localhost:8080/v1",
"apiKey": "no-key",
"model": "local-model"
}
},
"agent": {
"maxIterations": 20,
"temperature": 0.7,
"systemPrompt": ""
},
"channels": {
"telegram": {
"enabled": false,
"token": "",
"allowFrom": [],
"streamResponses": true
}
},
"tools": {
"restrictToWorkspace": false,
"sandboxLevel": "workspace",
"blockedCommands": []
},
"memory": {
"enabled": true,
"maxEntries": 1000
},
"workspace": "~/.agent-mini/workspace"
}
Development
# Install dev dependencies
uv sync --extra dev
# Run tests
uv run pytest tests/ -v
# Lint
uv run ruff check src/ tests/
# Run a single test file
uv run pytest tests/test_loop.py -v
See CONTRIBUTING.md for guidelines on submitting PRs.
Note: web fetching uses a lightweight pure-httpx approach — no browser process, no Playwright/Selenium, no headless Chrome. HTML is parsed via Python's built-in html.parser and converted to clean readable text.
Security
Set "restrictToWorkspace": true to sandbox file and shell operations to the workspace directory:
{
"tools": {
"restrictToWorkspace": true
}
}
CLI Reference
| Command | Description |
|---|---|
| agent-mini init | Create config and workspace |
| agent-mini chat | Interactive chat |
| agent-mini chat -m "..." | Single message |
| agent-mini gateway | Start Telegram gateway |
| agent-mini login github_copilot | OAuth login for GitHub Copilot |
| agent-mini status | Show config status |
Project Structure
agent-mini/
├── src/agent_mini/
│ ├── cli.py # CLI commands
│ ├── config.py # Config loading
│ ├── bus.py # Message routing
│ ├── sessions.py # Session persistence
│ ├── agent/
│ │ ├── loop.py # Core ReAct agent loop
│ │ ├── context.py # System prompt builder
│ │ ├── memory.py # Persistent JSON memory + TF-IDF search
│ │ ├── tools.py # Built-in tools + plugin loader
│ │ ├── token_estimator.py # Token counting + model tier classification
│ │ └── vision.py # Image detection + encoding
│ ├── providers/
│ │ ├── base.py # Provider interface + tool call repair
│ │ ├── ollama.py # Ollama
│ │ ├── gemini.py # Google Gemini
│ │ ├── github_copilot.py # GitHub Copilot
│ │ └── local.py # OpenAI-compatible
│ └── channels/
│ ├── base.py # Channel interface
│ └── telegram.py # Telegram bot
├── tests/
├── pyproject.toml
└── config.example.json
Configuration
Full config lives at ~/.agent-mini/config.json. See config.example.json for all options.
Key paths:
- Config: ~/.agent-mini/config.json
- Workspace: ~/.agent-mini/workspace/
- Memory: ~/.agent-mini/memory.json
License
MIT