A multi-model AI assistant framework powered by Google ADK
Project description
Multi-Model AI Assistant
ADKBot is a powerful, multi-model AI assistant framework built on Google's Agent Development Kit (ADK) with LiteLLM for universal model support. ADKBot is an ADK-native project, built from the ground up to leverage ADK's agent architecture while preserving and extending a rich tooling and channel ecosystem.
- Use any LLM provider (NVIDIA NIM, Gemini, Groq, OpenRouter, Anthropic, OpenAI, xAI, Ollama, and 50+ more) through a single unified interface.
- Connect to 12+ chat platforms (Telegram, Discord, WhatsApp, Slack, WeChat, and more).
- Equipped with 10+ built-in tools (web search, file operations, shell commands, scheduled tasks, MCP support, and sub-agent spawning).
Key Features
- ADK-Powered: Built on Google's Agent Development Kit for robust agent lifecycle management, native callbacks, and session handling.
- Multi-Model: LiteLLM integration means you can use Claude, GPT, Gemini, DeepSeek, Llama, and 50+ other models without changing code.
- Rich Tooling: Web search (5 providers), file operations, shell execution, cron scheduling, MCP protocol support, and sub-agent spawning.
- 12+ Chat Channels: Telegram, Discord, WhatsApp, WeChat, Feishu, DingTalk, Slack, Matrix, Email, QQ, WeCom, and Mochat.
- Scheduled Tasks: Cron expressions, interval timers, and one-time scheduling with timezone support.
- Security: Workspace sandboxing, command safety guards, SSRF protection, and per-channel access control.
- Easy to Use: One command to set up, one command to chat.
Table of Contents
- Key Features
- Install
- Quick Start
- Chat Apps
- Configuration
- Multiple Instances
- CLI Reference
- Python SDK
- OpenAI-Compatible API
- Docker
- Linux Service
- Project Structure
- Contributing
Install
With uv (recommended, fast):
uv tool install adkbot
With pip:
pip install adkbot
Install from source (for development)
git clone https://github.com/nwokike/ADKbot.git
cd ADKbot
uv venv
# Windows: .venv\Scripts\activate
# macOS/Linux: source .venv/bin/activate
uv sync --all-extras
Install on Termux (Android)
Python packages with native dependencies can cause build issues inside raw Termux. Use proot-distro to run a proper Linux distribution inside Termux instead.
# Install proot-distro
pkg update && pkg upgrade
pkg install proot-distro
# Install and log into Ubuntu
proot-distro install ubuntu
proot-distro login ubuntu
# Inside Ubuntu: install uv
apt update && apt upgrade -y
apt install curl -y
curl -LsSf https://astral.sh/uv/install.sh | sh
source ~/.profile
# Install ADKBot
uv tool install adkbot
Update to latest version
uv:
uv tool upgrade adkbot
adkbot --version
pip:
pip install -U adkbot
adkbot --version
Requirements
- Python >= 3.11
- An API key from any supported LLM provider
Quick Start
[!TIP] Get API keys:
- NVIDIA NIM (recommended, completely free, massive open-weight model catalog)
- Google Gemini (free tier available, best ADK integration)
- Groq (fastest inference, free tier)
- OpenRouter (access to many models via one key)
- Anthropic (Claude Opus 4.6)
- OpenAI (GPT 5.4)
- xAI (Grok 4.20)
API keys can be set as environment variables (e.g., `NVIDIA_NIM_API_KEY=nvapi-xxx`) or entered during the wizard. For web search setup, see Web Search.
1. Initialize
adkbot onboard
This starts the interactive wizard by default. Use adkbot onboard --skip-wizard to create a basic config without the wizard.
2. Configure (~/.adkbot/config.json)
Configure your model using a LiteLLM model string:
{
"agents": {
"defaults": {
"model": "nvidia_nim/nvidia/nemotron-3-super-120b-a12b"
}
}
}
LiteLLM model strings work with 100+ providers. Examples:
"nvidia_nim/nvidia/nemotron-3-super-120b-a12b"- NVIDIA NIM (usesNVIDIA_NIM_API_KEY, free)"nvidia_nim/moonshotai/kimi-k2-instruct-0905"- Kimi K2 via NVIDIA NIM (free)"gemini/gemini-3.1-pro-preview"- Google Gemini (usesGEMINI_API_KEY)"groq/llama-3.3-70b-versatile"- Groq (usesGROQ_API_KEY)"anthropic/claude-opus-4-6"- Anthropic Claude (usesANTHROPIC_API_KEY)"openai/gpt-5.4"- OpenAI (usesOPENAI_API_KEY)"openrouter/anthropic/claude-opus-4-6"- OpenRouter gateway (usesOPENROUTER_API_KEY)"xai/grok-4.20-beta-0309-reasoning"- xAI Grok (usesGROK_API_KEY)"deepseek/deepseek-chat"- DeepSeek (usesDEEPSEEK_API_KEY)"ollama/llama3.2"- Local Ollama (no API key needed)
Set your API key as an environment variable (e.g., NVIDIA_NIM_API_KEY=nvapi-xxx) or enter it during the wizard.
Why NVIDIA NIM?
NVIDIA NIM is our recommended default for new users because:
- Completely free with no credit card required
- Hosts hundreds of top open-weight models (Nemotron, Llama 4, Kimi K2, Mistral, Gemma, and more)
- Runs on NVIDIA's own Hopper GPU infrastructure so inference is fast
- Works with LiteLLM out of the box using the `nvidia_nim/` prefix
Popular NVIDIA NIM models:
| Model | String | Best for |
|---|---|---|
| Nemotron 3 Super 120B | `nvidia_nim/nvidia/nemotron-3-super-120b-a12b` | General reasoning, coding |
| Kimi K2 Instruct | `nvidia_nim/moonshotai/kimi-k2-instruct-0905` | Long context, complex tasks |
| Llama 4 Scout 17B | `nvidia_nim/meta/llama-4-scout-17b-16e-instruct` | Fast text generation |
| Gemma 4 27B | `nvidia_nim/google/gemma-4-27b-it` | Lightweight general tasks |
Sign up at build.nvidia.com and grab your free API key.
Provider comparison at a glance
| Provider | Free tier | Speed | Model variety | Best for |
|---|---|---|---|---|
| NVIDIA NIM | Yes (completely free) | Fast | Hundreds of open-weight models | Default choice, coding, reasoning |
| Google Gemini | Yes (generous limits) | Fast | Gemini family only | Native ADK integration, huge context |
| Groq | Yes (rate limited) | Fastest | Llama, Mixtral | Low-latency chat |
| OpenRouter | No (pay per token) | Varies | 200+ models from all providers | Access to everything via one key |
| Anthropic | No | Medium | Claude family only | Complex writing, analysis |
| OpenAI | No | Medium | GPT family only | Broad compatibility |
| xAI | Limited | Fast | Grok family only | Reasoning, code |
| Ollama | Yes (local) | Hardware dependent | Any GGUF model | Privacy, offline use |
3. Chat
adkbot agent
That's it! You have a working AI assistant in 2 minutes.
Chat Apps
Connect ADKBot to your favorite chat platform. Want to build your own? See the Channel Plugin Guide.
| Channel | What you need |
|---|---|
| Telegram | Bot token from @BotFather |
| Discord | Bot token + Message Content intent |
| WhatsApp | QR code scan (`adkbot channels login whatsapp`) |
| WeChat (Weixin) | QR code scan (`adkbot channels login weixin`) |
| Feishu | App ID + App Secret |
| DingTalk | App Key + App Secret |
| Slack | Bot token + App-Level token |
| Matrix | Homeserver URL + Access token |
| Email | IMAP/SMTP credentials |
| QQ | App ID + App Secret |
| WeCom | Bot ID + Bot Secret |
| Mochat | Claw token (auto-setup available) |
Telegram (Recommended)
1. Create a bot
- Open Telegram, search `@BotFather`
- Send `/newbot`, follow prompts
- Copy the token
2. Configure
{
"channels": {
"telegram": {
"enabled": true,
"token": "YOUR_BOT_TOKEN",
"allowFrom": ["YOUR_USER_ID"]
}
}
}
You can find your User ID in Telegram settings. Copy it without the `@` symbol.
3. Run
adkbot gateway
Discord
1. Create a bot
- Go to https://discord.com/developers/applications
- Create an application → Bot → Add Bot
- Copy the bot token
2. Enable intents
- In Bot settings, enable MESSAGE CONTENT INTENT
3. Get your User ID
- Discord Settings → Advanced → enable Developer Mode
- Right-click your avatar → Copy User ID
4. Configure
{
"channels": {
"discord": {
"enabled": true,
"token": "YOUR_BOT_TOKEN",
"allowFrom": ["YOUR_USER_ID"],
"groupPolicy": "mention"
}
}
}
`groupPolicy`: `"mention"` (default: respond when @mentioned), `"open"` (respond to all messages).
5. Invite the bot
- OAuth2 → URL Generator
- Scopes: `bot`
- Bot Permissions: `Send Messages`, `Read Message History`
- Open the generated invite URL and add the bot to your server
6. Run
adkbot gateway
WhatsApp
Requires Node.js ≥ 18.
1. Link device
adkbot channels login whatsapp
# Scan QR with WhatsApp → Settings → Linked Devices
2. Configure
{
"channels": {
"whatsapp": {
"enabled": true,
"allowFrom": ["+1234567890"]
}
}
}
3. Run
adkbot gateway
Matrix (Element)
Install Matrix dependencies first:
pip install "adkbot[matrix]"
1. Create/choose a Matrix account
Create or reuse a Matrix account on your homeserver (for example matrix.org).
2. Get credentials
You need:
- `userId` (example: `@adkbot:matrix.org`)
- `accessToken`
- `deviceId` (recommended so sync tokens can be restored across restarts)
3. Configure
{
"channels": {
"matrix": {
"enabled": true,
"homeserver": "https://matrix.org",
"userId": "@adkbot:matrix.org",
"accessToken": "syt_xxx",
"deviceId": "ADKBOT01",
"e2eeEnabled": true,
"allowFrom": ["@your_user:matrix.org"],
"groupPolicy": "open"
}
}
}
| Option | Description |
|---|---|
| `allowFrom` | User IDs allowed to interact. Empty denies all; use `["*"]` to allow everyone. |
| `groupPolicy` | `open` (default), `mention`, or `allowlist`. |
| `e2eeEnabled` | E2EE support (default `true`). Set `false` for plaintext-only. |
4. Run
adkbot gateway
Mochat (Claw IM)
Uses Socket.IO WebSocket by default, with HTTP polling fallback.
1. Ask ADKBot to set up Mochat for you
Simply send this message to ADKBot:
Read https://raw.githubusercontent.com/nwokike/MoChat/refs/heads/main/skills/adkbot/skill.md and register on MoChat. My email account is onyeka@kiri.ng. Bind me as your owner and DM me on MoChat.
2. Restart gateway
adkbot gateway
Manual configuration (advanced)
{
"channels": {
"mochat": {
"enabled": true,
"base_url": "https://mochat.io",
"socket_url": "https://mochat.io",
"socket_path": "/socket.io",
"claw_token": "claw_xxx",
"agent_user_id": "6982abcdef",
"sessions": ["*"],
"panels": ["*"]
}
}
}
Feishu
Uses a WebSocket long connection; no public IP required.
1. Create a Feishu bot
- Visit Feishu Open Platform
- Create a new app → Enable Bot capability
- Permissions: `im:message`, `im:message.p2p_msg:readonly`, `cardkit:card:write`
- Events: Add `im.message.receive_v1` → Select Long Connection mode
- Get App ID and App Secret
- Publish the app
2. Configure
{
"channels": {
"feishu": {
"enabled": true,
"appId": "cli_xxx",
"appSecret": "xxx",
"allowFrom": ["ou_YOUR_OPEN_ID"],
"groupPolicy": "mention",
"streaming": true
}
}
}
3. Run
adkbot gateway
DingTalk (钉钉)
Uses Stream Mode; no public IP required.
1. Create a DingTalk bot
- Visit DingTalk Open Platform
- Create a new app → Add Robot capability → Toggle Stream Mode ON
- Get AppKey and AppSecret
2. Configure
{
"channels": {
"dingtalk": {
"enabled": true,
"clientId": "YOUR_APP_KEY",
"clientSecret": "YOUR_APP_SECRET",
"allowFrom": ["YOUR_STAFF_ID"]
}
}
}
3. Run
adkbot gateway
Slack
Uses Socket Mode; no public URL required.
1. Create a Slack app
- Go to Slack API → Create New App → "From scratch"
2. Configure the app
- Socket Mode: Toggle ON → Generate an App-Level Token with `connections:write` scope → copy it (`xapp-...`)
- OAuth & Permissions: Add bot scopes: `chat:write`, `reactions:write`, `app_mentions:read`
- Event Subscriptions: Toggle ON → Subscribe to: `message.im`, `message.channels`, `app_mention`
- App Home: Enable Messages Tab → Check "Allow users to send Slash commands and messages from the messages tab"
- Install App: Click Install to Workspace → copy the Bot Token (`xoxb-...`)
3. Configure ADKBot
{
"channels": {
"slack": {
"enabled": true,
"botToken": "xoxb-...",
"appToken": "xapp-...",
"allowFrom": ["YOUR_SLACK_USER_ID"],
"groupPolicy": "mention"
}
}
}
4. Run
adkbot gateway
Email
Give ADKBot its own email account. It polls IMAP for incoming mail and replies via SMTP.
1. Get credentials (Gmail example)
- Create a dedicated Gmail account (e.g. my-adkbot@gmail.com)
- Enable 2-Step Verification → Create an App Password
2. Configure
{
"channels": {
"email": {
"enabled": true,
"consentGranted": true,
"imapHost": "imap.gmail.com",
"imapPort": 993,
"imapUsername": "my-adkbot@gmail.com",
"imapPassword": "your-app-password",
"smtpHost": "smtp.gmail.com",
"smtpPort": 587,
"smtpUsername": "my-adkbot@gmail.com",
"smtpPassword": "your-app-password",
"fromAddress": "my-adkbot@gmail.com",
"allowFrom": ["your-real-email@gmail.com"]
}
}
}
3. Run
adkbot gateway
QQ
Uses the botpy SDK with WebSocket; no public IP required. Currently supports private messages only.
1. Register & create bot
- Visit QQ Open Platform → Create a new bot application
- Copy AppID and AppSecret
2. Configure
{
"channels": {
"qq": {
"enabled": true,
"appId": "YOUR_APP_ID",
"secret": "YOUR_APP_SECRET",
"allowFrom": ["YOUR_OPENID"]
}
}
}
3. Run
adkbot gateway
WeChat (微信 / Weixin)
Uses HTTP long-poll with QR-code login.
1. Install with WeChat support
pip install "adkbot[weixin]"
2. Configure
{
"channels": {
"weixin": {
"enabled": true,
"allowFrom": ["YOUR_WECHAT_USER_ID"]
}
}
}
3. Login
adkbot channels login weixin
4. Run
adkbot gateway
WeCom (企业微信)
Uses a WebSocket long connection; no public IP required.
1. Install
pip install "adkbot[wecom]"
2. Configure
{
"channels": {
"wecom": {
"enabled": true,
"botId": "your_bot_id",
"secret": "your_bot_secret",
"allowFrom": ["your_id"]
}
}
}
3. Run
adkbot gateway
Configuration
Config file: ~/.adkbot/config.json
Model Configuration
ADKBot uses LiteLLM under the hood, which means it supports 100+ LLM providers through a unified interface. Simply specify the model using a LiteLLM model string.
[!TIP]
- Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
- For local models, use `ollama` or `vllm` model strings.
| Provider | LiteLLM Model String | API Key Environment Variable |
|---|---|---|
| Google Gemini | `gemini/gemini-3.1-pro-preview` | `GEMINI_API_KEY` |
| NVIDIA NIM | `nvidia_nim/nvidia/nemotron-3-super-120b-a12b` | `NVIDIA_NIM_API_KEY` |
| Groq | `groq/llama-3.3-70b-versatile` | `GROQ_API_KEY` |
| Anthropic Claude | `anthropic/claude-opus-4-6` | `ANTHROPIC_API_KEY` |
| OpenAI | `openai/gpt-5.4` | `OPENAI_API_KEY` |
| OpenRouter | `openrouter/anthropic/claude-opus-4-6` | `OPENROUTER_API_KEY` |
| xAI (Grok) | `xai/grok-4.20-beta-0309-reasoning` | `GROK_API_KEY` |
| DeepSeek | `deepseek/deepseek-chat` | `DEEPSEEK_API_KEY` |
| Ollama (local) | `ollama/llama3.2` | None |
| vLLM (local) | `openai/meta-llama/Llama-3.1-8B-Instruct` + `apiBase` | Any (e.g., `dummy`) |
Gemini is a first-class citizen. ADKBot uses Google ADK natively, so Gemini models get the best possible integration.
LiteLLM Model String Format
LiteLLM uses the format provider/model-name or just model-name for native providers:
{
"agents": {
"defaults": {
"model": "gemini/gemini-3.1-pro-preview",
"apiKey": "",
"apiBase": null
}
}
}
- `model`: LiteLLM model string (e.g., `gemini/gemini-3.1-pro-preview`, `nvidia_nim/nvidia/nemotron-3-super-120b-a12b`)
- `apiKey`: Optional API key. If empty, uses environment variables
- `apiBase`: Optional custom API base URL (for self-hosted endpoints)
Examples:
"gemini/gemini-3.1-pro-preview"- Google Gemini (usesGEMINI_API_KEY)"nvidia_nim/nvidia/nemotron-3-super-120b-a12b"- NVIDIA NIM (usesNVIDIA_NIM_API_KEY)"anthropic/claude-opus-4-6"- Anthropic Claude (usesANTHROPIC_API_KEY)"ollama/llama3.2"- Local Ollama (no API key needed)
For 100+ providers, see: https://docs.litellm.ai/docs/providers
Ollama (local)
Run a local model with Ollama:
1. Start Ollama:
ollama run llama3.2
2. Add to config:
{
"agents": {
"defaults": {
"model": "ollama/llama3.2"
}
}
}
vLLM (local / OpenAI-compatible)
Run your own model with vLLM or any OpenAI-compatible server:
1. Start the server:
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
2. Add to config:
{
"agents": {
"defaults": {
"model": "openai/meta-llama/Llama-3.1-8B-Instruct",
"apiKey": "dummy",
"apiBase": "http://localhost:8000/v1"
}
}
}
For local servers that don't require a key, set `apiKey` to any non-empty string (e.g., `"dummy"`).
Custom/OpenAI-compatible Endpoint
Connect to any OpenAI-compatible endpoint (LM Studio, llama.cpp, Together AI, Fireworks):
{
"agents": {
"defaults": {
"model": "openai/your-model-name",
"apiKey": "your-api-key",
"apiBase": "https://api.your-provider.com/v1"
}
}
}
Channel Settings
Global settings that apply to all channels:
{
"channels": {
"sendProgress": true,
"sendToolHints": false,
"sendMaxRetries": 3,
"telegram": { "..." : "..." }
}
}
| Setting | Default | Description |
|---|---|---|
| `sendProgress` | `true` | Stream the agent's text progress to the channel |
| `sendToolHints` | `false` | Stream tool-call hints (e.g. `read_file("…")`) |
| `sendMaxRetries` | `3` | Max delivery attempts per outbound message |
Web Search
[!TIP] Use `proxy` in `tools.web` to route all web requests through a proxy:
`{ "tools": { "web": { "proxy": "http://127.0.0.1:7890" } } }`
ADKBot supports multiple web search providers. Configure in ~/.adkbot/config.json under tools.web.search.
| Provider | Config fields | Env var fallback | Free |
|---|---|---|---|
| `brave` (default) | `apiKey` | `BRAVE_API_KEY` | No |
| `tavily` | `apiKey` | `TAVILY_API_KEY` | No |
| `jina` | `apiKey` | `JINA_API_KEY` | Free tier (10M tokens) |
| `searxng` | `baseUrl` | `SEARXNG_BASE_URL` | Yes (self-hosted) |
| `duckduckgo` | - | - | Yes |
When credentials are missing, ADKBot automatically falls back to DuckDuckGo.
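That fallback rule can be expressed as a small selection function. The sketch below assumes each provider's required field matches the table above; it illustrates the documented behaviour rather than ADKBot's actual implementation:

```python
# Required config field per provider, per the table above.
REQUIRED_FIELD = {
    "brave": "apiKey",
    "tavily": "apiKey",
    "jina": "apiKey",
    "searxng": "baseUrl",
    "duckduckgo": None,  # no credentials needed
}

def effective_provider(search_cfg: dict) -> str:
    """Return the provider actually used: the configured one if its
    required credential is present, otherwise duckduckgo."""
    provider = search_cfg.get("provider", "brave")
    field = REQUIRED_FIELD.get(provider)
    if field and not search_cfg.get(field):
        return "duckduckgo"
    return provider
```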
Search provider examples
Brave (default):
{
"tools": { "web": { "search": { "provider": "brave", "apiKey": "BSA..." } } }
}
Tavily:
{
"tools": { "web": { "search": { "provider": "tavily", "apiKey": "tvly-..." } } }
}
DuckDuckGo (zero config):
{
"tools": { "web": { "search": { "provider": "duckduckgo" } } }
}
MCP (Model Context Protocol)
[!TIP] The config format is compatible with Claude Desktop / Cursor. You can copy MCP server configs directly from any MCP server's README.
ADKBot supports MCP: connect external tool servers and use them as native agent tools.
{
"tools": {
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
},
"my-remote-mcp": {
"url": "https://example.com/mcp/",
"headers": { "Authorization": "Bearer xxxxx" }
}
}
}
}
| Mode | Config | Example |
|---|---|---|
| Stdio | `command` + `args` | Local process via `npx` / `uvx` |
| HTTP | `url` + `headers` (optional) | Remote endpoint |
MCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools; no extra configuration needed.
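The two transport modes are distinguished purely by which keys a server entry contains. A hypothetical classifier matching the table above:

```python
def mcp_transport(server_cfg: dict) -> str:
    """Classify an MCP server entry by its keys: "command" implies a
    stdio subprocess, "url" implies a remote HTTP endpoint.
    Illustrative helper, not ADKBot's actual dispatch code."""
    if "command" in server_cfg:
        return "stdio"
    if "url" in server_cfg:
        return "http"
    raise ValueError("MCP server config needs 'command' or 'url'")
```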
Security
[!TIP] For production deployments, set `"restrictToWorkspace": true` to sandbox the agent.
| Option | Default | Description |
|---|---|---|
| `tools.restrictToWorkspace` | `false` | Restricts all tools to the workspace directory |
| `tools.exec.enable` | `true` | When `false`, disables shell command execution entirely |
| `tools.exec.pathAppend` | `""` | Extra directories to append to `PATH` for shell commands |
| `channels.*.allowFrom` | `[]` (deny all) | Whitelist of user IDs. Use `["*"]` to allow everyone |
Timezone
By default, ADKBot uses UTC. Set agents.defaults.timezone to your local timezone:
{
"agents": {
"defaults": {
"timezone": "Asia/Shanghai"
}
}
}
This affects runtime time context, cron schedule defaults, and one-shot `at` times.
Common examples: UTC, America/New_York, America/Los_Angeles, Europe/London, Asia/Tokyo, Asia/Shanghai.
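Timezone values must be valid IANA names. You can sanity-check one with Python's standard library before editing the config; `ZoneInfo` raises `ZoneInfoNotFoundError` for an invalid name:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Raises zoneinfo.ZoneInfoNotFoundError for an invalid IANA name,
# so constructing the object doubles as validation.
tz = ZoneInfo("Asia/Shanghai")
print(datetime.now(tz).isoformat())
```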
Multiple Instances
Run multiple ADKBot instances simultaneously with separate configs and runtime data.
Quick Start
# Create separate instance configs
adkbot onboard --config ~/.adkbot-telegram/config.json --workspace ~/.adkbot-telegram/workspace
adkbot onboard --config ~/.adkbot-discord/config.json --workspace ~/.adkbot-discord/workspace
Run instances:
# Instance A - Telegram bot
adkbot gateway --config ~/.adkbot-telegram/config.json
# Instance B - Discord bot
adkbot gateway --config ~/.adkbot-discord/config.json
Path Resolution
| Component | Resolved From | Example |
|---|---|---|
| Config | `--config` path | `~/.adkbot-A/config.json` |
| Workspace | `--workspace` or config | `~/.adkbot-A/workspace/` |
| Cron Jobs | config directory | `~/.adkbot-A/cron/` |
| Media / state | config directory | `~/.adkbot-A/media/` |
Notes
- Each instance must use a different port if they run concurrently
- Use a different workspace per instance for isolated memory and sessions
- `--workspace` overrides the workspace defined in the config file
CLI Reference
Common Commands
| Command | Description |
|---|---|
| `adkbot onboard` | Initialize config & workspace at `~/.adkbot/` |
| `adkbot onboard --wizard` | Launch the interactive onboarding wizard |
| `adkbot agent -m "..."` | Chat with the agent |
| `adkbot agent` | Interactive chat mode |
| `adkbot gateway` | Start the gateway (connects to chat channels) |
| `adkbot status` | Show status |
| `adkbot channels login <channel>` | Authenticate a channel interactively |
Interactive mode exits: `exit`, `quit`, `/exit`, `/quit`, `:q`, or Ctrl+D.
For a full list of commands and options, see the Comprehensive CLI Reference.
Heartbeat (Periodic Tasks)
The gateway wakes up every 30 minutes and checks HEARTBEAT.md in your workspace (~/.adkbot/workspace/HEARTBEAT.md). If the file has tasks, the agent executes them and delivers results to your most recently active chat channel.
Setup: edit ~/.adkbot/workspace/HEARTBEAT.md:
## Periodic Tasks
- [ ] Check weather forecast and send a summary
- [ ] Scan inbox for urgent emails
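The checklist is plain markdown, so tasks are easy to generate or inspect programmatically. A hypothetical parser that pulls out the unchecked items (the gateway's real parser may handle more syntax):

```python
def parse_heartbeat(text: str) -> list[str]:
    """Extract unchecked markdown tasks ("- [ ] ...") from a
    HEARTBEAT.md body, skipping completed ("- [x]") items."""
    tasks = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("- [ ]"):
            tasks.append(stripped[len("- [ ]"):].strip())
    return tasks
```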
The agent can also manage this file itself โ ask it to "add a periodic task" and it will update HEARTBEAT.md for you.
Note: The gateway must be running (`adkbot gateway`) and you must have chatted with the bot at least once.
Python SDK
Use ADKBot as a library: no CLI, no gateway, just Python.
from adkbot import AdkBot
bot = AdkBot.from_config()
result = await bot.run("Summarize the README")
print(result.content)
Each call carries a `session_id` for conversation isolation; different IDs get independent history:
await bot.run("hi", session_id="user-alice")
await bot.run("hi", session_id="task-42")
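Conceptually, each `session_id` keys its own message history. A toy illustration of that isolation (ADKBot's real sessions live in the `session/` package and persist across runs):

```python
from collections import defaultdict

class SessionStore:
    """Toy model of per-session history isolation: messages appended
    under one session_id never leak into another."""

    def __init__(self) -> None:
        self._history: dict[str, list[str]] = defaultdict(list)

    def append(self, session_id: str, message: str) -> None:
        self._history[session_id].append(message)

    def history(self, session_id: str) -> list[str]:
        return list(self._history[session_id])
```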
ADKBot uses ADK's native callback system for lifecycle hooks:
# Callbacks are configured via the Agent's before/after hooks
# See adkbot/agent/callbacks.py for the full callback API
OpenAI-Compatible API
ADKBot can expose a minimal OpenAI-compatible endpoint for local integrations:
pip install "adkbot[api]"
adkbot serve
By default, the API binds to 127.0.0.1:8900.
Endpoints
- `GET /health`
- `GET /v1/models`
- `POST /v1/chat/completions`
curl
curl http://127.0.0.1:8900/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"messages": [{"role": "user", "content": "hi"}],
"session_id": "my-session"
}'
Python (openai)
from openai import OpenAI
client = OpenAI(
base_url="http://127.0.0.1:8900/v1",
api_key="dummy",
)
resp = client.chat.completions.create(
model="adkbot",
messages=[{"role": "user", "content": "hi"}],
extra_body={"session_id": "my-session"},
)
print(resp.choices[0].message.content)
Docker
[!TIP] The `-v ~/.adkbot:/root/.adkbot` flag mounts your local config directory into the container for persistence.
Docker Compose
docker compose run --rm adkbot-cli onboard # first-time setup
vim ~/.adkbot/config.json # add API keys
docker compose up -d adkbot-gateway # start gateway
docker compose run --rm adkbot-cli agent -m "Hello!" # run CLI
docker compose logs -f adkbot-gateway # view logs
docker compose down # stop
Docker
# Build the image
docker build -t adkbot .
# Initialize config (first time only)
docker run -v ~/.adkbot:/root/.adkbot --rm adkbot onboard
# Edit config on host to add API keys
vim ~/.adkbot/config.json
# Run gateway
docker run -v ~/.adkbot:/root/.adkbot -p 18790:18790 adkbot gateway
# Or run a single command
docker run -v ~/.adkbot:/root/.adkbot --rm adkbot agent -m "Hello!"
Linux Service
You can run the gateway in the background and start it automatically at boot using the built-in systemd installer.
1. Install and start the system service:
adkbot install-service
Note: To keep the gateway running after you log out of SSH, enable user lingering:
loginctl enable-linger $USER
Common operations:
systemctl --user status adkbot-gateway # check status
systemctl --user restart adkbot-gateway # restart after config changes
journalctl --user -u adkbot-gateway -f # follow logs
Project Structure
adkbot/
├── agent/          # Core agent (ADK Agent + Runner)
│   ├── callbacks.py   # ADK lifecycle callbacks
│   ├── context.py     # Prompt builder
│   ├── memory.py      # Persistent memory
│   ├── skills.py      # Skills loader
│   ├── subagent.py    # Background task execution
│   └── tools/         # Built-in tools (10+ tools)
├── adkbot.py       # Main AdkBot class (ADK Agent + Runner + LiteLLM)
├── skills/         # Bundled skills
├── channels/       # Chat channel integrations (12+ channels)
├── bus/            # Message routing
├── cron/           # Scheduled tasks
├── heartbeat/      # Proactive wake-up
├── session/        # Conversation sessions
├── config/         # Configuration
├── security/       # Safety guards & SSRF protection
└── cli/            # CLI commands
Contributing
PRs welcome! The codebase is intentionally readable and well-structured.
Roadmap:
- Multi-modal: see and hear (images, voice, video)
- Long-term memory: never forget important context
- Better reasoning: multi-step planning and reflection
- More integrations: Calendar, GitHub, and more
- ADK Web UI: built-in web interface via `adk web`
- Self-improvement: learn from feedback and mistakes
By Kiri Research Labs
Inspired by OpenClaw
ADKBot is for educational, research, and technical exchange purposes only
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file adkbot-0.1.0.tar.gz.
File metadata
- Download URL: adkbot-0.1.0.tar.gz
- Upload date:
- Size: 216.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `72246eb886a1b6d332a198b09a1cb3b679972fd80c4ea8517482506bd01aa080` |
| MD5 | `33b045b94252464362e059e6eb52232c` |
| BLAKE2b-256 | `c98720fc349f4fd32d6b8c6f453c65bc29408bdabb10a9bdf1ad0052d830f0ce` |
Provenance
The following attestation bundles were made for adkbot-0.1.0.tar.gz:
Publisher: publish.yml on Nwokike/ADKbot
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: adkbot-0.1.0.tar.gz
- Subject digest: 72246eb886a1b6d332a198b09a1cb3b679972fd80c4ea8517482506bd01aa080
- Sigstore transparency entry: 1262895580
- Sigstore integration time:
- Permalink: Nwokike/ADKbot@ab99e8ac05b16f16dd61cdd62016ad19b32400a2
- Branch / Tag: refs/tags/v0.1.0
- Owner: https://github.com/Nwokike
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@ab99e8ac05b16f16dd61cdd62016ad19b32400a2
- Trigger Event: release
File details
Details for the file adkbot-0.1.0-py3-none-any.whl.
File metadata
- Download URL: adkbot-0.1.0-py3-none-any.whl
- Upload date:
- Size: 249.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `14c83e429c40b68abedc829a7b0e351aa9a29346b95e1df210858d6835c4b0f5` |
| MD5 | `eb92c770443cb9206f5575091e6b302f` |
| BLAKE2b-256 | `a840e7143b2c3229e34c8f23f65bf08bae6c7ea4e8fd6dadee237e706e3dd063` |
Provenance
The following attestation bundles were made for adkbot-0.1.0-py3-none-any.whl:
Publisher: publish.yml on Nwokike/ADKbot
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: adkbot-0.1.0-py3-none-any.whl
- Subject digest: 14c83e429c40b68abedc829a7b0e351aa9a29346b95e1df210858d6835c4b0f5
- Sigstore transparency entry: 1262895591
- Sigstore integration time:
- Permalink: Nwokike/ADKbot@ab99e8ac05b16f16dd61cdd62016ad19b32400a2
- Branch / Tag: refs/tags/v0.1.0
- Owner: https://github.com/Nwokike
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@ab99e8ac05b16f16dd61cdd62016ad19b32400a2
- Trigger Event: release