AI Bot for Webex with thread context and MCP support
Project description
Webex Bot - AI Assistant
A conversational AI bot for Webex that:
- 🤖 Responds to user mentions with AI-generated answers
- 💬 Maintains conversation context within threads
- 🔧 Integrates with MCP (Model Context Protocol) servers for extended capabilities
- 🌐 Supports multiple LLM providers via LiteLLM (OpenAI, Ollama, OpenRouter, Anthropic, etc.)
Features
- Thread-aware conversations: The bot remembers context within Webex threads, allowing natural follow-up questions
- Smart mention handling: The bot recognizes its own name in a mention and does not treat it as part of the question
- Multiple LLM providers: Use OpenAI, Ollama (local/cloud), OpenRouter, Anthropic, or any LiteLLM-supported provider
- MCP Integration: Connect to multiple MCP servers via HTTP for extended tool capabilities
- Access control: Restrict bot to approved users, domains, or rooms
- Clean code: Follows Python best practices with ruff linting and formatting
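The mention handling above can be pictured as stripping the bot's own @-mention before the text reaches the LLM. A minimal sketch (hypothetical helper, not the project's actual code):

```python
import re

def strip_mention(text: str, bot_name: str) -> str:
    """Remove a leading @-mention of the bot so only the question remains.

    Hypothetical illustration; the real bot's logic may differ.
    """
    pattern = rf"^@?{re.escape(bot_name)}[\s,:]*"
    return re.sub(pattern, "", text.strip(), flags=re.IGNORECASE)

strip_mention("@Assistant What is AI?", "Assistant")  # -> "What is AI?"
```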
Quick Start
Prerequisites
- Python 3.11+
- UV package manager
- Webex bot token (create one here)
- API key for your LLM provider (e.g., OpenAI)
Installation
Option 1: Install from PyPI
Run the package directly from PyPI using UVX:
```shell
uvx webex-bot-ai
```
Option 2: Local Development Setup
For development or running from source:
- Clone the repository and install dependencies:

```shell
git clone https://github.com/mhajder/webex-bot-ai.git
cd webex-bot-ai
uv sync
```
Configuration
- Configure environment:

```shell
cp .env.example .env
# Edit .env with your configuration
```

- Set required variables in `.env`:

```shell
WEBEX_ACCESS_TOKEN=your_webex_bot_token
OPENAI_API_KEY=your_openai_api_key
```

- Run the bot:

```shell
webex-bot-ai
```
Configuration
Bot Settings
| Variable | Description | Default |
|---|---|---|
| `WEBEX_ACCESS_TOKEN` | Webex bot access token (required) | - |
| `BOT_NAME` | Bot name for mention handling | `Assistant` |
| `BOT_DISPLAY_NAME` | Display name in Webex | `AI Assistant` |
LLM Settings
| Variable | Description | Default |
|---|---|---|
| `LLM_MODEL` | LiteLLM model identifier | `gpt-4o-mini` |
| `LLM_TEMPERATURE` | Sampling temperature (0.0-2.0) | `0.7` |
| `LLM_MAX_TOKENS` | Maximum response tokens | `2048` |
| `LLM_API_BASE` | Custom API endpoint | - |
Model Examples
```shell
# OpenAI
LLM_MODEL=gpt-4o-mini
OPENAI_API_KEY=sk-...

# Ollama (local)
LLM_MODEL=ollama_chat/gpt-oss:120b
LLM_API_BASE=http://localhost:11434

# OpenRouter
LLM_MODEL=openrouter/meta-llama/llama-3.1-70b-instruct
OPENROUTER_API_KEY=sk-or-...

# Anthropic
LLM_MODEL=claude-3-sonnet-20240229
ANTHROPIC_API_KEY=sk-ant-...
```
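However the provider is chosen, the LLM settings above end up as keyword arguments to a single LiteLLM-style completion call. A rough sketch of that translation (hypothetical helper; the real bot's configuration handling may differ):

```python
def build_completion_kwargs(env: dict) -> dict:
    """Translate env-style LLM settings into kwargs for a LiteLLM-style
    completion call. Hypothetical sketch, not the project's actual code."""
    kwargs = {
        "model": env.get("LLM_MODEL", "gpt-4o-mini"),
        "temperature": float(env.get("LLM_TEMPERATURE", "0.7")),
        "max_tokens": int(env.get("LLM_MAX_TOKENS", "2048")),
    }
    # Only pass a custom endpoint when one is configured (e.g. local Ollama).
    if env.get("LLM_API_BASE"):
        kwargs["api_base"] = env["LLM_API_BASE"]
    return kwargs

build_completion_kwargs({"LLM_MODEL": "ollama_chat/gpt-oss:120b",
                         "LLM_API_BASE": "http://localhost:11434"})
```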
Access Control
```shell
# Restrict to specific users
WEBEX_APPROVED_USERS=user1@example.com,user2@example.com

# Restrict to specific email domains
WEBEX_APPROVED_DOMAINS=example.com

# Restrict to specific rooms
WEBEX_APPROVED_ROOMS=room_id_1,room_id_2
```
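The three lists compose as an allow-list check: a message passes if it matches any configured list, and everything is allowed when no lists are set. A minimal sketch of that logic (hypothetical helper; the project's actual check may differ):

```python
def is_approved(email: str, room_id: str,
                users: set, domains: set, rooms: set) -> bool:
    """Allow a message if it matches any configured allow-list; when no
    lists are configured, allow everyone. Hypothetical sketch."""
    if not (users or domains or rooms):
        return True
    domain = email.rsplit("@", 1)[-1].lower()
    return (email.lower() in users
            or domain in domains
            or room_id in rooms)

is_approved("user1@example.com", "room_id_1",
            {"user1@example.com"}, set(), set())  # -> True
```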
MCP Integration
Connect to MCP HTTP transport servers for extended tool capabilities:
```shell
MCP_ENABLED=true
MCP_REQUEST_TIMEOUT=30

# Single server
MCP_SERVERS=[{"name": "my-server", "url": "http://localhost:8000/mcp", "enabled": true}]

# Multiple servers with auth
MCP_SERVERS=[
  {"name": "tools-server", "url": "http://localhost:8000/mcp", "enabled": true},
  {"name": "secure-server", "url": "https://api.example.com/mcp", "auth_token": "your-token", "enabled": true}
]
```
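Since `MCP_SERVERS` is a JSON array, it can be validated before startup with only the standard library. A sketch (field names mirror the examples above; the project likely does this via pydantic-settings instead):

```python
import json

def parse_mcp_servers(raw: str) -> list:
    """Parse the MCP_SERVERS JSON array and keep only enabled entries.
    Hypothetical sketch of the validation, not the bot's actual code."""
    servers = json.loads(raw)
    for server in servers:
        if "name" not in server or "url" not in server:
            raise ValueError(f"MCP server entry missing name/url: {server}")
    # Entries default to enabled unless explicitly disabled.
    return [s for s in servers if s.get("enabled", True)]

parse_mcp_servers('[{"name": "my-server", "url": "http://localhost:8000/mcp", "enabled": true}]')
```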
Sentry Error Tracking (Optional)
Enable error tracking and performance monitoring with Sentry:
```shell
# Install with Sentry support
uv sync --extra sentry
```
Configure Sentry via environment variables:
| Variable | Description | Default |
|---|---|---|
| `SENTRY_DSN` | Sentry DSN (enables Sentry when set) | - |
| `SENTRY_TRACES_SAMPLE_RATE` | Trace sampling rate (0.0-1.0) | `1.0` |
| `SENTRY_SEND_DEFAULT_PII` | Include PII in events | `true` |
| `SENTRY_ENVIRONMENT` | Environment name (e.g., `production`) | - |
| `SENTRY_RELEASE` | Release/version identifier | Package version |
| `SENTRY_PROFILE_SESSION_SAMPLE_RATE` | Profile session sampling rate | `1.0` |
| `SENTRY_PROFILE_LIFECYCLE` | Profile lifecycle mode | `trace` |
| `SENTRY_ENABLE_LOGS` | Enable logging integration | `true` |
Example configuration:
```shell
# Enable Sentry error tracking
SENTRY_DSN=https://your-key@o12345.ingest.us.sentry.io/6789
SENTRY_ENVIRONMENT=production
```
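Because Sentry only activates when `SENTRY_DSN` is set, the initialization boils down to building `sentry_sdk.init()` kwargs conditionally. A hypothetical sketch of how the settings above might be applied (not the project's actual code):

```python
def sentry_init_kwargs(env: dict):
    """Build kwargs for sentry_sdk.init(), or return None when Sentry is
    disabled (no DSN configured). Hypothetical sketch."""
    dsn = env.get("SENTRY_DSN")
    if not dsn:
        return None
    return {
        "dsn": dsn,
        "traces_sample_rate": float(env.get("SENTRY_TRACES_SAMPLE_RATE", "1.0")),
        "send_default_pii": env.get("SENTRY_SEND_DEFAULT_PII", "true") == "true",
        "environment": env.get("SENTRY_ENVIRONMENT"),
    }

sentry_init_kwargs({"SENTRY_DSN": "https://your-key@o12345.ingest.us.sentry.io/6789"})
```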
Usage
- Start a conversation: mention the bot in a Webex space: `@BotName What is AI?`
- Follow up in the thread: reply in the same thread for context-aware responses: `@BotName Tell me more.`

The bot maintains context within the thread, so you can have natural conversations.
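Thread context can be pictured as a per-thread message list keyed by the Webex thread ID, replayed to the LLM on each turn. A simplified sketch (not the bot's actual storage):

```python
from collections import defaultdict

# Per-thread history: thread_id -> list of {"role", "content"} messages.
histories = defaultdict(list)

def record_turn(thread_id: str, question: str, answer: str) -> list:
    """Append one user/assistant exchange and return the full context that
    would be sent to the LLM on the next turn. Simplified sketch."""
    histories[thread_id].append({"role": "user", "content": question})
    histories[thread_id].append({"role": "assistant", "content": answer})
    return histories[thread_id]

record_turn("demo-thread", "What is AI?", "AI is ...")
```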
Development
Code Quality
```shell
# Lint code
uv run ruff check src/

# Format code
uv run ruff format src/

# Fix linting issues
uv run ruff check src/ --fix
```
Adding New Features
- Commands: Add new commands in `src/commands/`
- MCP Tools: Connect to MCP servers via configuration
- LLM Providers: Configure via `LLM_MODEL` using LiteLLM syntax
Dependencies
- `webex_bot` - Webex bot framework
- `litellm` - Universal LLM API
- `fastmcp` - MCP client/server framework
- `pydantic-settings` - Configuration management
- `sentry-sdk` - Error tracking (optional)
License
This project is licensed under the MIT License - see the LICENSE file for details.
Project details
File details
Details for the file `webex_bot_ai-0.2.0.tar.gz`.

File metadata

- Download URL: webex_bot_ai-0.2.0.tar.gz
- Size: 157.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `01cd196cb0ebc35e1bd5b220fac9a19508f38683114f5d6ab922084946050df2` |
| MD5 | `3ef68e7d36989f20930985c908514820` |
| BLAKE2b-256 | `d7a6ffc995a3d44cf26469bad121e463403a26ca08827950d20df12cfab862a7` |
Provenance

The following attestation bundles were made for webex_bot_ai-0.2.0.tar.gz:

Publisher: publish.yml on mhajder/webex-bot-ai

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: webex_bot_ai-0.2.0.tar.gz
- Subject digest: 01cd196cb0ebc35e1bd5b220fac9a19508f38683114f5d6ab922084946050df2
- Sigstore transparency entry: 829119940
- Permalink: mhajder/webex-bot-ai@dbdcf244c16f717509bd0000cfcc7560c6dd5f6a
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/mhajder
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@dbdcf244c16f717509bd0000cfcc7560c6dd5f6a
- Trigger Event: push
File details
Details for the file `webex_bot_ai-0.2.0-py3-none-any.whl`.

File metadata

- Download URL: webex_bot_ai-0.2.0-py3-none-any.whl
- Size: 31.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `51609d3f27161edcec8dc6697e27f11baae865c8494a5d88a6de6740e7076b31` |
| MD5 | `466985e14ea172837c274c6fe582d10f` |
| BLAKE2b-256 | `b81c81bef854be8eb6cd2bb9b2f3d2344cb184584192af85dae47533948b8288` |
Provenance

The following attestation bundles were made for webex_bot_ai-0.2.0-py3-none-any.whl:

Publisher: publish.yml on mhajder/webex-bot-ai

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: webex_bot_ai-0.2.0-py3-none-any.whl
- Subject digest: 51609d3f27161edcec8dc6697e27f11baae865c8494a5d88a6de6740e7076b31
- Sigstore transparency entry: 829119957
- Permalink: mhajder/webex-bot-ai@dbdcf244c16f717509bd0000cfcc7560c6dd5f6a
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/mhajder
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@dbdcf244c16f717509bd0000cfcc7560c6dd5f6a
- Trigger Event: push