AI Bot for Webex with thread context and MCP support

Webex Bot - AI Assistant

A conversational AI bot for Webex that:

  • 🤖 Responds to user mentions with AI-generated answers
  • 💬 Maintains conversation context within threads
  • 🔧 Integrates with MCP (Model Context Protocol) servers for extended capabilities
  • 🌐 Supports multiple LLM providers via LiteLLM (OpenAI, Ollama, OpenRouter, Anthropic, etc.)

Features

  • Thread-aware conversations: The bot remembers context within Webex threads, allowing natural follow-up questions
  • Smart mention handling: The bot recognizes mentions of its own name and strips the name out, so it is not treated as part of the question
  • Multiple LLM providers: Use OpenAI, Ollama (local/cloud), OpenRouter, Anthropic, or any LiteLLM-supported provider
  • MCP Integration: Connect to multiple MCP servers via HTTP for extended tool capabilities
  • Access control: Restrict bot to approved users, domains, or rooms
  • Clean code: Follows Python best practices with ruff linting and formatting

Quick Start

Prerequisites

  • Python 3.11+
  • UV package manager
  • Webex bot token (create one at developer.webex.com)
  • API key for your LLM provider (e.g., OpenAI)

Installation

Option 1: Install from PyPI

Run the package directly from PyPI using UVX:

uvx webex-bot-ai

Option 2: Local Development Setup

For development or running from source:

  1. Clone the repository and install dependencies:
git clone https://github.com/mhajder/webex-bot-ai.git
cd webex-bot-ai
uv sync

Configuration

  1. Configure environment:
cp .env.example .env
# Edit .env with your configuration
  2. Set required variables in .env:
WEBEX_ACCESS_TOKEN=your_webex_bot_token
OPENAI_API_KEY=your_openai_api_key
  3. Run the bot:
webex-bot-ai

Configuration Reference

Bot Settings

Variable | Description | Default
-------- | ----------- | -------
WEBEX_ACCESS_TOKEN | Webex bot access token (required) | -
BOT_NAME | Bot name for mention handling | Assistant
BOT_DISPLAY_NAME | Display name in Webex | AI Assistant

LLM Settings

Variable | Description | Default
-------- | ----------- | -------
LLM_MODEL | LiteLLM model identifier | gpt-4o-mini
LLM_TEMPERATURE | Sampling temperature (0.0-2.0) | 0.7
LLM_MAX_TOKENS | Maximum response tokens | 2048
LLM_API_BASE | Custom API endpoint | -
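As an illustration of how these variables could feed a LiteLLM call, here is a hedged, stdlib-only sketch: the helper name `llm_kwargs_from_env` is hypothetical (not taken from the bot's source), but the variable names and defaults mirror the table above.

```python
import os

# Map the LLM_* environment variables from the table onto keyword arguments
# for a LiteLLM completion call. Defaults mirror the table; this helper is
# an illustrative sketch, not part of webex-bot-ai itself.
def llm_kwargs_from_env(env=os.environ):
    kwargs = {
        "model": env.get("LLM_MODEL", "gpt-4o-mini"),
        "temperature": float(env.get("LLM_TEMPERATURE", "0.7")),
        "max_tokens": int(env.get("LLM_MAX_TOKENS", "2048")),
    }
    api_base = env.get("LLM_API_BASE")
    if api_base:  # only set for custom endpoints, e.g. a local Ollama server
        kwargs["api_base"] = api_base
    return kwargs

# The resulting kwargs would then be passed to LiteLLM, e.g.:
#   litellm.completion(messages=[...], **llm_kwargs_from_env())
```
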

Model Examples

# OpenAI
LLM_MODEL=gpt-4o-mini
OPENAI_API_KEY=sk-...

# Ollama (local)
LLM_MODEL=ollama_chat/gpt-oss:120b
LLM_API_BASE=http://localhost:11434

# OpenRouter
LLM_MODEL=openrouter/meta-llama/llama-3.1-70b-instruct
OPENROUTER_API_KEY=sk-or-...

# Anthropic
LLM_MODEL=claude-3-sonnet-20240229
ANTHROPIC_API_KEY=sk-ant-...

Access Control

# Restrict to specific users
WEBEX_APPROVED_USERS=user1@example.com,user2@example.com

# Restrict to specific email domains
WEBEX_APPROVED_DOMAINS=example.com

# Restrict to specific rooms
WEBEX_APPROVED_ROOMS=room_id_1,room_id_2
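The three lists above might be combined along these lines; a minimal sketch, assuming that an empty configuration allows everyone and that any single match (user, domain, or room) grants access. The function name and the OR semantics are assumptions, not taken from the bot's source.

```python
# Hypothetical access-control check for the WEBEX_APPROVED_* lists.
# Assumption: no lists configured means open access; otherwise any one
# match (exact email, email domain, or room id) is sufficient.
def is_allowed(email, room_id, approved_users=(), approved_domains=(), approved_rooms=()):
    if not (approved_users or approved_domains or approved_rooms):
        return True  # no restrictions configured
    if email in approved_users:
        return True
    if email.split("@")[-1] in approved_domains:
        return True
    if room_id in approved_rooms:
        return True
    return False
```
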

MCP Integration

Connect to MCP HTTP transport servers for extended tool capabilities:

MCP_ENABLED=true
MCP_REQUEST_TIMEOUT=30

# Single server
MCP_SERVERS=[{"name": "my-server", "url": "http://localhost:8000/mcp", "enabled": true}]

# Multiple servers with auth
MCP_SERVERS=[
  {"name": "tools-server", "url": "http://localhost:8000/mcp", "enabled": true},
  {"name": "secure-server", "url": "https://api.example.com/mcp", "auth_token": "your-token", "enabled": true}
]
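Since MCP_SERVERS is a JSON array, loading it reduces to a `json.loads` plus a filter on the `enabled` flag. The sketch below is illustrative (the function name and the default-to-enabled behaviour are assumptions); field names follow the examples above.

```python
import json

# Parse the MCP_SERVERS JSON array and keep only entries marked enabled.
# Treating a missing "enabled" key as true is an assumption for this sketch.
def load_mcp_servers(raw):
    servers = json.loads(raw)
    return [s for s in servers if s.get("enabled", True)]
```
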

Sentry Error Tracking (Optional)

Enable error tracking and performance monitoring with Sentry:

# Install with Sentry support
uv sync --extra sentry

Configure Sentry via environment variables:

Variable | Description | Default
-------- | ----------- | -------
SENTRY_DSN | Sentry DSN (enables Sentry when set) | -
SENTRY_TRACES_SAMPLE_RATE | Trace sampling rate (0.0-1.0) | 1.0
SENTRY_SEND_DEFAULT_PII | Include PII in events | true
SENTRY_ENVIRONMENT | Environment name (e.g., production) | -
SENTRY_RELEASE | Release/version identifier | Package version
SENTRY_PROFILE_SESSION_SAMPLE_RATE | Profile session sampling rate | 1.0
SENTRY_PROFILE_LIFECYCLE | Profile lifecycle mode | trace
SENTRY_ENABLE_LOGS | Enable logging integration | true

Example configuration:

# Enable Sentry error tracking
SENTRY_DSN=https://your-key@o12345.ingest.us.sentry.io/6789
SENTRY_ENVIRONMENT=production
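A minimal sketch of how these variables could be turned into `sentry_sdk.init()` keyword arguments, assuming the option names above match your installed sentry-sdk version (`sentry_sdk` itself is deliberately not imported here, and the helper name is hypothetical):

```python
import os

# Build sentry_sdk.init() kwargs from the SENTRY_* variables in the table.
# Returning None when no DSN is set mirrors the "enables Sentry when set"
# behaviour described above; everything else is an illustrative assumption.
def sentry_kwargs(env=os.environ):
    dsn = env.get("SENTRY_DSN")
    if not dsn:
        return None  # Sentry stays disabled without a DSN
    return {
        "dsn": dsn,
        "traces_sample_rate": float(env.get("SENTRY_TRACES_SAMPLE_RATE", "1.0")),
        "send_default_pii": env.get("SENTRY_SEND_DEFAULT_PII", "true").lower() == "true",
        "environment": env.get("SENTRY_ENVIRONMENT"),
    }

# Usage sketch:
#   kw = sentry_kwargs()
#   if kw:
#       sentry_sdk.init(**kw)
```
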

Usage

  1. Start a conversation: Mention the bot in a Webex space:

    @BotName What is AI?
    
  2. Follow-up in thread: Reply in the same thread for context-aware responses:

    @BotName Tell me more.
    
  3. The bot maintains context within the thread, so you can have natural conversations.
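Conceptually, thread-aware context amounts to keeping one message history per Webex thread id and replaying it to the LLM on each follow-up. The class below is an illustrative sketch of that idea, not the bot's actual implementation.

```python
from collections import defaultdict

# Minimal sketch of per-thread conversation memory: each Webex thread id
# maps to its own ordered message history. Names here are illustrative.
class ThreadMemory:
    def __init__(self):
        self._threads = defaultdict(list)

    def add(self, thread_id, role, text):
        self._threads[thread_id].append({"role": role, "content": text})

    def history(self, thread_id):
        # The full history for this thread would be sent to the LLM
        # so follow-up questions keep their context.
        return list(self._threads[thread_id])
```
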

Development

Code Quality

# Lint code
uv run ruff check src/

# Format code
uv run ruff format src/

# Fix linting issues
uv run ruff check src/ --fix

Adding New Features

  • Commands: Add new commands in src/commands/
  • MCP Tools: Connect to MCP servers via configuration
  • LLM Providers: Configure via LLM_MODEL using LiteLLM syntax

License

This project is licensed under the MIT License - see the LICENSE file for details.

Download files

Download the file for your platform.

Source Distribution

webex_bot_ai-0.2.0.tar.gz (157.3 kB)

Built Distribution

webex_bot_ai-0.2.0-py3-none-any.whl (31.1 kB)

File details

Details for the file webex_bot_ai-0.2.0.tar.gz.

File metadata

  • Download URL: webex_bot_ai-0.2.0.tar.gz
  • Upload date:
  • Size: 157.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for webex_bot_ai-0.2.0.tar.gz:

Algorithm | Hash digest
--------- | -----------
SHA256 | 01cd196cb0ebc35e1bd5b220fac9a19508f38683114f5d6ab922084946050df2
MD5 | 3ef68e7d36989f20930985c908514820
BLAKE2b-256 | d7a6ffc995a3d44cf26469bad121e463403a26ca08827950d20df12cfab862a7

Provenance

The following attestation bundles were made for webex_bot_ai-0.2.0.tar.gz:

Publisher: publish.yml on mhajder/webex-bot-ai

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file webex_bot_ai-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: webex_bot_ai-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 31.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for webex_bot_ai-0.2.0-py3-none-any.whl:

Algorithm | Hash digest
--------- | -----------
SHA256 | 51609d3f27161edcec8dc6697e27f11baae865c8494a5d88a6de6740e7076b31
MD5 | 466985e14ea172837c274c6fe582d10f
BLAKE2b-256 | b81c81bef854be8eb6cd2bb9b2f3d2344cb184584192af85dae47533948b8288

Provenance

The following attestation bundles were made for webex_bot_ai-0.2.0-py3-none-any.whl:

Publisher: publish.yml on mhajder/webex-bot-ai

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
