AI Bot for Webex with thread context and MCP support

Webex Bot - AI Assistant

A conversational AI bot for Webex that:

  • 🤖 Responds to user mentions with AI-generated answers
  • 💬 Maintains conversation context within threads
  • 🔧 Integrates with MCP (Model Context Protocol) servers for extended capabilities
  • 🌐 Supports multiple LLM providers via LiteLLM (OpenAI, Ollama, OpenRouter, Anthropic, etc.)

Features

  • Thread-aware conversations: The bot remembers context within Webex threads, allowing natural follow-up questions
  • Smart mention handling: The bot recognizes mentions of its own name and strips them out, so the name isn't treated as part of the question
  • Multiple LLM providers: Use OpenAI, Ollama (local/cloud), OpenRouter, Anthropic, or any LiteLLM-supported provider
  • MCP Integration: Connect to multiple MCP servers via HTTP for extended tool capabilities
  • Access control: Restrict bot to approved users, domains, or rooms
  • Clean code: Follows Python best practices with ruff linting and formatting

Quick Start

Prerequisites

  • Python 3.11+
  • uv package manager
  • Webex bot token (create one at developer.webex.com)
  • API key for your LLM provider (e.g., OpenAI)

Installation

Option 1: Install from PyPI

Run the package directly from PyPI using uvx:

uvx webex-bot-ai

Option 2: Local Development Setup

For development or running from source:

  1. Clone the repository and install dependencies:
git clone https://github.com/mhajder/webex-bot-ai.git
cd webex-bot-ai
uv sync

Configure and Run

  1. Configure environment:
cp .env.example .env
# Edit .env with your configuration
  2. Set required variables in .env:
WEBEX_ACCESS_TOKEN=your_webex_bot_token
OPENAI_API_KEY=your_openai_api_key
  3. Run the bot:
webex-bot-ai

Configuration

Bot Settings

Variable             Description                         Default
WEBEX_ACCESS_TOKEN   Webex bot access token (required)   -
BOT_NAME             Bot name for mention handling       Assistant
BOT_DISPLAY_NAME     Display name in Webex               AI Assistant

LLM Settings

Variable          Description                      Default
LLM_MODEL         LiteLLM model identifier         gpt-4o-mini
LLM_TEMPERATURE   Sampling temperature (0.0-2.0)   0.7
LLM_MAX_TOKENS    Maximum response tokens          2048
LLM_API_BASE      Custom API endpoint              -

Model Examples

# OpenAI
LLM_MODEL=gpt-4o-mini
OPENAI_API_KEY=sk-...

# Ollama (local)
LLM_MODEL=ollama_chat/gpt-oss:120b
LLM_API_BASE=http://localhost:11434

# OpenRouter
LLM_MODEL=openrouter/meta-llama/llama-3.1-70b-instruct
OPENROUTER_API_KEY=sk-or-...

# Anthropic
LLM_MODEL=claude-3-sonnet-20240229
ANTHROPIC_API_KEY=sk-ant-...
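These settings map onto a LiteLLM call roughly as follows. This is a sketch, assuming the bot forwards the environment values unchanged; `api_base` is LiteLLM's parameter for a custom endpoint, and the variable names come from the tables above:

```python
import os

# Example environment, as in the Ollama case above.
os.environ.update({
    "LLM_MODEL": "ollama_chat/gpt-oss:120b",
    "LLM_API_BASE": "http://localhost:11434",
})

# Build the keyword arguments a litellm.completion(...) call would receive.
kwargs = {
    "model": os.environ["LLM_MODEL"],
    "temperature": float(os.environ.get("LLM_TEMPERATURE", "0.7")),
    "max_tokens": int(os.environ.get("LLM_MAX_TOKENS", "2048")),
}
if os.environ.get("LLM_API_BASE"):
    kwargs["api_base"] = os.environ["LLM_API_BASE"]

# litellm.completion(**kwargs, messages=[...]) would then dispatch to the
# provider encoded in the model string (here, a local Ollama server).
print(kwargs["model"])
```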

Access Control

# Restrict to specific users
WEBEX_APPROVED_USERS=user1@example.com,user2@example.com

# Restrict to specific email domains
WEBEX_APPROVED_DOMAINS=example.com

# Restrict to specific rooms
WEBEX_APPROVED_ROOMS=room_id_1,room_id_2
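The three restriction lists combine as in this hypothetical check (a sketch of the likely semantics, not the bot's actual code): an empty list means no restriction for that dimension, and a message must pass every configured dimension.

```python
def is_authorized(email: str, room_id: str,
                  users: set[str], domains: set[str], rooms: set[str]) -> bool:
    """Return True if the sender passes every configured restriction.

    An empty set means that dimension is unrestricted.
    """
    if users and email not in users:
        return False
    if domains and email.split("@")[-1] not in domains:
        return False
    if rooms and room_id not in rooms:
        return False
    return True

# Approved user in an approved domain, rooms unrestricted:
print(is_authorized("user1@example.com", "room_id_1",
                    {"user1@example.com"}, {"example.com"}, set()))  # True
```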

MCP Integration

Connect to MCP HTTP transport servers for extended tool capabilities:

MCP_ENABLED=true
MCP_REQUEST_TIMEOUT=30

# Single server
MCP_SERVERS=[{"name": "my-server", "url": "http://localhost:8000/mcp", "enabled": true}]

# Multiple servers with auth
MCP_SERVERS=[
  {"name": "tools-server", "url": "http://localhost:8000/mcp", "enabled": true},
  {"name": "secure-server", "url": "https://api.example.com/mcp", "auth_token": "your-token", "enabled": true}
]
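MCP_SERVERS is a JSON array, so a config loader can parse it with the standard library. A minimal sketch of how the entries might be read and filtered (hypothetical; the field names match the examples above):

```python
import json
import os

# Simulate the environment from the multi-server example above.
os.environ["MCP_SERVERS"] = json.dumps([
    {"name": "tools-server", "url": "http://localhost:8000/mcp",
     "enabled": True},
    {"name": "secure-server", "url": "https://api.example.com/mcp",
     "auth_token": "your-token", "enabled": False},
])

# Parse the JSON array and keep only servers marked enabled.
servers = json.loads(os.environ["MCP_SERVERS"])
active = [s for s in servers if s.get("enabled", True)]

print([s["name"] for s in active])  # only enabled servers are connected
```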

Sentry Error Tracking (Optional)

Enable error tracking and performance monitoring with Sentry:

# Install with Sentry support
uv sync --extra sentry

Configure Sentry via environment variables:

Variable                             Description                            Default
SENTRY_DSN                           Sentry DSN (enables Sentry when set)   -
SENTRY_TRACES_SAMPLE_RATE            Trace sampling rate (0.0-1.0)          1.0
SENTRY_SEND_DEFAULT_PII              Include PII in events                  true
SENTRY_ENVIRONMENT                   Environment name (e.g., production)    -
SENTRY_RELEASE                       Release/version identifier             Package version
SENTRY_PROFILE_SESSION_SAMPLE_RATE   Profile session sampling rate          1.0
SENTRY_PROFILE_LIFECYCLE             Profile lifecycle mode                 trace
SENTRY_ENABLE_LOGS                   Enable logging integration             true

Example configuration:

# Enable Sentry error tracking
SENTRY_DSN=https://your-key@o12345.ingest.us.sentry.io/6789
SENTRY_ENVIRONMENT=production
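Because SENTRY_DSN alone enables Sentry, initialization is likely gated on its presence, roughly like this sketch (hypothetical helper; `sentry_sdk.init` and its parameters are the real SDK API, imported lazily so the optional extra is only needed when a DSN is set):

```python
import os

def maybe_init_sentry(env: dict) -> bool:
    """Initialize Sentry only when SENTRY_DSN is set; return True if enabled."""
    dsn = env.get("SENTRY_DSN")
    if not dsn:
        return False  # no DSN: Sentry stays off, and sentry_sdk is never imported
    import sentry_sdk  # optional dependency: uv sync --extra sentry
    sentry_sdk.init(
        dsn=dsn,
        environment=env.get("SENTRY_ENVIRONMENT"),
        traces_sample_rate=float(env.get("SENTRY_TRACES_SAMPLE_RATE", "1.0")),
        send_default_pii=env.get("SENTRY_SEND_DEFAULT_PII", "true") == "true",
    )
    return True

print(maybe_init_sentry(dict(os.environ)))  # False unless SENTRY_DSN is set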

Usage

  1. Start a conversation: Mention the bot in a Webex space:

    @BotName What is AI?
    
  2. Follow-up in thread: Reply in the same thread for context-aware responses:

    @BotName Tell me more.
    
  3. The bot maintains context within the thread, so you can have natural conversations.
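The thread-aware behavior above can be sketched as a history store keyed by thread. This is a hypothetical illustration, assuming the bot groups messages by the Webex thread root (the `parentId` on a threaded message) and replays that history to the LLM:

```python
from collections import defaultdict

# Hypothetical context store: one message list per Webex thread.
history: dict[str, list[dict]] = defaultdict(list)

def handle_message(thread_id: str, text: str) -> list[dict]:
    """Record a user turn and return the full thread context for the LLM."""
    history[thread_id].append({"role": "user", "content": text})
    # In the real bot, this list would be sent to the LLM along with the
    # assistant's earlier replies, producing a context-aware answer.
    return history[thread_id]

handle_message("thread-1", "What is AI?")
ctx = handle_message("thread-1", "Tell me more.")
print(len(ctx))  # the follow-up sees the earlier question
```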

Development

Code Quality

# Lint code
uv run ruff check src/

# Format code
uv run ruff format src/

# Fix linting issues
uv run ruff check src/ --fix

Adding New Features

  • Commands: Add new commands in src/commands/
  • MCP Tools: Connect to MCP servers via configuration
  • LLM Providers: Configure via LLM_MODEL using LiteLLM syntax

License

This project is licensed under the MIT License - see the LICENSE file for details.
