
TeLLMgramBot

LLM-powered Telegram bot (OpenAI + Anthropic)

The basic goal of this project is to create a bridge between a Telegram Bot and a Large Language Model (LLM), supporting both OpenAI's GPT models and Anthropic's Claude models.

  • To use this library, you must have a Telegram account with a user name, not just a phone number. If you don't have one, create one online.
  • If added to a Telegram group, the bot must be an administrator in order to respond when a user calls out its name, initials, or nickname.

Telegram Bot + LLM Encapsulation

  • The Telegram interface handles special commands and some basic "chatty" prompts and responses that don't require the LLM, like "Hello".
  • The more dynamic conversation gets handed off to the LLM to manage prompts and responses, and Telegram acts as the interaction broker.
  • To have the bot read a web page, pass the URL in [square brackets] and mention how the bot should interpret it.
    • Example: "What do you think of this article? [https://some_site/article]"
    • This uses a separate model (configurable via url_model) to support more URL content with its higher token limit.
  • Ask questions about message history across all your chats using natural language via LLM tool calling.
    • Example: "Who said thanks for the breakdown?" or "What did George say about the project?" or "What did we discuss about DMs?" or simply "Show me the last few messages" without specifying a search query.
    • The bot automatically searches your private chat and all shared groups (where both you and the bot are active), attributing messages to speakers and returning results from the full history (beyond the limited in-memory token budget).
    • Messages from other bots in group chats are also indexed for search, enabling discovery of bot summaries and responses.
    • All search filters are optional - you can ask for recent messages broadly without specifying content or speakers.
    • Chat title queries support colloquial terms like "DMs" (falls back to all accessible chats if no exact match is found).
    • Results are ordered most-recent-first; configure search_limit to control how many results are returned (default: 30).
  • Tokens are used to measure the length of all conversation messages between the Telegram bot assistant and the user. This is useful to:
    • Ensure the length does not go over the model limit. If it does, prune oldest messages to fit within the limit.
    • Remember past conversations when restarting: loads the user's full history across all chats (private and groups) plus all other participants' messages in the current chat, up to 50% of the token budget. In private chats, shared group context (messages from groups where both user and bot are active) fills the remaining budget, enabling the bot to reference group conversations from a private context. This eliminates amnesia when users switch between contexts.
  • Users can manage privacy via two commands:
    • /forget - In private chats, clears the full conversation (including bot replies) and resets all of your active sessions (any sessions where your context was merged). In group chats, removes your messages across all chat types and cleans up paired bot replies; other participants' messages and sessions remain.
    • /private - Toggles per-user private mode (private chats only). When ON, messages are excluded from group conversation contexts, enabling selective privacy even in shared groups.
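The token-budget pruning described above can be sketched as follows. This is an illustration, not the library's actual internals: prune_to_budget, the message shape, and the naive whitespace tokenizer are all stand-ins.

```python
def prune_to_budget(messages, token_limit, count_tokens):
    """Drop oldest messages until the conversation fits the token budget."""
    total = sum(count_tokens(m["content"]) for m in messages)
    while messages and total > token_limit:
        oldest = messages.pop(0)  # prune the oldest message first
        total -= count_tokens(oldest["content"])
    return messages

# Naive whitespace tokenizer, for illustration only:
naive = lambda text: len(text.split())
history = [{"content": "hello there"},
           {"content": "how are you"},
           {"content": "fine"}]
pruned = prune_to_budget(history, token_limit=4, count_tokens=naive)
# The oldest message is dropped so the remaining 4 tokens fit the limit.
```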

Why Telegram?

Using Telegram as the interface not only solves the problem of exposing the bot to users, but also provides far more interactivity than a standard command-line interface or a custom website with input boxes and submit buttons:

  1. Telegram already lets you paste in verbose, multiline messages.
  2. Telegram already lets you paste in pictures, videos, links, etc.
  3. Telegram already lets you react with emojis, stickers, etc.
  4. Telegram message reactions (👀) provide a lightweight read receipt without breaking conversation flow.

Supported LLM Providers

TeLLMgramBot selects the LLM provider automatically based on the model name:

Model prefix   Provider    Example models
gpt-           OpenAI      gpt-4o, gpt-4o-mini, gpt-5-mini
claude-        Anthropic   claude-sonnet-4-6, claude-haiku-4-5

Simply set chat_model (and optionally url_model) in your config.yaml to any supported model and supply the corresponding API key - no other changes needed.
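The prefix-based selection can be pictured as a simple check. This sketch mirrors the documented behavior; the library's internal implementation may differ:

```python
def select_provider(model_name: str) -> str:
    """Pick the LLM provider from the model-name prefix."""
    if model_name.startswith("gpt-"):
        return "openai"
    if model_name.startswith("claude-"):
        return "anthropic"
    raise ValueError(f"Unsupported model: {model_name!r}")

# select_provider("gpt-4o-mini")      -> "openai"
# select_provider("claude-haiku-4-5") -> "anthropic"
```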

Directories

When initializing TeLLMgramBot, the following directories get created:

  • configs - Contains bot configuration files.
    • config.yaml (can be a different name)
      • This file sets main bot parameters like naming and the LLM models to use.
      • chat_model - the model used for normal conversation (e.g. gpt-5-mini or claude-sonnet-4-6).
      • url_model - the model used to read and summarize URL content, can differ from chat_model.
      • An empty token_limit will use the maximum tokens supported by the chat_model.
      • search_limit - optional; maximum number of message history search results returned (default: 30). When the user asks "show me recent messages" or searches message history via natural language, this limits the number of results. Omit to use the default.
    • models.yaml
      • Contains token size parameters for all supported models.
      • On first run, GPT and Claude model families are pre-populated. Additional models can be added manually.
  • prompts - Contains prompt files for how the bot interacts with any user.
    • test_personality.prmpt (can be a different name)
      • A sample prompt file defining the bot's personality: generic, helpful, and multi-provider-aware.
      • The prompt emphasizes the bot's ability to fetch and analyze URLs passed in square brackets [].
      • The user can create more prompt files as needed for different personalities.
      • At initialization, the bot automatically appends framework-owned behavioral guidance (system appendix) to teach the LLM how cross-chat memory works (cross-pollination, private mode, shared group context) without requiring persona authors to include this guidance. The appendix includes two framework-managed injectable values: the current UTC datetime and the current user identity, refreshed on every message so the LLM always has accurate context.
    • url_analysis.prmpt
      • Prompt template used to analyze URL content passed in brackets [].
  • logs
    • Contains log files (one per bot instance startup) with timestamps (e.g., tellmgrambot_2026-03-29_10-30-45.log) to investigate issues.
    • Structured logs with anonymized Telegram IDs to protect user privacy. Console output shows only TeLLMgramBot-related messages.
    • Bot keeps the 10 most recent log files, automatically cleaning up older ones.
    • Pass -v or --verbose on startup to enable DEBUG-level logging; default is INFO level.
  • data
    • Contains conversations.db - a SQLite database storing all conversations between the bot and users across all chats.
    • When a user messages in any chat, their full history is available for context: private messages appear in group contexts, group messages appear in private contexts. This creates seamless cross-context awareness. The bot dynamically refreshes each user's context during a session to pick up messages sent between chats. In private chats, the bot also loads shared group context (messages from groups where both user and bot are active) to provide group awareness.
    • Users can manage their context via /forget (private chat: clears full conversation; group chat: removes only your messages) or /private (toggles per-user privacy for group contexts).
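Putting the configuration parameters above together, a minimal config.yaml might look like the following. All values are illustrative, and only the keys documented above are shown:

```yaml
chat_model: gpt-4o-mini   # model for normal conversation
url_model: gpt-4o         # model for reading and summarizing URL content
token_limit:              # empty -> use the chat_model's maximum tokens
search_limit: 30          # max message-history search results (default: 30)
```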

Environment Variables

TeLLMgramBot also creates or reads the following environment variables, which can be pre-set; this is especially useful in containerized environments like Home Assistant, where persistent storage lives in a different location:

  1. TELLMGRAMBOT_CONFIGS_PATH - Directory containing config.yaml and models.yaml
  2. TELLMGRAMBOT_PROMPTS_PATH - Directory containing prompt files
  3. TELLMGRAMBOT_LOGS_PATH - Directory for log files (one log file created per bot instance startup)
  4. TELLMGRAMBOT_DATA_PATH - Directory containing conversations.db (e.g. /data). Defaults to data/ in the execution directory.

If none are defined, all paths default to subdirectories of the execution directory (the directory containing the entry-point script).
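For instance, a containerized deployment could point all four paths at a mounted volume before importing the library. The base path here is illustrative:

```python
import os

# Point all persistent storage at a mounted volume (e.g. a Home Assistant add-on)
base = "/data/tellmgrambot"
os.environ["TELLMGRAMBOT_CONFIGS_PATH"] = f"{base}/configs"
os.environ["TELLMGRAMBOT_PROMPTS_PATH"] = f"{base}/prompts"
os.environ["TELLMGRAMBOT_LOGS_PATH"]    = f"{base}/logs"
os.environ["TELLMGRAMBOT_DATA_PATH"]    = f"{base}/data"
```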

API Keys

TeLLMgramBot supports the following API keys. Each can be supplied via environment variable or .key file:

  • OpenAI - required when using a gpt-* model. Missing: chat and URL analysis disabled.
  • Anthropic - required when using a claude-* model. Missing: chat and URL analysis disabled.
  • Telegram - always required; available through BotFather. Missing: bot will not start.
  • VirusTotal - optional; performs safety checks on URLs. Missing: URL analysis disabled.

If a provider API key matching your configured model is missing, the bot will start but disable chat and URL analysis features. A startup summary shows which features are enabled.

API Key Environment Variables

TeLLMgramBot uses the following environment variables for API keys:

  1. TELLMGRAMBOT_OPENAI_API_KEY (OpenAI models)
  2. TELLMGRAMBOT_ANTHROPIC_API_KEY (Anthropic models)
  3. TELLMGRAMBOT_TELEGRAM_API_KEY
  4. TELLMGRAMBOT_VIRUSTOTAL_API_KEY

At startup, you can set these variables through os.environ, as in the following example:

import os

# Fetch keys from your own secret store, then export them for TeLLMgramBot
my_keys = Some_Vault_Fetch_Function()

os.environ['TELLMGRAMBOT_OPENAI_API_KEY']     = my_keys['OpenAIKey']
os.environ['TELLMGRAMBOT_ANTHROPIC_API_KEY']  = my_keys['AnthropicKey']
os.environ['TELLMGRAMBOT_TELEGRAM_API_KEY']   = my_keys['BotFatherToken']
os.environ['TELLMGRAMBOT_VIRUSTOTAL_API_KEY'] = my_keys['VirusTotalToken']

This means the user can implement whatever key vault they want to fetch the keys at runtime, without needing files stored in the directory.

API Key Files

By default, API key files are created in the execution directory (or the directory specified by TELLMGRAMBOT_KEYS_PATH for legacy deployments):

  1. openai.key - OpenAI API key for GPT models
  2. anthropic.key - Anthropic API key for Claude models
  3. telegram.key - Telegram Bot API key
  4. virustotal.key - VirusTotal API key for URL safety checks

If present, each key file populates its corresponding environment variable when that variable is not already defined. A missing provider key (OpenAI or Anthropic) disables chat and URL analysis but still allows the bot to start; a missing VirusTotal key disables only URL analysis.
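The resolution order described above, environment variable first and then the corresponding .key file, can be sketched like this. The helper itself is illustrative and not part of the library's API:

```python
import os

def resolve_key(env_var, key_file):
    """Return the API key from the environment, falling back to a .key file."""
    value = os.environ.get(env_var)
    if value:
        return value
    if os.path.exists(key_file):
        with open(key_file) as f:
            value = f.read().strip()
        os.environ[env_var] = value  # populate the env var if undefined
        return value
    return None  # missing key: the related feature is disabled, bot may still start

openai_key = resolve_key("TELLMGRAMBOT_OPENAI_API_KEY", "openai.key")
```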

Commands and Interactions

Available Commands

  • /start - Begin a conversation or get a welcome message.
  • /stop - End the current session (messages persist but are hidden from context).
  • /nick <name> - Set your personal nickname (for bot use in group chats).
  • /forget - Clear all of your conversation history. In private chats, clears everything. In group chats, removes only your messages.
  • /private - Toggle private mode (private chats only). When enabled, your messages are excluded from group context loading, providing selective privacy in shared groups.
  • /wipe - Permanently delete all conversation data from the database (owner-only, irreversible).
  • /help - Display all available commands and usage information.

Group Chat Triggers

In group and supergroup chats, the bot automatically captures and indexes messages from other bots, making them available via message history searches and conversation context. For example, you can ask "What did Bot B say about the project?" in a private chat and the bot will search across all shared groups.

For non-bot messages, the bot responds when any of the following conditions are met:

  • You mention the bot by username (e.g., @botname)
  • You mention the bot by nickname (configured via config.yaml), unless the message explicitly @mentions another account
  • You mention the bot by initials (configured via config.yaml), unless the message explicitly @mentions another account
  • You directly reply to one of the bot's messages (Telegram reply-to feature), unless the message explicitly @mentions another account

When a reply-to-bot message explicitly @mentions another account (e.g., "@otherbot please help" or "@alice can you help?"), the bot politely defers with "Looks like that message is for @otherbot!" rather than generating an LLM response. Note: Telegram does not distinguish bots from regular users in @mentions, so deflection fires for any foreign @mention. Deflections intentionally skip the read receipt.
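Taken together, the trigger and deflection rules amount to a check like the following sketch. The function name and return values are illustrative, not the library's internals:

```python
import re

def should_respond(text, replied_to_bot, bot_username, nickname, initials):
    """Return 'respond', 'defer', or 'ignore' per the group-chat trigger rules."""
    mentions = set(re.findall(r"@(\w+)", text))
    foreign = mentions - {bot_username}
    if bot_username in mentions:
        return "respond"  # direct @mention always wins
    words = re.findall(r"\w+", text.lower())
    triggered = (nickname.lower() in words
                 or initials.lower() in words
                 or replied_to_bot)
    if triggered:
        # Nickname, initials, or reply-to-bot defers if another account is @mentioned
        return "defer" if foreign else "respond"
    return "ignore"
```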

Read Receipt Acknowledgement (Group Chats Only)

In group and supergroup chats, when the bot is triggered (via any of the above methods) and does not defer due to a foreign @mention, it immediately responds with a 👀 reaction emoji on your message to confirm it is online, followed by a full LLM response. The reaction uses the Telegram message reaction API when available (👀 emoji), or falls back to a short "Got it!" text reply for older clients that don't support reactions.

Bot Setup

This library includes an example script, test_local.py, which uses files from the configs and prompts folders.

  1. Follow the previous sections to set up the proper API keys and your Telegram bot.
  2. Install this library via pip (pip install TeLLMgramBot) and import it into your project.
  3. Instantiate the bot with the configuration parameters shown below. The Telegram bot's full name and username are auto-populated before startup.
    telegram_bot = TeLLMgramBot.TelegramBot(
        bot_owner      = <Bot owner's Telegram username>,
        bot_nickname   = <Bot nickname like 'Botty'>,
        bot_initials   = <Bot initials like 'FB'>,
        chat_model     = <Conversation model like 'gpt-4o-mini' or 'claude-sonnet-4-6'>,
        url_model      = <URL analysis model like 'gpt-4o' or 'claude-haiku-4-5'>,
        token_limit    = <Maximum token count, by default the chat_model maximum>,
        persona_temp   = <Temperature from factual (0) to creative (2), by default 1.0>,
        persona_prompt = <System prompt summarizing bot personality>
    )
    
  4. Disable group privacy mode in BotFather to enable full group message capture (required for foreign bot message indexing and cross-chat context):
    /setprivacy -> select your bot -> Disable
    
    With privacy mode on (the default), Telegram only delivers messages that mention the bot, are replies to it, or are commands - other group messages including those from other bots are not delivered.
  5. Turn on TeLLMgramBot by calling:
    telegram_bot.start_polling()
    
    Once you see TeLLMgramBot polling..., the bot is online in Telegram.
  6. Converse! Type /help for all available commands.
