aiogram-dialog-yaml
Render Telegram-bot dialogs declared in YAML, with inline builders for keyboards, media, and conditional logic.
Author your Telegram-bot dialogs in YAML. Render them into real aiogram payloads. Ship copy changes without touching Python.
welcome:
  send_photo:
    photo: "@hero_image"
    caption: "Hi {user_name}! Welcome to {service_name}."
    parse_mode: html
    reply_markup:
      func: static_inline_keyboard
      data:
        buttons:
          - { line: 1, text: "Get started", callback: "onboarding_1" }
          - { line: 2, text: "Help", url: "@support_link" }
section = await provider.get("welcome", user_name="Alice")
await bot.send_photo(chat_id=chat_id, **section["send_photo"])
That's it. The keyboard is a real InlineKeyboardMarkup, the caption is
already interpolated, the URL came from a shared constants: block. No
glue code in between.
Why this exists
Telegram bots have a peculiar shape:
- A lot of strings — onboarding, errors, paywalls, help — and every one of them is glued to a button layout, a parse mode, sometimes a photo or a video.
- Copy churns weekly. Layout churns monthly. Python deploys cost minutes.
- Pure i18n libraries handle strings but throw away the structure. Pure Python builders handle structure but turn copy into a Python file.
aiogram-dialog-yaml keeps the structure (which buttons, which row,
which photo vs which message) right next to the copy (the text itself),
in YAML. When the structure needs logic — show a different button if the
user is a VIP, choose photo vs plain message based on a flag — you call
a builder function from inside the YAML itself, with func: / data:.
It is the smallest possible DSL that lets a non-engineer ship a bot copy change as a single-line PR, and lets an engineer drop down to arbitrary Python when copy alone is not enough.
The library was extracted and refactored from a production bot
(vpn_telegram_bot), where it had been driving roughly 800 lines of
dialogs across two layered configs in production for ~2 years.
Features
- Zero magic. A section is a dict; you pass it to bot.send_*(**section[method]).
- @references — reuse strings, keyboards, and fragments anywhere.
- {placeholders} — str.format-substituted against constants + per-call params.
- Inline builders — func: static_inline_keyboard resolves to a real InlineKeyboardMarkup before the dispatcher sees it.
- Async-aware — async builders work the same as sync ones.
- Layered configs — base.yaml + brand_x.yaml; later overrides earlier.
- Built-in rate-limited queue — opt-in MessageQueue keeps you under Telegram's 30 msg/s global cap, with priorities and graceful drain.
- Drop-in applier — DialogApplier resolves a section and dispatches every nested send_* for you. Bypassable per call (immediately=True) or globally (don't pass a queue).
- Tiny core, optional aiogram extra — PyYAML is the only required dep.
- Friendly authoring — unknown {placeholder} is left intact instead of raising, so YAML stays editable mid-flight.
- Type-hinted — ships py.typed.
Install
pip install aiogram-dialog-yaml # core (PyYAML only)
pip install aiogram-dialog-yaml[aiogram] # + Telegram keyboard / media builders
Until the package is on PyPI you can install from a checkout:
pip install -e ./aiogram-dialog-yaml[aiogram]
30-second tour
# dialogs.yaml
constants:
  service_name: "Acme Bot"
  support_link: "https://t.me/acme_support"
  greeting_text: |
    👋 <b>Hi {user_name}!</b>
    Welcome to {service_name}. Need help? {support_link}
  greeting_keyboard:
    func: static_inline_keyboard
    data:
      buttons:
        - { line: 1, text: "Get started", callback: "start" }
        - { line: 2, text: "Help", url: "@support_link" }
dialogs:
  greeting:
    send_message:
      text: "@greeting_text"
      parse_mode: html
      disable_web_page_preview: true
      reply_markup: "@greeting_keyboard"
Minimal — resolve and send by hand:
import asyncio
from aiogram import Bot
from aiogram_dialog_yaml import DialogProvider, FunctionRegistry, default_functions
from aiogram_dialog_yaml.functions import aiogram_functions

registry = FunctionRegistry(default_functions())
registry.register_many(aiogram_functions())
provider = DialogProvider("dialogs.yaml", functions=registry)

async def main() -> None:
    bot = Bot(token="...")
    section = await provider.get("greeting", user_name="Alice")
    # section == {"send_message": {"text": "...", "reply_markup": <InlineKeyboardMarkup>, ...}}
    await bot.send_message(chat_id=12345, **section["send_message"])

asyncio.run(main())
Production-grade — same thing through the rate-limited queue (recommended for anything that broadcasts):
import asyncio
from aiogram import Bot
from aiogram_dialog_yaml import DialogProvider, FunctionRegistry, default_functions
from aiogram_dialog_yaml.delivery import DialogApplier, MessageQueue
from aiogram_dialog_yaml.functions import aiogram_functions

registry = FunctionRegistry(default_functions())
registry.register_many(aiogram_functions())
provider = DialogProvider("dialogs.yaml", functions=registry)

async def main() -> None:
    bot = Bot(token="...")
    queue = MessageQueue(bot, rate_limit=30)  # Telegram's global per-bot cap
    applier = DialogApplier(provider, bot, queue=queue)
    await queue.start()
    try:
        await applier.apply(chat_id=12345, section_key="greeting", user_name="Alice")
        # …handle the rest of your update loop here
    finally:
        await queue.stop(drain=True)

asyncio.run(main())
That's the whole API surface for the common case.
Message queue & Telegram rate limits
Telegram enforces several hard limits on every bot. Hit any of them and
the API answers with 429 Too Many Requests and a Retry-After window —
aiogram surfaces this as TelegramRetryAfter. The ones that bite first:
| limit | value |
|---|---|
| Global outbound rate for a single bot | ~30 messages / second |
| Messages to the same group chat | ~20 messages / minute |
| Messages to the same private chat | ~1 message / second (soft, bursts OK) |
The library ships an opt-in queue that enforces the global cap and lets you assign priorities, so a critical alert can cut in front of a slow broadcast. Import it explicitly — nothing in the core depends on it:
from aiogram_dialog_yaml.delivery import DialogApplier, MessageQueue
queue = MessageQueue(bot, rate_limit=30) # change to fit your account
await queue.start() # spawns the worker task
applier = DialogApplier(provider, bot, queue=queue)
# Normal send — goes through the queue, drained at ≤30 msg/s.
await applier.apply(chat_id=123, section_key="welcome", user_name="Alice")
# Urgent: priority 0 jumps ahead of every priority-1 task.
await applier.apply(chat_id=123, section_key="urgent_alert", priority=0)
# Per-call escape hatch: bypass the queue for this one send.
await applier.apply(chat_id=123, section_key="status", immediately=True)
await queue.stop(drain=True) # waits for the backlog to flush
Inside the queue:
- Items are stored in an asyncio.PriorityQueue keyed by (priority, seq). Lower priority numbers drain first; equal priorities preserve FIFO via an internal monotonic counter.
- A sliding 1-second window of dispatch timestamps caps throughput at rate_limit. When you exceed it, the worker asyncio.sleeps for the exact catch-up delta — no busy-waiting.
- The worker never dies on send errors. Failures land in on_error(task, exc) (default: log at WARNING), so a single broken message can't stop the queue.
- stop(drain=True, timeout=...) waits for the backlog to flush before cancelling; stop(drain=False) discards anything still queued.
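The sliding-window pacing described above can be sketched in a few lines. This is an illustration of the documented behaviour, not the library's actual code:

```python
from collections import deque

def pacing_delay(sent_at: deque, now: float, rate_limit: int, window: float = 1.0) -> float:
    """Seconds to wait so the next send stays within rate_limit per window."""
    # Evict timestamps that have aged out of the sliding window.
    while sent_at and now - sent_at[0] >= window:
        sent_at.popleft()
    if len(sent_at) < rate_limit:
        return 0.0  # room left in the window: send immediately
    # Exact catch-up delta: wait until the oldest timestamp leaves the window.
    return sent_at[0] + window - now

# Three sends went out at t=0.0, 0.1, 0.2 with rate_limit=3:
sent = deque([0.0, 0.1, 0.2])
print(pacing_delay(sent, now=0.3, rate_limit=3))  # window full: wait until t=1.0
```

The real worker sleeps for exactly this delta via asyncio.sleep, which is why it never busy-waits.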
Three escape hatches — pick the smallest that fits
| use case | what you instantiate |
|---|---|
| Just resolve sections, send by hand | DialogProvider |
| Resolve + dispatch, no rate limit / queue | DialogApplier(provider, bot) |
| Resolve + queue + rate limit + priorities | DialogApplier(provider, bot, queue=...) |
One-liner that builds both: DialogApplier.with_queue(provider, bot, rate_limit=30).
Adapter seam
The queue (and the applier when no queue is used) calls
params_adapter(method_name, params) -> params immediately before
bot.<method>(...). Use it to translate YAML primitives into
aiogram-native types without polluting your YAML:
from aiogram.types import InputPollOption, URLInputFile

def params_adapter(method: str, params: dict) -> dict:
    if method == "send_video_note" and isinstance(params.get("video_note"), str):
        params["video_note"] = URLInputFile(params["video_note"], filename="vn.mp4")
    if method == "send_poll":
        params["options"] = [InputPollOption(text=o) for o in params.get("options", [])]
    return params

queue = MessageQueue(bot, rate_limit=30, params_adapter=params_adapter)
Error handling
async def on_send_error(task: MessageTask, exc: BaseException) -> None:
    # task.method, task.chat_id, task.params, task.priority are all available here.
    await bot.send_message(task.chat_id, f"⚠️ {task.method} failed: {exc}")

queue = MessageQueue(bot, rate_limit=30, on_error=on_send_error)
on_error can be sync or async. It is also your only hook for a custom retry policy — the queue itself does not retry.
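A bounded retry can be built on top of on_error. This is a hypothetical sketch: the requeue callable and the attempt bookkeeping are this example's own, not part of the library.

```python
from collections import defaultdict, namedtuple

MAX_ATTEMPTS = 3

def make_retrying_on_error(requeue):
    """Build an on_error callback that re-submits a failed task a bounded number of times."""
    attempts: dict = defaultdict(int)

    def on_error(task, exc):
        key = (task.chat_id, task.method)  # crude identity; real code might tag tasks
        attempts[key] += 1
        if attempts[key] < MAX_ATTEMPTS:
            requeue(task)  # e.g. schedule queue.add(task) again
        # else: give up, i.e. log, alert, or dead-letter the task here

    return on_error

# Demo with a stand-in task and a plain list as the "queue":
FakeTask = namedtuple("FakeTask", "chat_id method")
resent = []
handler = make_retrying_on_error(resent.append)
for _ in range(3):
    handler(FakeTask(1, "send_message"), RuntimeError("boom"))
# two re-enqueues happen; the third failure is dropped
```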
Skipping the queue entirely
applier = DialogApplier(provider, bot) # no queue argument
await applier.apply(chat_id=123, section_key="welcome", user_name="Alice")
All sends go straight to the bot. You lose the rate limit and priorities but keep section resolution and the adapter seam.
A runnable demo bot
examples/full_bot/ is a real aiogram bot that
exercises every practical bot.send_* method — text, photo, video,
audio, document, animation, voice, video note, media group, location,
venue, contact, poll, quiz, dice, sticker, chat action — all driven by a
single dialogs.yaml. No hand-written
copy in Python. Drop your BotFather token into .env and run:
pip install -e ".[aiogram]" python-dotenv
cp examples/full_bot/.env.example examples/full_bot/.env # paste token
python examples/full_bot/bot.py
See examples/full_bot/README.md for the
method-mapping table and notes on Telegram quirks (send_video_note URL
re-uploading, send_voice codec, sticker limitations).
Concepts
YAML shape
A config file has two optional top-level keys:
constants:   # reusable values — strings, fragments, keyboards
  service_name: "Acme Bot"
  support_link: "https://t.me/acme_support"
dialogs:     # the sections you actually look up by name
  greeting:
    send_message:
      text: "Hi {user_name} from {service_name}"
Inside any value, anywhere in the tree:
| Form | Meaning |
|---|---|
| "@name" | replaced by the constant or param named name (recursive) |
| "... {name} ..." | str.format-substituted against constants + params |
| {func: x, data: ...} | replaced by the result of the registered builder x |
Unknown {placeholder} is left intact — so you can author copy that
references a variable you haven't wired up yet without crashing.
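The lenient behaviour amounts to formatting against a mapping that echoes missing keys back. A minimal sketch of the idea (not the library's implementation, which may handle edge cases such as format specs differently):

```python
class _Lenient(dict):
    """Mapping that returns '{key}' verbatim for unknown keys."""
    def __missing__(self, key: str) -> str:
        return "{" + key + "}"

def lenient_format(template: str, **params: object) -> str:
    # format_map goes through __getitem__, so __missing__ catches unknowns.
    return template.format_map(_Lenient(params))

print(lenient_format("Hi {user_name}, see {support_link}", user_name="Alice"))
# → Hi Alice, see {support_link}
```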
Layering
provider = DialogProvider(["base.yaml", "brand_acme.yaml", "tenant_42.yaml"])
Files are read in order. Later files override earlier ones at the
section level (not the deep-merge level). This mirrors the
base + actual pattern from the original bot.
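Section-level override means a later file's section replaces the earlier one wholesale, with no deep merge of nested keys. A sketch of the semantics (assumed equivalent to the loader's behaviour, not its code):

```python
def layer(*configs: dict) -> dict:
    """Merge dialog sections; later configs replace whole sections."""
    merged: dict = {}
    for cfg in configs:
        merged.update(cfg.get("dialogs", {}))  # shallow: whole sections win
    return merged

base  = {"dialogs": {"greeting": {"send_message": {"text": "Hi", "parse_mode": "html"}}}}
brand = {"dialogs": {"greeting": {"send_message": {"text": "Yo"}}}}

sections = layer(base, brand)
print(sections["greeting"])  # brand's whole section wins; parse_mode from base is gone
```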
Builders
A node shaped like
field:
  func: <registered_name>
  data: <anything>
is replaced by the return value of registry.get(<registered_name>)(data).
data is fully resolved (@ and {}) before the builder runs, and the
builder may be sync or async — aiogram-dialog-yaml will await as needed.
Built-in framework-agnostic builders (always available):
| name | input | returns |
|---|---|---|
| identity | anything | unchanged |
| if_else | { if, then, else } | then or else |
| length | sized container | int |
| merge | { param1, param2 } (both dicts) | merged dict |
| rename_key | { dict, old_key, new_key } | dict with key renamed |
| ru_date | datetime | "dd.mm.YYYY" string |
| concatenate_texts | { texts: [...] } | joined string |
| bool | anything | bool(value) |
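For example, if_else can pick a keyboard based on a per-call flag. A hypothetical section (the is_vip param and the keyboard constants are made up for this sketch):

```yaml
dialogs:
  plans:
    send_message:
      text: "Pick a plan:"
      reply_markup:
        func: if_else
        data:
          if: "@is_vip"
          then: "@vip_keyboard"
          else: "@default_keyboard"
```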
Built-in aiogram builders (require the [aiogram] extra):
| name | output | what it does |
|---|---|---|
| static_inline_keyboard | InlineKeyboardMarkup | declarative buttons grouped by line |
| to_web_app_info | WebAppInfo | wraps a URL |
| to_input_media_list | list[InputMedia*] | builds payloads for send_media_group |
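A hypothetical send_media_group section using to_input_media_list. The exact data schema the builder expects is an assumption here, so check the builder's signature before copying:

```yaml
dialogs:
  gallery:
    send_media_group:
      media:
        func: to_input_media_list
        data:
          items:
            - { type: photo, media: "https://example.com/a.jpg", caption: "A" }
            - { type: photo, media: "https://example.com/b.jpg" }
```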
Custom builders
The whole point: you bring your own.
from aiogram_dialog_yaml import DialogProvider, FunctionRegistry, default_functions

async def render_configs(data):
    configs = data["configs"]
    return "\n".join(f"{c.name} — exp {c.expires.date()}" for c in configs)

registry = FunctionRegistry(default_functions())
registry.register("render_configs", render_configs)
provider = DialogProvider("dialogs.yaml", functions=registry)
dialogs:
  status:
    send_message:
      text:
        func: render_configs
        data:
          configs: "@user_configs"
section = await provider.get("status", user_configs=current_user.configs)
Or pass them straight in with extra_functions=:
provider = DialogProvider(
    "dialogs.yaml",
    extra_functions={"render_configs": render_configs},
)
Resolution pipeline
For each get(section_key, **params):
constants + params ──────────────┐
                                 │
    1. expand "@name"            │
    2. expand "{placeholders}"   │  pass 1
    3. invoke func: nodes        │
                                 │
    1. expand "@name"            │
    2. expand "{placeholders}"   │  pass 2
    3. invoke func: nodes        │
                                 │
section payload ◄────────────────┘
The two passes are there so a builder can return a string containing
fresh {placeholders} that still get filled in.
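In miniature: pass 1 may produce a string that pass 2 still has to fill. A toy sketch of the idea, not the real resolver:

```python
def resolve(node, params: dict):
    # pass 1: invoke builder nodes
    value = node(params) if callable(node) else node
    # pass 2: fill any placeholders the builder just produced
    return value.format(**params) if isinstance(value, str) else value

# A builder whose output contains a *fresh* placeholder:
expiry_line = lambda params: "Your plan expires {expires}"

print(resolve(expiry_line, {"expires": "01.01.2026"}))
# → Your plan expires 01.01.2026
```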
API reference
from aiogram_dialog_yaml import (
    DialogProvider,
    FunctionRegistry,
    SectionNotFoundError,
    default_functions,
)
from aiogram_dialog_yaml.functions import aiogram_functions
DialogProvider
DialogProvider(
    config_paths: str | Path | Iterable[str | Path],
    *,
    functions: FunctionRegistry | None = None,
    extra_functions: Mapping[str, Callable] | None = None,
)
| method | returns | notes |
|---|---|---|
| await provider.get(key, **params) | dict \| list[dict] | resolve a named section |
| await provider.get_from_string(yaml, **) | dict \| list[dict] | resolve an inline YAML snippet |
| await provider.get_as_string(key, **) | str | YAML dump with placeholders applied (debug aid) |
| provider.has(key) | bool | |
| provider.constants | dict | deep copy |
| provider.functions | FunctionRegistry | mutable |
FunctionRegistry
registry = FunctionRegistry(default_functions())
registry.register("name", fn)
registry.register_many({"a": fn_a, "b": fn_b})
registry.get("name") # raises KeyError if missing
"name" in registry # bool
registry.as_dict() # shallow copy
Exceptions
| exception | subclass of | raised when |
|---|---|---|
| DialogYamlError | Exception | base class |
| SectionNotFoundError | DialogYamlError, KeyError | unknown section key |
| InvalidSectionStringError | DialogYamlError, ValueError | get_from_string() on broken YAML |
Delivery layer (optional)
from aiogram_dialog_yaml.delivery import (
    DialogApplier,
    MessageQueue,
    MessageTask,
    default_params_adapter,
    DEFAULT_RATE_LIMIT,  # 30
)
MessageTask
@dataclass(frozen=True)
class MessageTask:
    chat_id: int | str
    method: str                                   # aiogram bot method name, e.g. "send_message"
    params: dict[str, Any] = field(default_factory=dict)
    priority: int = 1                             # lower drains first
MessageQueue
MessageQueue(
    bot,
    *,
    rate_limit: int = 30,
    params_adapter: Callable[[str, dict], dict] | None = None,
    on_error: Callable[[MessageTask, BaseException], None | Awaitable[None]] | None = None,
)
| method | notes |
|---|---|
| await queue.start() | idempotent; spawns the worker task |
| await queue.add(task) | enqueue; raises RuntimeError after stop() |
| await queue.join() | block until backlog is empty |
| await queue.stop(drain=True, timeout=N) | graceful drain + cancel; drain=False cancels immediately |
| queue.qsize() / queue.rate_limit | introspection |
DialogApplier
DialogApplier(
    provider: DialogProvider,
    bot,
    *,
    queue: MessageQueue | None = None,
    params_adapter: Callable[[str, dict], dict] | None = None,
)

DialogApplier.with_queue(provider, bot, *, rate_limit=30, params_adapter=None)
| method | notes |
|---|---|
| await applier.apply(chat_id, section_key, **params) | resolve + dispatch; supports priority= / immediately= |
| await applier.apply_from_string(chat_id, yaml, **) | same, but the section is an inline YAML string |
| applier.queue / applier.provider | introspection |
Anatomy of a section payload
Whatever you return for a dialog section is what your dispatcher hands to aiogram. Single send:
{"send_message": {"text": "...", "parse_mode": "html", "reply_markup": <InlineKeyboardMarkup>}}
Multiple sends, in order:
[
    {"send_chat_action": {"action": "typing"}},
    {"send_photo": {"photo": "...", "caption": "...", "reply_markup": <...>}},
]
Most of the time you don't write the dispatcher yourself — the built-in
DialogApplier walks the section,
turns each key into a MessageTask, and feeds it to the queue (or sends
directly if no queue is wired). See examples/full_bot/bot.py
for the production-style wiring, including a params_adapter for
URLInputFile / InputPollOption and an on_error that reports broken
sends back into chat.
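The dispatch walk itself is small. A sketch with a fake bot standing in for aiogram (the real DialogApplier also routes through the queue and the params adapter):

```python
import asyncio

async def dispatch(bot, chat_id, section):
    """Apply a resolved section: a {method: params} dict, or a list of them, in order."""
    parts = section if isinstance(section, list) else [section]
    for part in parts:
        for method, params in part.items():
            await getattr(bot, method)(chat_id=chat_id, **params)

class FakeBot:
    """Records calls instead of hitting Telegram."""
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        async def _call(**kwargs):
            self.calls.append((name, kwargs))
        return _call

bot = FakeBot()
asyncio.run(dispatch(bot, 123, [
    {"send_chat_action": {"action": "typing"}},
    {"send_message": {"text": "hi"}},
]))
print(bot.calls)  # both calls recorded in order, chat_id injected into each
```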
Migrating from the original vpn_telegram_bot implementation
| before | after |
|---|---|
| lib.messaging.dialog_section_provider.DialogSectionProvider(base, actual) | DialogProvider([base, actual]) |
| lib.messaging.functions.functions | default_functions() + aiogram_functions() |
| @singleton decorator | instantiate DialogProvider once yourself |
| personal_website_link builder (imported WebsiteDomainService) | register as a custom builder in your own app |
| KeyError on unknown {placeholder} | placeholder left intact (raise was unhelpful during authoring) |
Semantics preserved: @var, {format}, two-pass resolution, recursive
function application, YAML schema, async/sync function dispatch.
Testing
pip install -e ".[dev]"
pytest
19 cases across two files:
- tests/test_provider.py (8) — constant resolution, both if_else branches, list sections, missing sections, custom async builders via get_from_string, layered file overrides, lenient unknown-placeholder behaviour.
- tests/test_delivery.py (11) — priority order, FIFO tiebreak inside equal priorities, custom params_adapter, on_error invoked without killing the worker, rate-limit throttling (≥0.9 s on 10 messages at 5 msg/s), add() after stop(), direct vs queue vs immediately=True applier paths, list-section expansion, DialogApplier.with_queue(...).
Project layout
aiogram-dialog-yaml/
├── pyproject.toml
├── README.md # you are here
├── src/
│ └── aiogram_dialog_yaml/
│ ├── __init__.py
│ ├── provider.py # DialogProvider, FunctionRegistry
│ ├── exceptions.py
│ ├── functions/
│ │ ├── core.py # framework-agnostic builders
│ │ └── aiogram.py # aiogram-typed builders (optional extra)
│ ├── delivery/ # optional: queue + applier
│ │ ├── task.py # MessageTask
│ │ ├── queue.py # MessageQueue (rate-limited, priority)
│ │ └── applier.py # DialogApplier
│ └── py.typed
├── examples/
│ ├── dialogs.yaml # small synthetic example
│ ├── usage.py # minimal one-shot demo
│ └── full_bot/ # full runnable aiogram bot
│ ├── bot.py
│ ├── dialogs.yaml
│ ├── .env.example
│ └── README.md
└── tests/
├── test_provider.py
└── test_delivery.py
License
MIT.