# AIL — AI Workflow Language

An AI-driven workflow language.
One rule:

`@` = AI executes. No `@` = your code runs.

```
result = @ask("summarize: {text}")           # AI executes
words = len(text.split())                    # deterministic Python
flag = @judge("is this complete: {result}")  # AI returns bool
```
## The problem
When you want an AI to execute a product's logic, you have two bad options:
- Natural language — flexible, but takes paragraphs to describe, and AI fills gaps however it wants
- Pseudocode — tighter, but can't express the full execution logic: no retries, no branching on AI judgment, no parallel calls, no fallbacks
The result: you spend more time writing prompts than building, and the AI still misses the intent.
## The solution
AIL is a new language designed for this exact problem. One rule, one symbol:

- `@` → AI executes (semantic, non-deterministic)
- no `@` → code runs (deterministic, Python-compatible)
Write your logic the way you think it. @ask, @judge, @validate — the full execution intent in a fraction of the words. AIL comes with a guide written for AI, so you can paste it into any model and it immediately understands how to read and generate AIL.
aif is the Python framework that runs AIL. Plug in your own agent, decorate your tools, and your agent executes the workflow. Pure Python, no new concepts — if you know Python, you're ready in minutes.
Everything else is standard Python syntax — loops, functions, types, error handling. No new paradigm to learn.
## Quick look
Retry until quality passes:

```
retry max=3:
    report = @ask(generate_report)
    @validate(report, "must include conclusion, data, and sources")
```
Loop until AI says done:

```
loop max=10 until @judge("no unresolved issues in: {output}"):
    output = @ask(solve_issues)
```
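The loop-until-judge pattern above can be sketched in plain Python. The `ask` and `judge` stubs here are assumptions for illustration, not aif's implementation; a real backend would call a model:

```python
def ask(task, output=""):
    # Stub for @ask: pretend the AI resolves one issue per call.
    return output + "fixed;"

def judge(output):
    # Stub for @judge: "no unresolved issues" once three are fixed.
    return output.count("fixed;") >= 3

output = ""
for _ in range(10):          # loop max=10
    if judge(output):        # until @judge("no unresolved issues in: {output}")
        break
    output = ask("solve_issues", output)
```

The `until` condition is checked before each iteration, so the loop runs zero times if the judge is already satisfied.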
Parallel AI calls:

```
tech, biz, ux = parallel:
    tech = @ask("analyze technically: {content}")
    biz = @ask("analyze commercially: {content}")
    ux = @ask("analyze from user perspective: {content}")

summary = @ask("synthesize three perspectives: {tech} {biz} {ux}")
```
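The fan-out-then-synthesize shape of `parallel` maps naturally onto Python's `concurrent.futures`. A minimal sketch with a stubbed `ask` (the stub just echoes its prompt; it stands in for a real model call):

```python
from concurrent.futures import ThreadPoolExecutor

def ask(prompt):
    # Stub for @ask: echo the prompt so the data flow is visible.
    return f"<{prompt}>"

content = "quarterly report"
prompts = [
    f"analyze technically: {content}",
    f"analyze commercially: {content}",
    f"analyze from user perspective: {content}",
]

# Fan out the three independent AI calls, then synthesize the results.
with ThreadPoolExecutor(max_workers=3) as pool:
    tech, biz, ux = pool.map(ask, prompts)

summary = ask(f"synthesize three perspectives: {tech} {biz} {ux}")
```

Threads suit this pattern because model calls are I/O-bound; the three requests overlap on the network rather than competing for the GIL.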
Structured extraction:

```
type QueryInfo:
    intent: str
    keywords: list[str]
    multi_step: bool

info = @extract(analysis, type=QueryInfo)
```
Multi-turn conversation:

```
with context(system="you are a planning expert") as ctx:
    issues = @ask(analyze)   # AI sees prior turns
    output = @ask(solve)     # AI sees issues
    ctx.reset()              # clear history
```
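The context block boils down to an accumulating message list that each call sees. A minimal sketch with a stubbed model reply (the class and names here are illustrative, not aif's implementation):

```python
from contextlib import contextmanager

class Context:
    # Minimal conversation context: each ask sees all prior turns.
    def __init__(self, system):
        self.messages = [{"role": "system", "content": system}]

    def ask(self, prompt):
        self.messages.append({"role": "user", "content": prompt})
        reply = f"reply to turn {len(self.messages) // 2}"  # stubbed model call
        self.messages.append({"role": "assistant", "content": reply})
        return reply

    def reset(self):
        self.messages = self.messages[:1]  # keep only the system message

@contextmanager
def context(system):
    yield Context(system)

with context(system="you are a planning expert") as ctx:
    issues = ctx.ask("analyze")  # sees the system prompt
    output = ctx.ask("solve")    # sees system prompt, "analyze", and its reply
    ctx.reset()                  # history cleared back to the system message
```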
## `@` operations at a glance

| Operation | Returns | What it does |
|---|---|---|
| `@ask(prompt)` | `str` | execute a task |
| `@judge("condition")` | `bool` | yes/no judgment |
| `@pick("instruction", options=[...])` | option type | select from options |
| `@plan("goal")` | `list[str]` | decompose goal into steps |
| `@extract(text, type=T)` | `T` | extract structured data |
| `@eval(content, "criterion")` | `float` | score 0–1 |
| `@validate(content, "condition")` | — / raises | assert or retry |
| `@act("instruction")` | `str` | AI autonomously picks and calls a tool |
| `@ask_user("prompt")` | `str` | ask the human, block for input |
| `@confirm("description")` | — / raises | request human confirmation |
| `@show("message")` | — | display to human, non-blocking |
## Reliability built in

```
try:
    retry max=3:
        timeout 20s:
            result = @ask(generate)
            @validate(result, "must be complete")
fallback:
    result = @ask(basic_fallback)
```

`retry`, `timeout`, and `try`/`fallback` compose freely.
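The same composition can be sketched in plain Python to show the control flow. Everything here is a stub under stated assumptions (the generator fails validation twice, then succeeds; the timeout is a simple wall-clock check rather than a true interrupt):

```python
import time

class ValidationError(Exception):
    pass

attempts = 0

def generate():
    # Stub for @ask(generate): fails validation on the first two attempts.
    global attempts
    attempts += 1
    return "complete report" if attempts >= 3 else "partial"

def validate(result, condition):
    # Stub for @validate: raise so the enclosing retry re-runs the block.
    if "complete" not in result:
        raise ValidationError(condition)

def run():
    try:
        for _ in range(3):                    # retry max=3
            deadline = time.monotonic() + 20  # timeout 20s (wall-clock check)
            result = generate()
            if time.monotonic() > deadline:
                continue                      # treat overrun as a failed attempt
            try:
                validate(result, "must be complete")
                return result
            except ValidationError:
                continue
        raise ValidationError("retries exhausted")
    except ValidationError:                   # fallback:
        return "basic fallback answer"

answer = run()
```

The nesting order matters: the timeout bounds one attempt, the retry bounds the attempts, and the fallback catches whatever survives both.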
## Tools, skills, plugins

```
use tool vector_search(query: str, top_k: int) -> list[Document]  # deterministic function
use skill rag_agent(query: str) -> (str, list[str])               # sub-agent with AI ops
use plugin database as db                                         # stateful external service

docs = vector_search("deep learning", top_k=10)  # called like a function
answer, citations = rag_agent(query=user_input)
users = db.query("SELECT * FROM users")
```
## Memory

```
memory.save(result, key="last_answer", tags=["history"])
pref = memory.get("user_preference")
related = memory.search("RAG discussion", top_k=3)
```
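The memory API above can be modeled as a small in-process store. This sketch is an assumption for illustration, not aif's implementation; the word-overlap "search" stands in for real semantic retrieval:

```python
class Memory:
    # Minimal in-process stand-in for AIL's memory API.
    def __init__(self):
        self._items = {}  # key -> (value, tags)

    def save(self, value, key, tags=()):
        self._items[key] = (value, set(tags))

    def get(self, key, default=None):
        item = self._items.get(key)
        return item[0] if item else default

    def search(self, query, top_k=3):
        # Naive relevance: rank by words shared between query and stored text.
        words = set(query.lower().split())
        scored = [
            (len(words & set(str(value).lower().split())), key, value)
            for key, (value, _) in self._items.items()
        ]
        scored.sort(key=lambda t: t[0], reverse=True)
        return [value for score, _, value in scored[:top_k] if score > 0]

memory = Memory()
memory.save("RAG beats fine-tuning for fresh data", key="last_answer", tags=["history"])
memory.save("dark mode", key="user_preference")
related = memory.search("RAG discussion", top_k=3)
```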
## Full example

A complete RAG agent is in `examples/ail/rag_agent.ail`. More examples in `examples/`.
## Docs

| Document | Path |
|---|---|
| Language spec (for AI) | `docs/ail/for-AI-v1.0.md` |
| User guide — English | `docs/ail/for-humans-en-v1.0.md` |
| User guide — Chinese | `docs/ail/for-humans-v1.0.md` |
| Python SDK spec | `docs/aif/product-spec-v1.0.md` |
| Python SDK | `aif/` |
| AIL examples | `examples/ail/` |
| aif examples | `examples/aif/` |
## License

MIT