Yagra
Declarative LangGraph Builder powered by YAML
Yagra enables you to build LangGraph's StateGraph from YAML definitions, separating workflow logic from Python implementation. Define nodes, edges, and branching conditions in YAML files; swap configurations without touching code.
Designed for LLM agent developers, prompt engineers, and non-technical stakeholders who want to iterate on workflows quickly without diving into Python code every time.
Built with AI-Native principles: JSON Schema export and validation CLI enable coding agents (Claude Code, Codex, etc.) to generate and validate workflows automatically.
✨ Key Features
- Declarative Workflow Management: Define nodes, edges, and conditional branching in YAML
- Implementation-Configuration Separation: Connect YAML `handler` strings to Python callables via a Registry
- Schema Validation: Catch configuration errors early with Pydantic-based validation
- Visual Workflow Editor: Launch Studio WebUI for visual editing, drag-and-drop node/edge management, and diff preview
- Template Library: Quick-start templates for common patterns (branching, loops, RAG)
- AI-Ready: JSON Schema export (`yagra schema`) and structured validation for coding agents
📦 Installation
Requires Python 3.12+.

```bash
pip install yagra

# With LLM handler utilities (optional)
pip install 'yagra[llm]'
```
LLM Handler Utilities (Beta)
Yagra provides handler utilities to reduce boilerplate code for LLM nodes:
```python
from yagra import Yagra
from yagra.handlers import create_llm_handler

# Create a generic LLM handler
llm = create_llm_handler(retry=3, timeout=30)

# Register and use in a workflow
registry = {"llm": llm}
app = Yagra.from_workflow("workflow.yaml", registry)
```
YAML Definition:
```yaml
nodes:
  - id: "chat"
    handler: "llm"
    params:
      prompt_ref: "prompts/chat.yaml#system"
      model:
        provider: "openai"
        name: "gpt-4"
        kwargs:
          temperature: 0.7
      output_key: "response"
```
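The `prompt_ref` above selects a fragment from a separate prompts file; the text after `#` names a top-level key. A hypothetical `prompts/chat.yaml` consistent with that reference could look like this (the file layout is an assumption inferred from the fragment syntax, not taken from Yagra's docs):

```yaml
# prompts/chat.yaml (hypothetical content; "#system" selects the "system" key)
system: |
  You are a helpful assistant. Answer the user's question concisely.
```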
The handler automatically:
- Extracts and interpolates prompts
- Calls LLM via litellm (100+ providers)
- Handles retries and timeouts
- Returns structured output
See the full working example: examples/llm-basic/
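As a rough mental model for the `retry` and `timeout` options above (an illustrative sketch, not Yagra's actual implementation), a retry wrapper behaves like:

```python
import time

def with_retries(fn, retry=3, delay=0.0):
    """Call fn, retrying up to `retry` times on any exception."""
    last_exc = None
    for _ in range(retry):
        try:
            return fn()
        except Exception as exc:  # in practice, catch specific transient errors
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# A call that fails twice, then succeeds on the third attempt
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(with_retries(flaky, retry=3))  # ok
```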
Structured Output Handler (Beta)
Use `create_structured_llm_handler()` to get type-safe Pydantic model instances from LLM responses:
```python
from pydantic import BaseModel

from yagra import Yagra
from yagra.handlers import create_structured_llm_handler

class PersonInfo(BaseModel):
    name: str
    age: int

handler = create_structured_llm_handler(schema=PersonInfo)
registry = {"structured_llm": handler}

app = Yagra.from_workflow("workflow.yaml", registry)
result = app.invoke({"text": "My name is Alice and I am 30."})

person: PersonInfo = result["person"]  # Type-safe!
print(person.name, person.age)  # Alice 30
```
The handler automatically:
- Enables JSON output mode (`response_format=json_object`)
- Injects the JSON Schema into the system prompt
- Validates and parses the response with Pydantic
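The final validation step is standard Pydantic parsing. A minimal standalone sketch of just that step, with a hypothetical raw LLM response:

```python
from pydantic import BaseModel, ValidationError

class PersonInfo(BaseModel):
    name: str
    age: int

# Hypothetical JSON string returned by the model in JSON output mode
raw = '{"name": "Alice", "age": 30}'

person = PersonInfo.model_validate_json(raw)
print(person.name, person.age)  # Alice 30

# Malformed output raises ValidationError instead of passing through silently
try:
    PersonInfo.model_validate_json('{"name": "Bob"}')
except ValidationError:
    print("validation failed: missing 'age'")
```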
Dynamic schema (no Python code required): Define the schema directly in your workflow YAML using `schema_yaml`, and call `create_structured_llm_handler()` with no arguments:
```python
from yagra.handlers import create_structured_llm_handler

# No Pydantic model needed in Python code
handler = create_structured_llm_handler()
registry = {"structured_llm": handler}
```
```yaml
# workflow.yaml
nodes:
  - id: "extract"
    handler: "structured_llm"
    params:
      schema_yaml: |
        name: str
        age: int
        hobbies: list[str]
      prompt_ref: "prompts.yaml#extract"
      model:
        provider: "openai"
        name: "gpt-4o"
      output_key: "person"
```
Supported types in `schema_yaml`: `str`, `int`, `float`, `bool`, `list[str]`, `list[int]`, `dict[str, str]`, `str | None`, etc.
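Dynamic models like this can be built with Pydantic's `create_model`. The sketch below shows how `schema_yaml` entries could map to model fields; it is an illustration of the idea, not Yagra's actual implementation (simple string splitting stands in for a YAML parser since the entries are flat `key: type` pairs):

```python
from pydantic import create_model

# Hypothetical mapping from schema_yaml type strings to Python types
TYPE_MAP = {
    "str": str, "int": int, "float": float, "bool": bool,
    "list[str]": list[str], "list[int]": list[int],
}

schema_yaml = """\
name: str
age: int
hobbies: list[str]
"""

# Build (type, default) field tuples; "..." marks a required field
fields = {}
for line in schema_yaml.strip().splitlines():
    key, type_name = (part.strip() for part in line.split(":", 1))
    fields[key] = (TYPE_MAP[type_name], ...)

Person = create_model("Person", **fields)
p = Person(name="Alice", age=30, hobbies=["tennis"])
print(p.age)  # 30
```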
See the full working example: examples/llm-structured/
Streaming Handler (Beta)
Stream LLM responses chunk by chunk:
```python
from yagra import Yagra
from yagra.handlers import create_streaming_llm_handler

handler = create_streaming_llm_handler(retry=3, timeout=60)
registry = {"streaming_llm": handler}

app = Yagra.from_workflow("workflow.yaml", registry)
result = app.invoke({"query": "Tell me about Python async"})

# Incremental processing
for chunk in result["response"]:
    print(chunk, end="", flush=True)

# Or buffered
full_text = "".join(result["response"])
```
Note: The `Generator` is single-use. Consume it once, with either `for` or `"".join(...)`.
See the full working example: examples/llm-streaming/
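The single-use behavior is plain Python generator semantics, easy to verify in isolation:

```python
def chunks():
    # Stand-in for a streaming LLM response
    yield from ["Tell", " me", " more"]

gen = chunks()
first = "".join(gen)   # consumes every chunk
second = "".join(gen)  # the generator is now exhausted

print(repr(first))   # 'Tell me more'
print(repr(second))  # ''
```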
🚀 Quick Start
Option 1: From Template (Recommended)
Yagra provides ready-to-use templates for common workflow patterns.
```bash
# List available templates
yagra init --list

# Initialize from a template
yagra init --template branch --output my-workflow

# Validate the generated workflow
yagra validate --workflow my-workflow/workflow.yaml
```
Available templates:
- branch: Conditional branching pattern
- loop: Planner → Evaluator loop pattern
- rag: Retrieve → Rerank → Generate RAG pattern
Option 2: From Scratch
1. Define State and Handler Functions
```python
from typing import TypedDict

from yagra import Yagra

class AgentState(TypedDict, total=False):
    query: str
    intent: str
    answer: str
    __next__: str  # For conditional branching

def classify_intent(state: AgentState, params: dict) -> dict:
    # "料金" means "pricing" in Japanese
    intent = "faq" if "料金" in state.get("query", "") else "general"
    return {"intent": intent, "__next__": intent}

def answer_faq(state: AgentState, params: dict) -> dict:
    prompt = params.get("prompt", {})
    return {"answer": f"FAQ: {prompt.get('system', '')}"}

def answer_general(state: AgentState, params: dict) -> dict:
    model = params.get("model", {})
    return {"answer": f"GENERAL via {model.get('name', 'unknown')}"}

def finish(state: AgentState, params: dict) -> dict:
    return {"answer": state.get("answer", "")}
```
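Because handlers are plain functions of `(state, params)`, they can be unit-tested without building a graph. For example, `classify_intent` can be exercised directly (the function is repeated here so the snippet is self-contained; `"料金"` is Japanese for "pricing"):

```python
def classify_intent(state: dict, params: dict) -> dict:
    # Same logic as the handler above: route pricing questions to the FAQ bot
    intent = "faq" if "料金" in state.get("query", "") else "general"
    return {"intent": intent, "__next__": intent}

faq = classify_intent({"query": "料金を教えて"}, {})       # "Tell me about pricing"
other = classify_intent({"query": "What is LangGraph?"}, {})
print(faq["__next__"], other["__next__"])  # faq general
```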
2. Define Workflow YAML
workflows/support.yaml:

```yaml
version: "1.0"
start_at: "classifier"
end_at:
  - "finish"

nodes:
  - id: "classifier"
    handler: "classify_intent"
  - id: "faq_bot"
    handler: "answer_faq"
    params:
      prompt_ref: "../prompts/support_prompts.yaml#faq"
  - id: "general_bot"
    handler: "answer_general"
    params:
      model:
        provider: "openai"
        name: "gpt-4.1-mini"
  - id: "finish"
    handler: "finish"

edges:
  - source: "classifier"
    target: "faq_bot"
    condition: "faq"
  - source: "classifier"
    target: "general_bot"
    condition: "general"
  - source: "faq_bot"
    target: "finish"
  - source: "general_bot"
    target: "finish"
```
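The `condition` values on the classifier's outgoing edges are matched against the `__next__` key that `classify_intent` writes into state. A rough sketch of that resolution (illustrative only, not Yagra's internals):

```python
# Edge definitions mirroring the YAML above
edges = [
    {"source": "classifier", "target": "faq_bot", "condition": "faq"},
    {"source": "classifier", "target": "general_bot", "condition": "general"},
]

def route(source: str, state: dict) -> str:
    """Pick the outgoing edge whose condition matches state['__next__']."""
    key = state.get("__next__")
    for edge in edges:
        if edge["source"] == source and edge["condition"] == key:
            return edge["target"]
    raise ValueError(f"no edge from {source!r} matches {key!r}")

print(route("classifier", {"__next__": "faq"}))      # faq_bot
print(route("classifier", {"__next__": "general"}))  # general_bot
```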
3. Register Handlers and Run
```python
registry = {
    "classify_intent": classify_intent,
    "answer_faq": answer_faq,
    "answer_general": answer_general,
    "finish": finish,
}

app = Yagra.from_workflow(
    workflow_path="workflows/support.yaml",
    registry=registry,
    state_schema=AgentState,
)

result = app.invoke({"query": "料金を教えて"})  # "Tell me about pricing"
print(result["answer"])
```
🛠️ CLI Tools
Yagra provides CLI commands for workflow management:
yagra init
Initialize a workflow from a template.
```bash
yagra init --template branch --output my-workflow
```
yagra schema
Export JSON Schema for workflow YAML (useful for coding agents).
```bash
yagra schema --output workflow-schema.json
```
yagra validate
Validate a workflow YAML and report issues.
```bash
# Human-readable output
yagra validate --workflow workflows/support.yaml

# JSON output for agent consumption
yagra validate --workflow workflows/support.yaml --format json
```
yagra visualize
Generate a read-only visualization HTML.
```bash
yagra visualize --workflow workflows/support.yaml --output /tmp/workflow.html
```
yagra studio
Launch an interactive WebUI for visual editing, drag-and-drop node/edge management, and workflow persistence.
```bash
# Launch with workflow selector (recommended)
yagra studio --port 8787

# Launch with a specific workflow
yagra studio --workflow workflows/support.yaml --port 8787
```
Open http://127.0.0.1:8787/ in your browser.
Studio Features:
- Handler Type Selector: the Node Properties panel provides a type selector (`llm` / `structured_llm` / `streaming_llm` / `custom`)
  - Predefined types auto-populate the handler name, so no manual typing is required
  - The `custom` type enables free-text input for user-defined handlers
- Handler-Aware Forms: form sections adapt automatically to the selected handler type
  - `structured_llm` → Schema Settings section (edit `schema_yaml` as YAML)
  - `streaming_llm` → Streaming Settings section (`stream: false` toggle)
  - `custom` → LLM-specific sections are hidden automatically
- Visual Editing: Edit prompts, models, and conditions via forms
- Drag & Drop: Add nodes, connect edges, adjust layout visually
- Diff Preview: Review changes before saving
- Backup & Rollback: Automatic backups with rollback support
- Validation: Real-time validation with detailed error messages
📚 Documentation
Full documentation is available at shogo-hs.github.io/Yagra
- User Guide: Installation, YAML syntax, CLI tools
- API Reference: Python API documentation
- Examples: Practical use cases
You can also build documentation locally:
```bash
uv run sphinx-build -b html docs/sphinx/source docs/sphinx/_build/html
```
🎯 Use Cases
- Prototype LLM agent flows and iterate rapidly by swapping YAML files
- Enable non-engineers to adjust workflows (prompts, models, branching) without code changes
- Integrate with coding agents for automated workflow generation and validation
- Reduce boilerplate code when building LangGraph applications with complex control flow
🤝 Contributing
Contributions are welcome! Please see CONTRIBUTING.md for development setup, coding standards, and guidelines.
📄 License
MIT License - see LICENSE for details.
📝 Changelog
See CHANGELOG.md for release history.
Built with ❤️ for the LangGraph community