Yagra

Declarative LangGraph Builder powered by YAML

Yagra enables you to build LangGraph's StateGraph from YAML definitions, separating workflow logic from Python implementation. Define nodes, edges, and branching conditions in YAML files, and swap configurations without touching code.

Designed for LLM agent developers, prompt engineers, and non-technical stakeholders who want to iterate on workflows quickly without diving into Python code every time.

Built with AI-Native principles: JSON Schema export and validation CLI enable coding agents (Claude Code, Codex, etc.) to generate and validate workflows automatically.

✨ Key Features

  • Declarative Workflow Management: Define nodes, edges, and conditional branching in YAML
  • Implementation-Configuration Separation: Connect YAML handler strings to Python callables via Registry
  • Schema Validation: Catch configuration errors early with Pydantic-based validation
  • Custom State Schema: Pass any TypedDict (including MessagesState) via state_schema — full LangGraph reducer support
  • Advanced Patterns: Fan-out/fan-in (parallel map-reduce via Send API) and subgraph nesting for composable workflows
  • Visual Workflow Editor: Launch Studio WebUI for visual editing, drag-and-drop node/edge management, and diff preview
  • Template Library: Quick-start templates for common patterns (branching, loops, RAG, parallel, subgraph, and more)
  • Golden Test (Regression Testing): Save execution traces as golden cases, then replay them with mocked LLM responses to verify workflow structure after YAML changes, with no API calls needed
    • Mock dispatch is resolved per node, so multiple nodes sharing the same handler name are replayed correctly
    • yagra golden save accepts repeatable --strategy node_id:strategy overrides (exact/structural/skip/auto) to persist per-node comparison behavior
    • Available via CLI (yagra golden) and MCP tool (run_golden_tests)
  • MCP Server: Expose Yagra tools to AI agents via Model Context Protocol (yagra[mcp])
  • AI-Ready: JSON Schema export (yagra schema) and structured validation for coding agents
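The custom state schema support listed above follows LangGraph's Annotated-reducer convention. A minimal standard-library sketch (the `notes` field and `operator.add` reducer are illustrative choices, not Yagra requirements):

```python
import operator
from typing import Annotated, TypedDict, get_type_hints


class AgentState(TypedDict, total=False):
    query: str                                  # plain field: last write wins
    notes: Annotated[list[str], operator.add]   # reducer field: updates are concatenated


# LangGraph discovers the reducer from the annotation metadata at build time.
hints = get_type_hints(AgentState, include_extras=True)
reducer = hints["notes"].__metadata__[0]
print(reducer(["a"], ["b"]))  # ['a', 'b']
```

Passing such a TypedDict via state_schema gives each field the merge behavior its annotation declares.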

📦 Installation

  • Requires Python 3.12+

# Recommended (uv)
uv add yagra

# With LLM handler utilities (optional)
uv add 'yagra[llm]'

# Or with pip
pip install yagra
pip install 'yagra[llm]'

LLM Handler Utilities (Beta)

Yagra provides handler utilities to reduce boilerplate code for LLM nodes:

from yagra import Yagra
from yagra.handlers import create_llm_handler

# Create a generic LLM handler
llm = create_llm_handler(retry=3, timeout=30)

# Register and use in workflow
registry = {"llm": llm}
app = Yagra.from_workflow("workflow.yaml", registry)

YAML Definition:

nodes:
  - id: "chat"
    handler: "llm"
    params:
      prompt_ref: "prompts/chat.yaml#system"
      model:
        provider: "openai"
        name: "gpt-4"
        kwargs:
          temperature: 0.7
      output_key: "response"

The handler automatically:

  • Extracts and interpolates prompts
  • Calls LLM via litellm (100+ providers)
  • Handles retries and timeouts
  • Returns structured output

See the full working example: examples/llm-basic/

Structured Output Handler (Beta)

Use create_structured_llm_handler() to get type-safe Pydantic model instances from LLM responses:

from pydantic import BaseModel
from yagra import Yagra
from yagra.handlers import create_structured_llm_handler

class PersonInfo(BaseModel):
    name: str
    age: int

handler = create_structured_llm_handler(schema=PersonInfo)
registry = {"structured_llm": handler}
app = Yagra.from_workflow("workflow.yaml", registry)

result = app.invoke({"text": "My name is Alice and I am 30."})
person: PersonInfo = result["person"]  # Type-safe!
print(person.name, person.age)  # Alice 30

The handler automatically:

  • Enables JSON output mode (response_format=json_object)
  • Injects JSON Schema into the system prompt
  • Validates and parses the response with Pydantic
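The overall pattern is: append the schema to the system prompt, request JSON output, then parse and check the reply. A stdlib-only sketch with a mocked LLM reply (Yagra performs the validation step with Pydantic; the prompt text and manual key check here are illustrative):

```python
import json

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

# The schema is injected into the system prompt so the model knows the target shape.
system_prompt = (
    "Extract the person's info. Respond with JSON matching this schema:\n"
    + json.dumps(schema)
)

# Mocked LLM reply (a real call would go through litellm with response_format=json_object).
raw_reply = '{"name": "Alice", "age": 30}'

parsed = json.loads(raw_reply)
missing = [k for k in schema["required"] if k not in parsed]
if missing:
    raise ValueError(f"missing keys: {missing}")
print(parsed["name"], parsed["age"])  # Alice 30
```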

Dynamic schema (no Python code required): Define the schema directly in your workflow YAML using schema_yaml, and call create_structured_llm_handler() with no arguments:

# No Pydantic model needed in Python code
handler = create_structured_llm_handler()
registry = {"structured_llm": handler}

workflow.yaml:

nodes:
  - id: "extract"
    handler: "structured_llm"
    params:
      schema_yaml: |
        name: str
        age: int
        hobbies: list[str]
      prompt_ref: "prompts.yaml#extract"
      model:
        provider: "openai"
        name: "gpt-4o"
      output_key: "person"

Supported types in schema_yaml: str, int, float, bool, list[str], list[int], dict[str, str], str | None, etc.

See the full working example: examples/llm-structured/

Streaming Handler (Beta)

Stream LLM responses chunk by chunk:

from yagra import Yagra
from yagra.handlers import create_streaming_llm_handler

handler = create_streaming_llm_handler(retry=3, timeout=60)
registry = {"streaming_llm": handler}

app = Yagra.from_workflow("workflow.yaml", registry)
result = app.invoke({"query": "Tell me about Python async"})

# Incremental processing
for chunk in result["response"]:
    print(chunk, end="", flush=True)

# Or buffered
full_text = "".join(result["response"])

Note: the generator stored in result["response"] is single-use. Consume it exactly once, with either the for loop or "".join(...), not both.
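Generator exhaustion is easy to trip over; a plain-Python illustration of the single-use behavior the note describes (the chunk values are made up):

```python
def chunks():
    # Stand-in for a streaming LLM response.
    yield from ["Py", "thon", " async"]


gen = chunks()
first_pass = "".join(gen)    # consumes the generator
second_pass = "".join(gen)   # already exhausted: empty string
print(repr(first_pass))   # 'Python async'
print(repr(second_pass))  # ''
```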

See the full working example: examples/llm-streaming/

🚀 Quick Start

Option 1: From Template (Recommended)

Yagra provides ready-to-use templates for common workflow patterns.

# List available templates
yagra init --list

# Initialize from a template
yagra init --template branch --output my-workflow

# Validate the generated workflow
yagra validate --workflow my-workflow/workflow.yaml

Available templates:

  • branch: Conditional branching pattern
  • chat: Single-node chat with MessagesState and add_messages reducer
  • loop: Planner → Evaluator loop pattern
  • parallel: Fan-out/fan-in map-reduce pattern via Send API
  • rag: Retrieve → Rerank → Generate RAG pattern
  • subgraph: Nested subgraph pattern for composable multi-workflow architectures
  • tool-use: LLM decides whether to invoke external tools and executes them to answer
  • multi-agent: Orchestrator, researcher, and writer agents collaborate in a multi-agent pattern
  • human-review: Human-in-the-loop pattern that pauses for review and approval via interrupt_before

Option 2: From Scratch

1. Define State and Handler Functions

from typing import TypedDict
from yagra import Yagra


class AgentState(TypedDict, total=False):
    query: str
    intent: str
    answer: str
    __next__: str  # For conditional branching


def classify_intent(state: AgentState, params: dict) -> dict:
    intent = "faq" if "料金" in state.get("query", "") else "general"  # "料金" = "pricing"
    return {"intent": intent, "__next__": intent}


def answer_faq(state: AgentState, params: dict) -> dict:
    prompt = params.get("prompt", {})
    return {"answer": f"FAQ: {prompt.get('system', '')}"}


def answer_general(state: AgentState, params: dict) -> dict:
    model = params.get("model", {})
    return {"answer": f"GENERAL via {model.get('name', 'unknown')}"}


def finish(state: AgentState, params: dict) -> dict:
    return {"answer": state.get("answer", "")}

2. Define Workflow YAML

workflows/support.yaml

version: "1.0"
start_at: "classifier"
end_at:
  - "finish"

nodes:
  - id: "classifier"
    handler: "classify_intent"
  - id: "faq_bot"
    handler: "answer_faq"
    params:
      prompt_ref: "../prompts/support_prompts.yaml#faq"
  - id: "general_bot"
    handler: "answer_general"
    params:
      model:
        provider: "openai"
        name: "gpt-4.1-mini"
  - id: "finish"
    handler: "finish"

edges:
  - source: "classifier"
    target: "faq_bot"
    condition: "faq"
  - source: "classifier"
    target: "general_bot"
    condition: "general"
  - source: "faq_bot"
    target: "finish"
  - source: "general_bot"
    target: "finish"

3. Register Handlers and Run

registry = {
    "classify_intent": classify_intent,
    "answer_faq": answer_faq,
    "answer_general": answer_general,
    "finish": finish,
}

app = Yagra.from_workflow(
    workflow_path="workflows/support.yaml",
    registry=registry,
    state_schema=AgentState,
)

result = app.invoke({"query": "料金を教えて"})  # "Tell me about pricing"
print(result["answer"])
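Because registry handlers are plain (state, params) -> dict callables, they can also be unit-tested in isolation, without building the graph. For example, exercising both branches of the classify_intent handler defined above:

```python
def classify_intent(state: dict, params: dict) -> dict:
    # "料金" means "pricing"; queries mentioning it route to the FAQ branch.
    intent = "faq" if "料金" in state.get("query", "") else "general"
    return {"intent": intent, "__next__": intent}


faq = classify_intent({"query": "料金を教えて"}, {})
general = classify_intent({"query": "hello"}, {})
print(faq["__next__"], general["__next__"])  # faq general
```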

🔍 Observability (Public Trace API)

If you enable observability=True, Yagra stores the latest execution trace in memory.

app = Yagra.from_workflow(
    workflow_path="workflows/support.yaml",
    registry=registry,
    observability=True,
)

app.invoke({"query": "料金を教えて"}, trace=False)
last_trace = app.get_last_trace()  # WorkflowRunTrace | None

  • get_last_trace() returns None when observability=False or before the first invoke().
  • trace=True controls JSON file output only (.yagra/traces/ or trace_dir), and does not affect the in-memory availability of get_last_trace().

🛠️ CLI Tools

Yagra provides CLI commands for workflow management:

yagra init

Initialize a workflow from a template.

yagra init --template branch --output my-workflow

yagra schema

Export JSON Schema for workflow YAML (useful for coding agents).

yagra schema --output workflow-schema.json

yagra validate

Validate a workflow YAML and report issues.

# Human-readable output
yagra validate --workflow workflows/support.yaml

# JSON output for agent consumption
yagra validate --workflow workflows/support.yaml --format json

yagra explain

Statically analyze a workflow YAML to show execution paths, required handlers, and variable flow.

# JSON output (default)
yagra explain --workflow workflows/support.yaml

# Read from stdin (pipe-friendly)
cat workflows/support.yaml | yagra explain --workflow -

yagra handlers

List built-in handler parameter schemas (useful for coding agents).

# Human-readable output
yagra handlers

# JSON output for agent consumption
yagra handlers --format json

yagra analyze

Aggregate and summarize execution traces from .yagra/traces/.

# Summarize all traces
yagra analyze

# Filter by workflow name, show 10 most recent traces
yagra analyze --workflow my-workflow --limit 10

# JSON output for agent consumption
yagra analyze --format json

yagra mcp

Launch Yagra as an MCP (Model Context Protocol) server. Requires yagra[mcp] extra.

# Install with MCP support
pip install "yagra[mcp]"
# or
uv add "yagra[mcp]"

# Start the MCP server (stdio mode)
yagra mcp

Available MCP tools: validate_workflow, validate_workflow_file, explain_workflow, explain_workflow_file, list_templates, list_handlers, get_template, get_traces, analyze_traces, propose_update, apply_update, rollback_update, run_golden_tests

yagra visualize

Generate a read-only HTML visualization of a workflow.

yagra visualize --workflow workflows/support.yaml --output /tmp/workflow.html

yagra studio

Launch an interactive WebUI for visual editing, drag-and-drop node/edge management, and workflow persistence.

# Launch with workflow selector (recommended)
yagra studio --port 8787

# Launch with a specific workflow
yagra studio --workflow workflows/support.yaml --port 8787

Open http://127.0.0.1:8787/ in your browser.

Studio Features:

  • Handler Type Selector: Node Properties panel provides a type selector (llm / structured_llm / streaming_llm / custom)
    • Predefined types auto-populate the handler name — no manual typing required
    • custom type enables free-text input for user-defined handlers
  • Handler-Aware Forms: Form sections adapt automatically to the selected handler type
    • structured_llm → Schema Settings section (edit schema_yaml as YAML)
    • streaming_llm → Streaming Settings section (stream: false toggle)
    • custom → LLM-specific sections hidden automatically
  • State Schema Editor: Define workflow-level state_schema fields visually via a table editor (name, type, reducer columns) — no YAML hand-editing required
  • Visual Editing: Edit prompts, models, and conditions via forms
  • Drag & Drop: Add nodes, connect edges, adjust layout visually
  • Diff Preview: Review changes before saving
  • Backup & Rollback: Automatic backups with rollback support
  • Validation: Real-time validation with detailed error messages

📚 Documentation

Full documentation is available at shogo-hs.github.io/Yagra

You can also build documentation locally:

uv run sphinx-build -b html docs/sphinx/source docs/sphinx/_build/html

🎯 Use Cases

  • Prototype LLM agent flows and iterate rapidly by swapping YAML files
  • Enable non-engineers to adjust workflows (prompts, models, branching) without code changes
  • Integrate with coding agents for automated workflow generation and validation
  • Reduce boilerplate code when building LangGraph applications with complex control flow

🤝 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for development setup, coding standards, and guidelines.

📄 License

MIT License - see LICENSE for details.

📝 Changelog

See CHANGELOG.md for release history.


Built with ❤️ for the LangGraph community

Download files

Download the file for your platform.

Source Distribution

yagra-1.0.1.tar.gz (1.8 MB)

Uploaded Source

Built Distribution

yagra-1.0.1-py3-none-any.whl (1.1 MB)

Uploaded Python 3

File details

Details for the file yagra-1.0.1.tar.gz.

File metadata

  • Download URL: yagra-1.0.1.tar.gz
  • Upload date:
  • Size: 1.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for yagra-1.0.1.tar.gz
Algorithm Hash digest
SHA256 780c3e39a8d60b059a87cb2c28fb768cde779ef30feccef84c4507264a4f79bd
MD5 4f8532e6dcceb331d9916cc7c3619664
BLAKE2b-256 98026cc083e58e9d76c3a70b15dc6dfd82de6072c9deb57438f41b1b98c9858b

Provenance

The following attestation bundles were made for yagra-1.0.1.tar.gz:

Publisher: publish.yml on shogo-hs/Yagra

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file yagra-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: yagra-1.0.1-py3-none-any.whl
  • Upload date:
  • Size: 1.1 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for yagra-1.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 e954ebe582588bed9ea1d7a6014792abd6de3234813577e27db0698c1100e22f
MD5 c0a7057f5a03699debcdb63981d9483c
BLAKE2b-256 317824791af6fe86fba555a938ea2bae56f093bd2fbeba4a2cee328ad05561c2

Provenance

The following attestation bundles were made for yagra-1.0.1-py3-none-any.whl:

Publisher: publish.yml on shogo-hs/Yagra

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
