
Yagra


Declarative LangGraph Builder powered by YAML

Yagra enables you to build LangGraph's StateGraph from YAML definitions, separating workflow logic from Python implementation. Define nodes, edges, and branching conditions in YAML files—swap configurations without touching code.

Designed for LLM agent developers, prompt engineers, and non-technical stakeholders who want to iterate on workflows quickly without diving into Python code every time.

Built with AI-Native principles: JSON Schema export and validation CLI enable coding agents (Claude Code, Codex, etc.) to generate and validate workflows automatically.

✨ Key Features

  • Declarative Workflow Management: Define nodes, edges, and conditional branching in YAML
  • Implementation-Configuration Separation: Connect YAML handler strings to Python callables via Registry
  • Schema Validation: Catch configuration errors early with Pydantic-based validation
  • Custom State Schema: Pass any TypedDict (including MessagesState) via state_schema — full LangGraph reducer support
  • Advanced Patterns: Fan-out/fan-in (parallel map-reduce via Send API) and subgraph nesting for composable workflows
  • Visual Workflow Editor: Launch Studio WebUI for visual editing, drag-and-drop node/edge management, and diff preview
  • Template Library: Quick-start templates for common patterns (branching, loops, RAG, parallel, subgraph, and more)
  • MCP Server: Expose Yagra tools to AI agents via Model Context Protocol (yagra[mcp])
  • AI-Ready: JSON Schema export (yagra schema) and structured validation for coding agents

📦 Installation

Requires Python 3.12+.
# Recommended (uv)
uv add yagra

# With LLM handler utilities (optional)
uv add 'yagra[llm]'

# Or with pip
pip install yagra
pip install 'yagra[llm]'

LLM Handler Utilities (Beta)

Yagra provides handler utilities to reduce boilerplate code for LLM nodes:

from yagra import Yagra
from yagra.handlers import create_llm_handler

# Create a generic LLM handler
llm = create_llm_handler(retry=3, timeout=30)

# Register and use in workflow
registry = {"llm": llm}
app = Yagra.from_workflow("workflow.yaml", registry)

YAML Definition:

nodes:
  - id: "chat"
    handler: "llm"
    params:
      prompt_ref: "prompts/chat.yaml#system"
      model:
        provider: "openai"
        name: "gpt-4"
        kwargs:
          temperature: 0.7
      output_key: "response"

The handler automatically:

  • Extracts and interpolates prompts
  • Calls LLM via litellm (100+ providers)
  • Handles retries and timeouts
  • Returns structured output

See the full working example: examples/llm-basic/

Structured Output Handler (Beta)

Use create_structured_llm_handler() to get type-safe Pydantic model instances from LLM responses:

from pydantic import BaseModel

from yagra import Yagra
from yagra.handlers import create_structured_llm_handler

class PersonInfo(BaseModel):
    name: str
    age: int

handler = create_structured_llm_handler(schema=PersonInfo)
registry = {"structured_llm": handler}
app = Yagra.from_workflow("workflow.yaml", registry)

result = app.invoke({"text": "My name is Alice and I am 30."})
person: PersonInfo = result["person"]  # Type-safe!
print(person.name, person.age)  # Alice 30

The handler automatically:

  • Enables JSON output mode (response_format=json_object)
  • Injects JSON Schema into the system prompt
  • Validates and parses the response with Pydantic
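
Conceptually the flow is: append the schema to the prompt, request JSON, then validate the reply. A stdlib-only sketch of that idea (the real handler uses Pydantic for validation and litellm for the call; the function names here are illustrative, not Yagra's API):

```python
import json

def build_structured_prompt(system_prompt: str, json_schema: dict) -> str:
    # Append the JSON Schema so the model knows the required output shape.
    return (
        f"{system_prompt}\n\n"
        "Respond with JSON matching this schema:\n"
        f"{json.dumps(json_schema)}"
    )

def parse_structured_response(raw: str, required: list[str]) -> dict:
    # Parse and minimally check required keys; the real handler
    # performs full validation with the Pydantic model.
    data = json.loads(raw)
    missing = [key for key in required if key not in data]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return data

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}
prompt = build_structured_prompt("Extract the person's details.", schema)
parsed = parse_structured_response('{"name": "Alice", "age": 30}', ["name", "age"])
print(parsed["name"], parsed["age"])  # Alice 30
```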

Dynamic schema (no Python code required): Define the schema directly in your workflow YAML using schema_yaml, and call create_structured_llm_handler() with no arguments:

# No Pydantic model needed in Python code
handler = create_structured_llm_handler()
registry = {"structured_llm": handler}

# workflow.yaml
nodes:
  - id: "extract"
    handler: "structured_llm"
    params:
      schema_yaml: |
        name: str
        age: int
        hobbies: list[str]
      prompt_ref: "prompts.yaml#extract"
      model:
        provider: "openai"
        name: "gpt-4o"
      output_key: "person"

Supported types in schema_yaml: str, int, float, bool, list[str], list[int], dict[str, str], str | None, etc.
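
As a mental model, each `schema_yaml` line maps a field name to a Python annotation. Here is a hypothetical parser for the simple scalar cases only (Yagra's actual parser also handles `list[...]`, `dict[...]`, and `| None` unions):

```python
# Hypothetical sketch: map simple schema_yaml lines to Python types.
# This is a mental model, not Yagra's internal parser.
SIMPLE_TYPES = {"str": str, "int": int, "float": float, "bool": bool}

def parse_simple_schema(schema_yaml: str) -> dict[str, type]:
    fields = {}
    for line in schema_yaml.strip().splitlines():
        name, type_name = (part.strip() for part in line.split(":", 1))
        fields[name] = SIMPLE_TYPES[type_name]
    return fields

fields = parse_simple_schema("name: str\nage: int")
print(fields)  # {'name': <class 'str'>, 'age': <class 'int'>}
```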

See the full working example: examples/llm-structured/

Streaming Handler (Beta)

Stream LLM responses chunk by chunk:

from yagra import Yagra
from yagra.handlers import create_streaming_llm_handler

handler = create_streaming_llm_handler(retry=3, timeout=60)
registry = {"streaming_llm": handler}

yagra = Yagra.from_workflow("workflow.yaml", registry)
result = yagra.invoke({"query": "Tell me about Python async"})

# Incremental processing
for chunk in result["response"]:
    print(chunk, end="", flush=True)

# Or buffered
full_text = "".join(result["response"])

Note: the generator in result["response"] is single-use. Consume it exactly once, either with a for loop or with "".join(...).
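
This single-use behaviour is plain Python generator semantics, as this self-contained snippet shows:

```python
def stream_chunks():
    # Stand-in for the streamed LLM response chunks.
    yield from ["Py", "thon ", "async"]

chunks = stream_chunks()
first_pass = "".join(chunks)   # consumes the generator
second_pass = "".join(chunks)  # already exhausted, so this is empty
print(repr(first_pass), repr(second_pass))  # 'Python async' ''
```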

See the full working example: examples/llm-streaming/

🚀 Quick Start

Option 1: From Template (Recommended)

Yagra provides ready-to-use templates for common workflow patterns.

# List available templates
yagra init --list

# Initialize from a template
yagra init --template branch --output my-workflow

# Validate the generated workflow
yagra validate --workflow my-workflow/workflow.yaml

Available templates:

  • branch: Conditional branching pattern
  • chat: Single-node chat with MessagesState and add_messages reducer
  • loop: Planner → Evaluator loop pattern
  • parallel: Fan-out/fan-in map-reduce pattern via Send API
  • rag: Retrieve → Rerank → Generate RAG pattern
  • subgraph: Nested subgraph pattern for composable multi-workflow architectures
  • tool-use: LLM decides whether to call external tools and uses their results to answer
  • multi-agent: Orchestrator, researcher, and writer agents collaborate in a multi-agent pattern
  • human-review: Human-in-the-loop pattern that pauses for review and approval via interrupt_before

Option 2: From Scratch

1. Define State and Handler Functions

from typing import TypedDict
from yagra import Yagra


class AgentState(TypedDict, total=False):
    query: str
    intent: str
    answer: str
    __next__: str  # For conditional branching


def classify_intent(state: AgentState, params: dict) -> dict:
    # "料金" means "pricing"; route pricing questions to the FAQ bot
    intent = "faq" if "料金" in state.get("query", "") else "general"
    return {"intent": intent, "__next__": intent}


def answer_faq(state: AgentState, params: dict) -> dict:
    prompt = params.get("prompt", {})
    return {"answer": f"FAQ: {prompt.get('system', '')}"}


def answer_general(state: AgentState, params: dict) -> dict:
    model = params.get("model", {})
    return {"answer": f"GENERAL via {model.get('name', 'unknown')}"}


def finish(state: AgentState, params: dict) -> dict:
    return {"answer": state.get("answer", "")}

2. Define Workflow YAML

workflows/support.yaml

version: "1.0"
start_at: "classifier"
end_at:
  - "finish"

nodes:
  - id: "classifier"
    handler: "classify_intent"
  - id: "faq_bot"
    handler: "answer_faq"
    params:
      prompt_ref: "../prompts/support_prompts.yaml#faq"
  - id: "general_bot"
    handler: "answer_general"
    params:
      model:
        provider: "openai"
        name: "gpt-4.1-mini"
  - id: "finish"
    handler: "finish"

edges:
  - source: "classifier"
    target: "faq_bot"
    condition: "faq"
  - source: "classifier"
    target: "general_bot"
    condition: "general"
  - source: "faq_bot"
    target: "finish"
  - source: "general_bot"
    target: "finish"

3. Register Handlers and Run

registry = {
    "classify_intent": classify_intent,
    "answer_faq": answer_faq,
    "answer_general": answer_general,
    "finish": finish,
}

app = Yagra.from_workflow(
    workflow_path="workflows/support.yaml",
    registry=registry,
    state_schema=AgentState,
)

result = app.invoke({"query": "料金を教えて"})  # "Tell me about pricing"
print(result["answer"])
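
To see how the conditional edges resolve, here is a simplified model of the routing. The real workflow compiles to a LangGraph StateGraph; this sketch only mimics how the `__next__` value returned by a node selects the edge with the matching `condition`:

```python
# Simplified model of condition-based routing, not Yagra's internals.
edges = [
    {"source": "classifier", "target": "faq_bot", "condition": "faq"},
    {"source": "classifier", "target": "general_bot", "condition": "general"},
]

def route(source: str, state: dict) -> str:
    # Follow the outgoing edge whose condition matches state["__next__"].
    key = state["__next__"]
    for edge in edges:
        if edge["source"] == source and edge.get("condition") == key:
            return edge["target"]
    raise ValueError(f"no edge from {source} matches {key!r}")

print(route("classifier", {"__next__": "faq"}))      # faq_bot
print(route("classifier", {"__next__": "general"}))  # general_bot
```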

🛠️ CLI Tools

Yagra provides CLI commands for workflow management:

yagra init

Initialize a workflow from a template.

yagra init --template branch --output my-workflow

yagra schema

Export JSON Schema for workflow YAML (useful for coding agents).

yagra schema --output workflow-schema.json

yagra validate

Validate a workflow YAML and report issues.

# Human-readable output
yagra validate --workflow workflows/support.yaml

# JSON output for agent consumption
yagra validate --workflow workflows/support.yaml --format json

yagra explain

Statically analyze a workflow YAML to show execution paths, required handlers, and variable flow.

# JSON output (default)
yagra explain --workflow workflows/support.yaml

# Read from stdin (pipe-friendly)
cat workflows/support.yaml | yagra explain --workflow -

yagra handlers

List built-in handler parameter schemas (useful for coding agents).

# Human-readable output
yagra handlers

# JSON output for agent consumption
yagra handlers --format json

yagra mcp

Launch Yagra as an MCP (Model Context Protocol) server. Requires the yagra[mcp] extra.

# Install with MCP support
pip install "yagra[mcp]"
# or
uv add "yagra[mcp]"

# Start the MCP server (stdio mode)
yagra mcp

Available MCP tools: validate_workflow, explain_workflow, list_templates, list_handlers

yagra visualize

Generate a read-only visualization HTML.

yagra visualize --workflow workflows/support.yaml --output /tmp/workflow.html

yagra studio

Launch an interactive WebUI for visual editing, drag-and-drop node/edge management, and workflow persistence.

# Launch with workflow selector (recommended)
yagra studio --port 8787

# Launch with a specific workflow
yagra studio --workflow workflows/support.yaml --port 8787

Open http://127.0.0.1:8787/ in your browser.

Studio Features:

  • Handler Type Selector: Node Properties panel provides a type selector (llm / structured_llm / streaming_llm / custom)
    • Predefined types auto-populate the handler name — no manual typing required
    • custom type enables free-text input for user-defined handlers
  • Handler-Aware Forms: Form sections adapt automatically to the selected handler type
    • structured_llm → Schema Settings section (edit schema_yaml as YAML)
    • streaming_llm → Streaming Settings section (stream: false toggle)
    • custom → LLM-specific sections hidden automatically
  • State Schema Editor: Define workflow-level state_schema fields visually via a table editor (name, type, reducer columns) — no YAML hand-editing required
  • Visual Editing: Edit prompts, models, and conditions via forms
  • Drag & Drop: Add nodes, connect edges, adjust layout visually
  • Diff Preview: Review changes before saving
  • Backup & Rollback: Automatic backups with rollback support
  • Validation: Real-time validation with detailed error messages

📚 Documentation

Full documentation is available at shogo-hs.github.io/Yagra

You can also build documentation locally:

uv run sphinx-build -b html docs/sphinx/source docs/sphinx/_build/html

🎯 Use Cases

  • Prototype LLM agent flows and iterate rapidly by swapping YAML files
  • Enable non-engineers to adjust workflows (prompts, models, branching) without code changes
  • Integrate with coding agents for automated workflow generation and validation
  • Reduce boilerplate code when building LangGraph applications with complex control flow

🤝 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for development setup, coding standards, and guidelines.

📄 License

MIT License - see LICENSE for details.

📝 Changelog

See CHANGELOG.md for release history.


Built with ❤️ for the LangGraph community
