

Yagra



Declarative LangGraph Builder powered by YAML

Yagra enables you to build LangGraph's StateGraph from YAML definitions, separating workflow logic from Python implementation. Define nodes, edges, and branching conditions in YAML files—swap configurations without touching code.

Designed for LLM agent developers, prompt engineers, and non-technical stakeholders who want to iterate on workflows quickly without diving into Python code every time.

Built with AI-Native principles: JSON Schema export and validation CLI enable coding agents (Claude Code, Codex, etc.) to generate and validate workflows automatically.

✨ Key Features

  • Declarative Workflow Management: Define nodes, edges, and conditional branching in YAML
  • Implementation-Configuration Separation: Connect YAML handler strings to Python callables via Registry
  • Schema Validation: Catch configuration errors early with Pydantic-based validation
  • Visual Workflow Editor: Launch Studio WebUI for visual editing, drag-and-drop node/edge management, and diff preview
  • Template Library: Quick-start templates for common patterns (branching, loops, RAG)
  • AI-Ready: JSON Schema export (yagra schema) and structured validation for coding agents

📦 Installation

Requires Python 3.12+.

pip install yagra

# With LLM handler utilities (optional)
pip install 'yagra[llm]'

LLM Handler Utilities (Beta)

Yagra provides handler utilities to reduce boilerplate code for LLM nodes:

from yagra import Yagra
from yagra.handlers import create_llm_handler

# Create a generic LLM handler
llm = create_llm_handler(retry=3, timeout=30)

# Register and use in workflow
registry = {"llm": llm}
app = Yagra.from_workflow("workflow.yaml", registry)

YAML Definition:

nodes:
  - id: "chat"
    handler: "llm"
    params:
      prompt_ref: "prompts/chat.yaml#system"
      model:
        provider: "openai"
        name: "gpt-4"
        kwargs:
          temperature: 0.7
      output_key: "response"

The handler automatically:

  • Extracts and interpolates prompts
  • Calls LLM via litellm (100+ providers)
  • Handles retries and timeouts
  • Returns structured output
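The prompt_ref above points into a separate prompt YAML and selects a key after the #. The exact file schema is not shown in this README, so the following layout is an assumption for illustration only:

```yaml
# prompts/chat.yaml -- hypothetical layout; "#system" would select the "system" key
system: |
  You are a helpful assistant.
  Answer the user's question concisely.
```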

See the full working example: examples/llm-basic/

Structured Output Handler (Beta)

Use create_structured_llm_handler() to get type-safe Pydantic model instances from LLM responses:

from pydantic import BaseModel

from yagra import Yagra
from yagra.handlers import create_structured_llm_handler

class PersonInfo(BaseModel):
    name: str
    age: int

handler = create_structured_llm_handler(schema=PersonInfo)
registry = {"structured_llm": handler}
app = Yagra.from_workflow("workflow.yaml", registry)

result = app.invoke({"text": "My name is Alice and I am 30."})
person: PersonInfo = result["person"]  # Type-safe!
print(person.name, person.age)  # Alice 30

The handler automatically:

  • Enables JSON output mode (response_format=json_object)
  • Injects JSON Schema into the system prompt
  • Validates and parses the response with Pydantic
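Independent of Yagra's internals, the underlying pattern is straightforward: inject the JSON Schema into the system prompt, then parse and validate the model's JSON reply. A minimal standard-library sketch of that idea (the function names and schema here are illustrative, not Yagra's API):

```python
import json

# Hypothetical sketch of the structured-output pattern, stdlib only.
SCHEMA = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

def build_system_prompt(base: str, schema: dict) -> str:
    # Inject the JSON Schema so the model knows the expected output shape.
    return f"{base}\nRespond with JSON matching this schema:\n{json.dumps(schema)}"

def parse_response(raw: str, schema: dict) -> dict:
    # Parse the reply and check that all required fields are present.
    data = json.loads(raw)
    for key in schema["required"]:
        if key not in data:
            raise ValueError(f"missing required field: {key}")
    return data

prompt = build_system_prompt("Extract the person's details.", SCHEMA)
result = parse_response('{"name": "Alice", "age": 30}', SCHEMA)
print(result["name"], result["age"])  # Alice 30
```

In the real handler, Pydantic's validation replaces the manual required-field check and yields a typed model instance instead of a dict.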

See the full working example: examples/llm-structured/

Streaming Handler (Beta)

Stream LLM responses chunk by chunk:

from yagra import Yagra
from yagra.handlers import create_streaming_llm_handler

handler = create_streaming_llm_handler(retry=3, timeout=60)
registry = {"streaming_llm": handler}

app = Yagra.from_workflow("workflow.yaml", registry)
result = app.invoke({"query": "Tell me about Python async"})

# Incremental processing
for chunk in result["response"]:
    print(chunk, end="", flush=True)

# Or buffered
full_text = "".join(result["response"])

Note: the returned generator is single-use. Consume it exactly once, either by iterating with for or by buffering with "".join(...).
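This single-use behavior is plain Python generator semantics, which a quick standard-library demonstration makes concrete (the chunks() generator below is a stand-in for a streaming response):

```python
def chunks():
    # Stand-in for a streaming LLM response generator.
    yield "Hello, "
    yield "world"

gen = chunks()
first_pass = "".join(gen)   # consumes the generator
second_pass = "".join(gen)  # already exhausted: empty string

print(repr(first_pass))   # 'Hello, world'
print(repr(second_pass))  # ''

# To reuse the text, buffer it once and keep the buffered copy:
buffered = list(chunks())
text = "".join(buffered)
```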

See the full working example: examples/llm-streaming/

🚀 Quick Start

Option 1: From Template (Recommended)

Yagra provides ready-to-use templates for common workflow patterns.

# List available templates
yagra init --list

# Initialize from a template
yagra init --template branch --output my-workflow

# Validate the generated workflow
yagra validate --workflow my-workflow/workflow.yaml

Available templates:

  • branch: Conditional branching pattern
  • loop: Planner → Evaluator loop pattern
  • rag: Retrieve → Rerank → Generate RAG pattern

Option 2: From Scratch

1. Define State and Handler Functions

from typing import TypedDict
from yagra import Yagra


class AgentState(TypedDict, total=False):
    query: str
    intent: str
    answer: str
    __next__: str  # For conditional branching


def classify_intent(state: AgentState, params: dict) -> dict:
    # "料金" means "pricing" in Japanese: route pricing questions to the FAQ bot.
    intent = "faq" if "料金" in state.get("query", "") else "general"
    return {"intent": intent, "__next__": intent}


def answer_faq(state: AgentState, params: dict) -> dict:
    prompt = params.get("prompt", {})
    return {"answer": f"FAQ: {prompt.get('system', '')}"}


def answer_general(state: AgentState, params: dict) -> dict:
    model = params.get("model", {})
    return {"answer": f"GENERAL via {model.get('name', 'unknown')}"}


def finish(state: AgentState, params: dict) -> dict:
    return {"answer": state.get("answer", "")}

2. Define Workflow YAML

workflows/support.yaml

version: "1.0"
start_at: "classifier"
end_at:
  - "finish"

nodes:
  - id: "classifier"
    handler: "classify_intent"
  - id: "faq_bot"
    handler: "answer_faq"
    params:
      prompt_ref: "../prompts/support_prompts.yaml#faq"
  - id: "general_bot"
    handler: "answer_general"
    params:
      model:
        provider: "openai"
        name: "gpt-4.1-mini"
  - id: "finish"
    handler: "finish"

edges:
  - source: "classifier"
    target: "faq_bot"
    condition: "faq"
  - source: "classifier"
    target: "general_bot"
    condition: "general"
  - source: "faq_bot"
    target: "finish"
  - source: "general_bot"
    target: "finish"

3. Register Handlers and Run

registry = {
    "classify_intent": classify_intent,
    "answer_faq": answer_faq,
    "answer_general": answer_general,
    "finish": finish,
}

app = Yagra.from_workflow(
    workflow_path="workflows/support.yaml",
    registry=registry,
    state_schema=AgentState,
)

result = app.invoke({"query": "料金を教えて"})  # "Tell me about the pricing"
print(result["answer"])

🛠️ CLI Tools

Yagra provides CLI commands for workflow management:

yagra init

Initialize a workflow from a template.

yagra init --template branch --output my-workflow

yagra schema

Export JSON Schema for workflow YAML (useful for coding agents).

yagra schema --output workflow-schema.json

yagra validate

Validate a workflow YAML and report issues.

# Human-readable output
yagra validate --workflow workflows/support.yaml

# JSON output for agent consumption
yagra validate --workflow workflows/support.yaml --format json

yagra visualize

Generate a read-only visualization HTML.

yagra visualize --workflow workflows/support.yaml --output /tmp/workflow.html

yagra studio

Launch an interactive WebUI for visual editing, drag-and-drop node/edge management, and workflow persistence.

# Launch with workflow selector (recommended)
yagra studio --port 8787

# Launch with a specific workflow
yagra studio --workflow workflows/support.yaml --port 8787

Open http://127.0.0.1:8787/ in your browser.

Studio Features:

  • Handler Type Selector: Node Properties panel provides a type selector (llm / structured_llm / streaming_llm / custom)
    • Predefined types auto-populate the handler name — no manual typing required
    • custom type enables free-text input for user-defined handlers
  • Handler-Aware Forms: Form sections adapt automatically to the selected handler type
    • structured_llm → Schema Settings section (edit schema_yaml as YAML)
    • streaming_llm → Streaming Settings section (stream: false toggle)
    • custom → LLM-specific sections hidden automatically
  • Visual Editing: Edit prompts, models, and conditions via forms
  • Drag & Drop: Add nodes, connect edges, adjust layout visually
  • Diff Preview: Review changes before saving
  • Backup & Rollback: Automatic backups with rollback support
  • Validation: Real-time validation with detailed error messages

📚 Documentation

Full documentation is available at shogo-hs.github.io/Yagra

You can also build the documentation locally:

uv run sphinx-build -b html docs/sphinx/source docs/sphinx/_build/html

🎯 Use Cases

  • Prototype LLM agent flows and iterate rapidly by swapping YAML files
  • Enable non-engineers to adjust workflows (prompts, models, branching) without code changes
  • Integrate with coding agents for automated workflow generation and validation
  • Reduce boilerplate code when building LangGraph applications with complex control flow

🤝 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for development setup, coding standards, and guidelines.

📄 License

MIT License - see LICENSE for details.

📝 Changelog

See CHANGELOG.md for release history.


Built with ❤️ for the LangGraph community
