
A tool for executing natural language recipe-like instructions


Recipe Executor

A Python execution engine for JSON-defined AI workflows. Recipe Executor runs structured "recipes" that combine file operations, LLM interactions, and control flow into automated workflows, making it well suited to AI-powered content generation, file processing, and complex automation tasks.

MIT License · Python 3.11+

What is Recipe Executor?

Recipe Executor is a pure execution engine that runs JSON "recipes" - structured workflow definitions that describe automated tasks. Think of it as a workflow engine specifically designed for AI-powered automation.

Key Features:

  • 🤖 Multi-LLM Support - OpenAI, Anthropic, Azure OpenAI, Ollama
  • 📁 File Operations - Read/write files with JSON/YAML parsing
  • 🔄 Control Flow - Conditionals, loops, parallel execution
  • 🛠️ Tool Integration - MCP (Model Context Protocol) server support
  • 🎯 Context Management - Shared state across workflow steps
  • ⚡ Concurrent Execution - Built-in parallelization and resource management

Quick Start

Installation

pip install recipe-executor

Basic Usage

  1. Create a recipe (JSON file):
{
  "name": "summarize_file",
  "steps": [
    {
      "step_type": "read_files",
      "paths": ["{{ input_file }}"]
    },
    {
      "step_type": "llm_generate", 
      "prompt": "Summarize this content:\n\n{{ file_contents[0] }}"
    },
    {
      "step_type": "write_files",
      "files": [
        {
          "path": "summary.md",
          "content": "{{ llm_output }}"
        }
      ]
    }
  ]
}
  2. Execute the recipe:
recipe-executor recipe.json --context input_file=document.txt
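The `{{ … }}` placeholders in recipes are template expressions resolved against the shared context at runtime. A toy illustration of the substitution idea, written with only the standard library (this is not the template engine Recipe Executor actually uses, which also supports indexing like `file_contents[0]` and attribute access):

```python
import re

def render(template: str, context: dict) -> str:
    # Replace each {{ name }} with the corresponding context value.
    # Toy version: simple variable names only, no filters or indexing.
    def substitute(match: re.Match) -> str:
        key = match.group(1).strip()
        return str(context[key])
    return re.sub(r"\{\{\s*(.*?)\s*\}\}", substitute, template)

print(render("Summarize {{ input_file }}", {"input_file": "document.txt"}))
# → Summarize document.txt
```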

Environment Setup

Configure your LLM providers via environment variables:

# OpenAI
export OPENAI_API_KEY="your-api-key"

# Anthropic  
export ANTHROPIC_API_KEY="your-api-key"

# Azure OpenAI
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_BASE_URL="https://your-resource.openai.azure.com/"

Step Types

Recipe Executor provides 9 built-in step types:

File Operations

  • read_files - Read file content (supports JSON/YAML parsing, glob patterns)
  • write_files - Write files to disk with automatic directory creation

LLM Integration

  • llm_generate - Generate content using various LLM providers
    • Supports structured output (JSON schemas, file specifications)
    • MCP server integration for tool access
    • Built-in web search capabilities

Control Flow

  • conditional - Branch execution based on boolean conditions
  • loop - Iterate over collections with optional concurrency
  • parallel - Execute multiple steps concurrently
  • execute_recipe - Execute nested recipes (composition)
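Neither `parallel` nor `execute_recipe` appears in the full examples below, so here is a hedged sketch of a parallel step that fans out to two nested recipes, extrapolating the field style of the documented step types (the `recipe_path` field name is an assumption, not confirmed by this page):

```json
{
  "step_type": "parallel",
  "steps": [
    { "step_type": "execute_recipe", "recipe_path": "recipes/build_docs.json" },
    { "step_type": "execute_recipe", "recipe_path": "recipes/build_tests.json" }
  ]
}
```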

Context Management

  • set_context - Set context variables and configuration
  • mcp - Direct MCP server interactions
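A hedged sketch of a `set_context` step, following the field style of the other examples on this page (the exact field names are assumptions):

```json
{
  "step_type": "set_context",
  "key": "output_dir",
  "value": "results/"
}
```

Values set this way become available to later steps as `{{ output_dir }}`-style template references.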

Example Recipes

AI-Powered Code Generation

{
  "name": "generate_python_class",
  "steps": [
    {
      "step_type": "llm_generate",
      "prompt": "Create a Python class for {{ class_description }}",
      "response_format": {
        "type": "json_schema",
        "json_schema": {
          "name": "code_generation",
          "schema": {
            "type": "object",
            "properties": {
              "code": {"type": "string"},
              "explanation": {"type": "string"}
            }
          }
        }
      }
    },
    {
      "step_type": "write_files",
      "files": [
        {
          "path": "{{ class_name }}.py", 
          "content": "{{ llm_output.code }}"
        }
      ]
    }
  ]
}

Batch File Processing

{
  "name": "process_documents",
  "steps": [
    {
      "step_type": "read_files",
      "paths": ["docs/*.txt"],
      "use_glob": true
    },
    {
      "step_type": "loop",
      "items": "{{ file_contents }}",
      "concurrency": 3,
      "steps": [
        {
          "step_type": "llm_generate",
          "prompt": "Extract key points from: {{ item }}"
        },
        {
          "step_type": "write_files", 
          "files": [
            {
              "path": "summaries/summary_{{ loop_index }}.md",
              "content": "{{ llm_output }}"
            }
          ]
        }
      ]
    }
  ]
}

Advanced Features

LLM Provider Configuration

{
  "step_type": "llm_generate",
  "model": "gpt-4o",
  "provider": "openai",
  "max_tokens": 1000,
  "temperature": 0.7,
  "prompt": "Your prompt here"
}
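Assuming the same fields apply to the other providers listed under Key Features, a local Ollama configuration might look like this (the model name is illustrative):

```json
{
  "step_type": "llm_generate",
  "model": "llama3.1",
  "provider": "ollama",
  "prompt": "Your prompt here"
}
```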

MCP Server Integration

{
  "step_type": "llm_generate",
  "mcp_servers": [
    {
      "server_name": "web_search",
      "command": "mcp-server-web-search",
      "args": []
    }
  ],
  "tools": ["web_search"],
  "prompt": "Search for information about {{ topic }}"
}

Conditional Execution

{
  "step_type": "conditional",
  "condition": "file_exists('config.json')",
  "then_steps": [...],
  "else_steps": [...]
}

CLI Reference

recipe-executor RECIPE_FILE [OPTIONS]

Options:
  --context KEY=VALUE    Context variables (can be used multiple times)
  --config KEY=VALUE     Configuration overrides (can be used multiple times)  
  --log-dir DIR         Directory for log files (default: logs)

Examples:

# Basic execution
recipe-executor workflow.json

# With context variables
recipe-executor workflow.json --context input=data.txt output=results/

# With configuration overrides
recipe-executor workflow.json --config model=gpt-4o --config temperature=0.3

# Custom log directory
recipe-executor workflow.json --log-dir ./execution-logs
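Context and config values are passed as repeated KEY=VALUE pairs. A minimal sketch of how such pairs can be parsed, splitting on the first `=` only so that values may themselves contain `=`; this is an illustration of the convention, not Recipe Executor's actual parser:

```python
def parse_pairs(pairs: list[str]) -> dict[str, str]:
    # Split each "KEY=VALUE" on the first '=' only,
    # so values like "url=https://x?a=b" stay intact.
    result = {}
    for pair in pairs:
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"Expected KEY=VALUE, got: {pair!r}")
        result[key] = value
    return result

print(parse_pairs(["input=data.txt", "output=results/"]))
# → {'input': 'data.txt', 'output': 'results/'}
```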

Python API

You can also use Recipe Executor programmatically:

import asyncio
from recipe_executor.executor import Executor
from recipe_executor.models import Recipe
from recipe_executor.context import Context
from recipe_executor.logger import init_logger

async def run_recipe():
    # Load recipe
    with open("recipe.json") as f:
        recipe = Recipe.model_validate_json(f.read())
    
    # Create context
    context = Context(
        artifacts={"input": "Hello World"},
        config={"model": "gpt-4o"}
    )
    
    # Execute
    logger = init_logger("logs")
    executor = Executor(logger)
    await executor.execute(recipe, context)

asyncio.run(run_recipe())

Error Handling

Recipe Executor provides comprehensive error handling:

  • Step-level isolation - Errors in one step don't break the entire workflow
  • Detailed logging - Structured logs with step-by-step execution details
  • Graceful failures - Clear error messages with context information
  • Resource cleanup - Automatic cleanup of temporary resources

Part of Recipe Tool Ecosystem

Recipe Executor is the core execution engine of the larger Recipe Tool ecosystem:

  • recipe-tool - CLI for creating and executing recipes from natural language
  • recipe-executor - This package, the pure execution engine
  • Document Generator App - Web UI for document workflows
  • MCP Servers - Integration with AI assistants like Claude

For more examples and advanced usage patterns, visit the Recipe Tool repository.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

This is an experimental project from Microsoft. For issues and examples, see the Recipe Tool repository.
