# Recipe Executor

A Python execution engine for JSON-defined AI workflows. Recipe Executor runs structured "recipes" that combine file operations, LLM interactions, and control flow into automated workflows. Perfect for AI-powered content generation, file processing, and complex automation tasks.
## What is Recipe Executor?
Recipe Executor is a pure execution engine that runs JSON "recipes" - structured workflow definitions that describe automated tasks. Think of it as a workflow engine specifically designed for AI-powered automation.
**Key Features:**
- 🤖 Multi-LLM Support - OpenAI, Anthropic, Azure OpenAI, Ollama
- 📁 File Operations - Read/write files with JSON/YAML parsing
- 🔄 Control Flow - Conditionals, loops, parallel execution
- 🛠️ Tool Integration - MCP (Model Context Protocol) server support
- 🎯 Context Management - Shared state across workflow steps
- ⚡ Concurrent Execution - Built-in parallelization and resource management
## Quick Start

### Installation

```bash
pip install recipe-executor
```
### Basic Usage

1. Create a recipe (JSON file):
```json
{
  "name": "summarize_file",
  "steps": [
    {
      "step_type": "read_files",
      "paths": ["{{ input_file }}"]
    },
    {
      "step_type": "llm_generate",
      "prompt": "Summarize this content:\n\n{{ file_contents[0] }}"
    },
    {
      "step_type": "write_files",
      "files": [
        {
          "path": "summary.md",
          "content": "{{ llm_output }}"
        }
      ]
    }
  ]
}
```
2. Execute the recipe:

```bash
recipe-executor recipe.json --context input_file=document.txt
```
### Environment Setup

Configure your LLM providers via environment variables:

```bash
# OpenAI
export OPENAI_API_KEY="your-api-key"

# Anthropic
export ANTHROPIC_API_KEY="your-api-key"

# Azure OpenAI
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_BASE_URL="https://your-resource.openai.azure.com/"
```
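Before running a recipe, it can be handy to confirm which providers actually have credentials set. This small standalone helper is not part of the package - it just checks the environment variables listed above (Ollama is omitted since a local Ollama server typically needs no API key):

```python
import os

# Environment variables per provider, as documented above.
PROVIDER_ENV_VARS = {
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "azure": ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_BASE_URL"],
}

def configured_providers() -> list[str]:
    # A provider counts as configured when all of its variables are non-empty.
    return [
        name
        for name, required in PROVIDER_ENV_VARS.items()
        if all(os.environ.get(var) for var in required)
    ]

os.environ["OPENAI_API_KEY"] = "sk-example"  # illustrative value only
print(configured_providers())
```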
## Step Types

Recipe Executor provides 9 built-in step types:

### File Operations

- `read_files` - Read file content (supports JSON/YAML parsing, glob patterns)
- `write_files` - Write files to disk with automatic directory creation

### LLM Integration

- `llm_generate` - Generate content using various LLM providers
  - Supports structured output (JSON schemas, file specifications)
  - MCP server integration for tool access
  - Built-in web search capabilities

### Control Flow

- `conditional` - Branch execution based on boolean conditions
- `loop` - Iterate over collections with optional concurrency
- `parallel` - Execute multiple steps concurrently
- `execute_recipe` - Execute nested recipes (composition)

### Context Management

- `set_context` - Set context variables and configuration
- `mcp` - Direct MCP server interactions
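These step types compose: a `conditional` can gate a nested recipe run, for example. The sketch below is illustrative only - the `condition` syntax follows the Conditional Execution example later on this page, and the `recipe_path` field name on `execute_recipe` is an assumption, not a confirmed schema:

```json
{
  "name": "build_if_missing",
  "steps": [
    {
      "step_type": "conditional",
      "condition": "file_exists('summary.md')",
      "else_steps": [
        {
          "step_type": "execute_recipe",
          "recipe_path": "recipes/summarize.json"
        }
      ]
    }
  ]
}
```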
## Example Recipes

### AI-Powered Code Generation
```json
{
  "name": "generate_python_class",
  "steps": [
    {
      "step_type": "llm_generate",
      "prompt": "Create a Python class for {{ class_description }}",
      "response_format": {
        "type": "json_schema",
        "json_schema": {
          "name": "code_generation",
          "schema": {
            "type": "object",
            "properties": {
              "code": {"type": "string"},
              "explanation": {"type": "string"}
            }
          }
        }
      }
    },
    {
      "step_type": "write_files",
      "files": [
        {
          "path": "{{ class_name }}.py",
          "content": "{{ llm_output.code }}"
        }
      ]
    }
  ]
}
```
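The `{{ ... }}` placeholders above are filled from the shared context at runtime. The engine's actual template language isn't documented on this page; as a rough illustration of the substitution idea only, simple name replacement can be sketched as:

```python
import re

def render(template: str, context: dict) -> str:
    # Replace each {{ name }} placeholder with the matching context value.
    # Illustrative only: the real engine also supports expressions such as
    # file_contents[0] and llm_output.code, which this sketch does not handle.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context[m.group(1)]),
        template,
    )

path = render("{{ class_name }}.py", {"class_name": "Stack"})
print(path)  # → Stack.py
```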
### Batch File Processing
```json
{
  "name": "process_documents",
  "steps": [
    {
      "step_type": "read_files",
      "paths": ["docs/*.txt"],
      "use_glob": true
    },
    {
      "step_type": "loop",
      "items": "{{ file_contents }}",
      "concurrency": 3,
      "steps": [
        {
          "step_type": "llm_generate",
          "prompt": "Extract key points from: {{ item }}"
        },
        {
          "step_type": "write_files",
          "files": [
            {
              "path": "summaries/summary_{{ loop_index }}.md",
              "content": "{{ llm_output }}"
            }
          ]
        }
      ]
    }
  ]
}
```
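The `"concurrency": 3` setting bounds how many loop iterations run at once. Conceptually - this is not the package's implementation - the behavior matches limiting concurrent tasks with an asyncio semaphore:

```python
import asyncio

async def process(item: str, sem: asyncio.Semaphore) -> str:
    # Stand-in for an llm_generate + write_files pair.
    async with sem:
        await asyncio.sleep(0)  # yield control, as a real I/O-bound LLM call would
        return f"key points of: {item}"

async def run_loop(items: list[str], concurrency: int) -> list[str]:
    sem = asyncio.Semaphore(concurrency)  # mirrors "concurrency": 3
    return await asyncio.gather(*(process(i, sem) for i in items))

results = asyncio.run(run_loop([f"doc{i}.txt" for i in range(5)], concurrency=3))
```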
## Advanced Features

### LLM Provider Configuration
```json
{
  "step_type": "llm_generate",
  "model": "gpt-4o",
  "provider": "openai",
  "max_tokens": 1000,
  "temperature": 0.7,
  "prompt": "Your prompt here"
}
```
### MCP Server Integration
```json
{
  "step_type": "llm_generate",
  "mcp_servers": [
    {
      "server_name": "web_search",
      "command": "mcp-server-web-search",
      "args": []
    }
  ],
  "tools": ["web_search"],
  "prompt": "Search for information about {{ topic }}"
}
```
### Conditional Execution
```json
{
  "step_type": "conditional",
  "condition": "file_exists('config.json')",
  "then_steps": [...],
  "else_steps": [...]
}
```
## CLI Reference

```bash
recipe-executor RECIPE_FILE [OPTIONS]
```

Options:

```
--context KEY=VALUE  Context variables (can be used multiple times)
--config KEY=VALUE   Configuration overrides (can be used multiple times)
--log-dir DIR        Directory for log files (default: logs)
```

Examples:

```bash
# Basic execution
recipe-executor workflow.json

# With context variables
recipe-executor workflow.json --context input=data.txt output=results/

# With configuration overrides
recipe-executor workflow.json --config model=gpt-4o --config temperature=0.3

# Custom log directory
recipe-executor workflow.json --log-dir ./execution-logs
```
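The repeatable `--context KEY=VALUE` pattern is a common CLI convention. As a sketch of how such flags are typically parsed - not the tool's actual implementation - argparse's `append` action combined with `nargs="+"` accepts both repeated flags and several pairs after one flag:

```python
import argparse

def parse_kv(pairs) -> dict:
    # Split each "KEY=VALUE" string on the first "=" only,
    # so values may themselves contain "=".
    return dict(p.split("=", 1) for p in pairs)

parser = argparse.ArgumentParser(prog="recipe-executor")
parser.add_argument("recipe_file")
parser.add_argument("--context", action="append", nargs="+", default=[], metavar="KEY=VALUE")

args = parser.parse_args(["workflow.json", "--context", "input=data.txt", "output=results/"])
# append + nargs="+" yields a list of lists; flatten before parsing.
context = parse_kv(p for group in args.context for p in group)
```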
## Python API

You can also use Recipe Executor programmatically:
```python
import asyncio

from recipe_executor.executor import Executor
from recipe_executor.models import Recipe
from recipe_executor.context import Context
from recipe_executor.logger import init_logger

async def run_recipe():
    # Load recipe
    with open("recipe.json") as f:
        recipe = Recipe.model_validate_json(f.read())

    # Create context
    context = Context(
        artifacts={"input": "Hello World"},
        config={"model": "gpt-4o"},
    )

    # Execute
    logger = init_logger("logs")
    executor = Executor(logger)
    await executor.execute(recipe, context)

asyncio.run(run_recipe())
```
## Error Handling
Recipe Executor provides comprehensive error handling:
- Step-level isolation - Errors in one step don't break the entire workflow
- Detailed logging - Structured logs with step-by-step execution details
- Graceful failures - Clear error messages with context information
- Resource cleanup - Automatic cleanup of temporary resources
## Part of the Recipe Tool Ecosystem
Recipe Executor is the core execution engine of the larger Recipe Tool ecosystem:
- `recipe-tool` - CLI for creating and executing recipes from natural language
- `recipe-executor` - This package, the pure execution engine
- Document Generator App - Web UI for document workflows
- MCP Servers - Integration with AI assistants like Claude
For more examples and advanced usage patterns, visit the Recipe Tool repository.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Support

This is an experimental project from Microsoft. For issues and examples, see the Recipe Tool repository.
## File Details

### recipe_executor-0.1.3.tar.gz

- Size: 30.9 kB
- Tags: Source
- Uploaded via: uv/0.7.5 (Trusted Publishing: No)

| Algorithm | Hash digest |
|---|---|
| SHA256 | `9858eb1402692ab97d003bc734196ba85f29a46ec06f25277035e06e227d8462` |
| MD5 | `bb4dc503fb73a6d04a9ad62bd6e9d44e` |
| BLAKE2b-256 | `4a60b3cd831c902a1ee0578595509b9d5fd5df906c13a1bbe0e1ecda15c5acb6` |
### recipe_executor-0.1.3-py3-none-any.whl

- Size: 44.8 kB
- Tags: Python 3
- Uploaded via: uv/0.7.5 (Trusted Publishing: No)

| Algorithm | Hash digest |
|---|---|
| SHA256 | `44a8b4dbd4906312b908cc9191576fe311ff937caa33cf87a42c93e0dbaa39ef` |
| MD5 | `46dd4d15f4d471ff87244a84ee9609cd` |
| BLAKE2b-256 | `d010da5b4b42c17431cd532c89dcfde5c4f3167d82531fd3817d57828c8b4cc6` |