# llama-recourse

LlamaIndex tools for RecourseOS - evaluate consequences before your AI agent executes destructive actions.
## Installation

```bash
pip install llama-recourse
```

Requires Node.js 18+ (for `npx recourse-cli`).
## Quick Start

```python
from llama_recourse import get_recourse_tools
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

# Get RecourseOS tools
tools = get_recourse_tools()

# Create an agent with consequence checking
llm = OpenAI(model="gpt-4")
agent = ReActAgent.from_tools(
    tools,
    llm=llm,
    verbose=True,
    system_prompt="""Before any destructive action, use recourse_evaluate_* tools.
If risk is BLOCK, refuse to proceed. If ESCALATE, ask the user to confirm.""",
)

response = agent.chat("Delete the S3 bucket prod-backups")
```
## Tools

### recourse_evaluate_terraform

Evaluate a Terraform plan before running `terraform apply`.

```python
from llama_recourse import recourse_evaluate_terraform

result = recourse_evaluate_terraform(
    plan_json='{"resource_changes": [...]}',
    state_json=None,  # optional
)
print(result)
# **Risk Assessment: BLOCK**
# ...
```
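In practice the plan JSON typically comes from `terraform plan -out=plan.out` followed by `terraform show -json plan.out`. As a sketch, here is a minimal payload with one resource slated for deletion; the keys follow Terraform's JSON plan format, but treat the exact shape the tool expects as an assumption:

```python
import json

# Minimal plan payload: one S3 bucket scheduled for deletion.
# Capture a real one with:
#   terraform plan -out=plan.out && terraform show -json plan.out
plan = {
    "resource_changes": [
        {
            "address": "aws_s3_bucket.prod_backups",
            "type": "aws_s3_bucket",
            "change": {"actions": ["delete"]},
        }
    ]
}

plan_json = json.dumps(plan)
```

The resulting `plan_json` string can then be passed as the `plan_json` argument shown above.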
### recourse_evaluate_shell

Evaluate a shell command before execution.

```python
from llama_recourse import recourse_evaluate_shell

result = recourse_evaluate_shell("aws s3 rm s3://prod-data --recursive")
print(result)
# **Risk Assessment: BLOCK**
# ...
```
### recourse_evaluate_mcp

Evaluate MCP tool calls before invocation.

```python
from llama_recourse import recourse_evaluate_mcp

result = recourse_evaluate_mcp(
    server="aws",
    tool="s3.delete_bucket",
    arguments={"bucket": "prod-data"},
)
print(result)
# **Risk Assessment: ESCALATE**
# ...
```
## Full Agent Example

```python
import subprocess

from llama_recourse import get_recourse_tools
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

# Your existing tools
def execute_shell(command: str) -> str:
    """Execute a shell command."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout or result.stderr

shell_tool = FunctionTool.from_defaults(
    fn=execute_shell,
    name="execute_shell",
    description="Execute a shell command",
)

# Combine with RecourseOS tools
all_tools = [shell_tool] + get_recourse_tools()

# Create a safety-aware agent
agent = ReActAgent.from_tools(
    all_tools,
    llm=OpenAI(model="gpt-4"),
    verbose=True,
    system_prompt="""You are a DevOps assistant.

CRITICAL: Before using execute_shell with ANY destructive command,
you MUST first use recourse_evaluate_shell to check consequences.

Based on the risk assessment:
- BLOCK: Refuse to proceed. Explain the danger.
- ESCALATE: Ask for explicit user confirmation.
- WARN: Inform the user of risks, proceed if they agree.
- ALLOW: Proceed normally.""",
)

# The agent will now check consequences before destructive actions
response = agent.chat("Remove all files from /tmp/old-backups")
print(response)
```
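Outside an agent loop, the same BLOCK/ESCALATE/WARN/ALLOW policy can be enforced in plain code. The sketch below is not part of the published API: `guarded_call`, `evaluate`, and `confirm` are hypothetical names, and `evaluate` is assumed to return one of the four risk levels as a string:

```python
def guarded_call(evaluate, execute, confirm=lambda: False):
    """Run `evaluate()` first and only call `execute()` when policy allows.

    evaluate: returns "ALLOW", "WARN", "ESCALATE", or "BLOCK"
    execute:  performs the (possibly destructive) action
    confirm:  asks the human for explicit approval (used for ESCALATE)
    """
    risk = evaluate()
    if risk == "BLOCK":
        return "refused: unrecoverable action"
    if risk == "ESCALATE" and not confirm():
        return "refused: user did not confirm"
    # WARN and ALLOW both proceed; a real caller would surface the warning.
    return execute()

# With stub callables, a BLOCK verdict never reaches execute():
print(guarded_call(lambda: "BLOCK", lambda: "deleted!"))
# refused: unrecoverable action
```

Failing closed on ESCALATE (no confirmation means no execution) mirrors the behavior the system prompts above ask the agent to follow.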
## With Query Engine Tools

```python
from llama_recourse import get_recourse_tools
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import QueryEngineTool

# Your query engine (`index` is an existing index and `llm` an existing
# LLM instance, as in the examples above)
query_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="docs_search",
    description="Search documentation",
)

# Add RecourseOS for safety
tools = [query_tool] + get_recourse_tools()
agent = ReActAgent.from_tools(tools, llm=llm)
```
## Risk Levels

| Level | Meaning | Agent Behavior |
|---|---|---|
| ALLOW | Safe to proceed | Execute normally |
| WARN | Recoverable but notable | Proceed with caution |
| ESCALATE | Needs human review | Ask user to confirm |
| BLOCK | Unrecoverable data loss | Do NOT proceed |
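The tools return the verdict as markdown text (e.g. `**Risk Assessment: BLOCK**`). If you need the level programmatically, a small parser can extract it; the output format is assumed from the examples above, so this regex is a sketch, and it fails closed to ESCALATE when the text cannot be parsed:

```python
import re

_RISK_RE = re.compile(r"Risk Assessment:\s*\**(ALLOW|WARN|ESCALATE|BLOCK)")

def parse_risk(assessment: str) -> str:
    """Pull the risk level out of a tool's markdown output."""
    m = _RISK_RE.search(assessment)
    # Fail closed: unparsable output is treated as needing human review.
    return m.group(1) if m else "ESCALATE"

print(parse_risk("**Risk Assessment: BLOCK**\n..."))  # BLOCK
```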
## Requirements

- Python 3.9+
- Node.js 18+ (for `npx recourse-cli`)
- `llama-index-core>=0.10.0`

## License

MIT
## File details

### llama_recourse-0.1.0.tar.gz

- Size: 5.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.4

| Algorithm | Hash digest |
|---|---|
| SHA256 | `dd5a48a08b96c758227b6538006baf87a7c677c49cc7c3ffe4d56f8c5f9e5f0b` |
| MD5 | `2a7c1eacd34bcc232e42c80ee3759b50` |
| BLAKE2b-256 | `c099634dae5169c4d2efa241cfd3b465c8e29322aab072760aba8c4ff1727fca` |
### llama_recourse-0.1.0-py3-none-any.whl

- Size: 5.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.4

| Algorithm | Hash digest |
|---|---|
| SHA256 | `71d950681360d894ac668cfc1ad5f937263bffc472c2f43c6fe29069f98baf1f` |
| MD5 | `1c4b240c50e410b789dce1e13e848046` |
| BLAKE2b-256 | `bec50a8134d744b0fd12d6032c06a2e24472619c44efdcdd88a80270298f2867` |