# langchain-recourse

LangChain tools for RecourseOS - evaluate consequences before your AI agent executes destructive actions.
## Installation

```bash
pip install langchain-recourse
```

Requires: Node.js 18+ (for `npx recourse-cli`)
## Quick Start

```python
from langchain_recourse import RecourseToolkit
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain import hub

# Get all RecourseOS tools
toolkit = RecourseToolkit()
tools = toolkit.get_tools()

# Create your agent with consequence checking
llm = ChatOpenAI(model="gpt-4")
prompt = hub.pull("hwchase17/react")  # any ReAct-style prompt works here
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
```
## Tools

### recourse_evaluate_terraform

Evaluate Terraform plans before `terraform apply`.

```python
from langchain_recourse import RecourseEvaluateTerraform

tool = RecourseEvaluateTerraform()
result = tool.invoke({
    "plan_json": '{"resource_changes": [...]}'
})
# Returns: "**Risk Assessment: BLOCK** ..."
```
### recourse_evaluate_shell

Evaluate shell commands before execution.

```python
from langchain_recourse import RecourseEvaluateShell

tool = RecourseEvaluateShell()
result = tool.invoke({
    "command": "aws s3 rm s3://prod-data --recursive"
})
# Returns: "**Risk Assessment: BLOCK** ..."
```
### recourse_evaluate_mcp

Evaluate MCP tool calls before invocation.

```python
from langchain_recourse import RecourseEvaluateMCP

tool = RecourseEvaluateMCP()
result = tool.invoke({
    "server": "aws",
    "tool": "s3.delete_bucket",
    "arguments": {"bucket": "prod-data"}
})
# Returns: "**Risk Assessment: ESCALATE** ..."
```
## Risk Levels

| Level | Meaning | Agent Behavior |
|---|---|---|
| ALLOW | Safe to proceed | Execute normally |
| WARN | Recoverable but notable | Proceed with caution |
| ESCALATE | Needs human review | Ask user to confirm |
| BLOCK | Unrecoverable data loss | Do NOT proceed |
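In application code you often need this level as a plain value rather than a markdown string. A minimal helper can extract it from a tool result; this is a sketch, and the output format beyond the `**Risk Assessment: ...**` prefix shown in the examples above is an assumption:

```python
import re

RISK_LEVELS = ("ALLOW", "WARN", "ESCALATE", "BLOCK")

def parse_risk_level(tool_output: str) -> str:
    """Extract the risk level from a tool result such as
    '**Risk Assessment: BLOCK** ...' (format assumed from the
    examples above). Falls back to ESCALATE when unparseable,
    so unrecognized output is never treated as safe."""
    match = re.search(r"\*\*Risk Assessment:\s*(\w+)\*\*", tool_output)
    if match and match.group(1).upper() in RISK_LEVELS:
        return match.group(1).upper()
    return "ESCALATE"

print(parse_risk_level("**Risk Assessment: BLOCK** bucket is not versioned"))  # BLOCK
print(parse_risk_level("unexpected output"))  # ESCALATE
```

Failing closed (to ESCALATE rather than ALLOW) keeps a malformed or truncated tool response from silently authorizing a destructive action.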
## Agent Integration Example

Here's a complete example of an agent that checks consequences before destructive actions:

```python
from langchain_recourse import RecourseToolkit
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents import create_tool_calling_agent, AgentExecutor

# System prompt that enforces consequence checking
system_prompt = """You are a helpful DevOps assistant.

IMPORTANT: Before executing ANY destructive command (delete, remove, drop, etc.),
you MUST first use the recourse_evaluate_shell tool to check consequences.

If the risk assessment is:
- BLOCK: Refuse to proceed. Explain why to the user.
- ESCALATE: Ask the user to explicitly confirm before proceeding.
- WARN: Inform the user of the risk, then proceed if they agree.
- ALLOW: Proceed normally.

Never skip the consequence check for destructive operations."""

prompt = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

# Setup
llm = ChatOpenAI(model="gpt-4")
tools = RecourseToolkit().get_tools()
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run
result = executor.invoke({
    "input": "Delete the S3 bucket called prod-backups"
})
```
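A prompt-level policy can be skipped if the model ignores its instructions, so the same rules are worth enforcing in plain code around the executor as well. A minimal sketch of that policy as a function; `ask_user` is a hypothetical confirmation hook standing in for whatever UI you have:

```python
def decide(risk_level: str, ask_user=input) -> bool:
    """Return True if the action may proceed under the policy above:
    BLOCK never proceeds, ALLOW always proceeds, and WARN/ESCALATE
    require an explicit yes from the user.
    `ask_user` is a hypothetical confirmation hook (defaults to input())."""
    if risk_level == "ALLOW":
        return True
    if risk_level == "BLOCK":
        return False
    # WARN and ESCALATE: surface the risk, require explicit confirmation
    answer = ask_user(f"Risk level is {risk_level}. Proceed? [y/N] ")
    return answer.strip().lower() in ("y", "yes")

# Example with a stubbed confirmation hook:
assert decide("ALLOW") is True
assert decide("BLOCK") is False
assert decide("ESCALATE", ask_user=lambda _: "y") is True
```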
## With Other Tools

Combine RecourseOS tools with your existing tools:

```python
from langchain_recourse import RecourseToolkit
from langchain_community.tools import ShellTool

# Your tools + RecourseOS tools
my_tools = [ShellTool()]
recourse_tools = RecourseToolkit().get_tools()
all_tools = my_tools + recourse_tools
```
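Having both tool sets in the list does not by itself force the evaluation to happen. One illustrative pattern (not part of this package) is a wrapper that always evaluates before executing; the two collaborators are duck-typed here as anything with an `invoke` method, matching the LangChain tool interface, so the sketch runs without the real tools:

```python
class GuardedShell:
    """Run a command only if the evaluator does not return BLOCK.
    Illustrative pattern only: `evaluator` and `executor` are any
    objects with an `invoke` method (e.g. RecourseEvaluateShell and
    ShellTool from the snippet above)."""

    def __init__(self, evaluator, executor):
        self.evaluator = evaluator
        self.executor = executor

    def invoke(self, command: str) -> str:
        # Always consult the evaluator first
        verdict = self.evaluator.invoke({"command": command})
        if "BLOCK" in verdict:
            return f"Refused: {verdict}"
        return self.executor.invoke({"commands": command})
```

This moves the "never skip the check" guarantee out of the prompt and into code the model cannot bypass.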
## Configuration

The tools use `npx recourse-cli@latest` under the hood. Ensure:

- Node.js 18+ is installed
- `npx` is in `PATH`
- Network access to the npm registry (the first run downloads the CLI)
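These prerequisites can be verified before the agent starts rather than discovered mid-run. A minimal preflight sketch using only the standard library; the version parsing assumes `node --version` output of the form `vMAJOR.MINOR.PATCH`:

```python
import shutil

def node_major_version(version_string: str) -> int:
    """Parse the major version out of `node --version` output,
    e.g. 'v18.17.0' -> 18."""
    return int(version_string.lstrip("v").split(".")[0])

def preflight() -> list:
    """Return a list of problems that would prevent the tools from
    shelling out to `npx recourse-cli@latest`."""
    problems = []
    if shutil.which("node") is None:
        problems.append("Node.js is not installed")
    if shutil.which("npx") is None:
        problems.append("npx is not on PATH")
    return problems
```

To enforce the 18+ requirement, run `node --version` via `subprocess` and check `node_major_version(output) >= 18` before constructing the toolkit.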
## License

MIT