# crewai-eydii

EYDII verify tools and guardrails for CrewAI — verify every agent action before execution.
## Why EYDII?
When AI agents act autonomously, you need a way to enforce rules that the agents themselves cannot override. EYDII sits between your agents and their actions — every sensitive operation is verified against your policies in real time, with a cryptographic proof trail. No more hoping the system prompt holds; Eydii gives you external, tamper-proof verification that works even when agents delegate to other agents.
## Install

```bash
pip install crewai-eydii
```

This installs `eydii_crewai` and its dependencies (the `veritera` SDK and `crewai`).
## Prerequisites: Create a Policy

Before using EYDII with CrewAI, create a policy that defines what your agents are allowed to do. You only need to do this once:
```python
from veritera import Eydii

eydii = Eydii(api_key="vt_live_...")  # Get your key at id.veritera.ai

# Create a policy from code
eydii.create_policy_sync(
    name="finance-controls",
    description="Controls for multi-agent financial operations",
    rules=[
        {"type": "action_whitelist", "params": {"allowed": ["trade.execute", "refund.process", "report.generate"]}},
        {"type": "amount_limit", "params": {"max": 10000, "currency": "USD"}},
    ],
)

# Or generate one from plain English
eydii.generate_policy_sync(
    "Allow trades under $10,000, refund processing, and report generation. Block all account deletions and unauthorized data exports.",
    save=True,
)
```
A default policy is created automatically when you sign up — it blocks dangerous actions like database drops and admin overrides. You can use it immediately with `policy="default"`.

> **Tip:** `pip install veritera` to get the policy management SDK. See the full policy docs.
## Quick Start
```python
import os

from crewai import Agent, Task, Crew
from eydii_crewai import EydiiVerifyTool, eydii_task_guardrail

os.environ["VERITERA_API_KEY"] = "vt_live_..."

# 1. Create an EYDII verification tool
verify = EydiiVerifyTool(policy="finance-controls")  # create this policy first (see above) — or use "default"

# 2. Give it to your agent
analyst = Agent(
    role="Financial Analyst",
    goal="Process financial transactions safely",
    tools=[verify],
)

# 3. Add a task guardrail for output validation
task = Task(
    description="Process the refund for order #12345",
    agent=analyst,
    guardrail=eydii_task_guardrail(policy="finance-controls"),
    guardrail_max_retries=3,
)

# 4. Run the crew — every action is verified, every output is validated
crew = Crew(agents=[analyst], tasks=[task])
result = crew.kickoff()
```
The agent calls `eydii_verify` before executing sensitive actions. If an action is denied, the agent receives a `DENIED` response and adjusts its plan. If the task output violates policy, CrewAI automatically retries the task up to `guardrail_max_retries` times.
## Tutorial: Building a Verified Multi-Agent Research Crew
This walkthrough builds a three-agent crew where one agent gathers data, another analyzes it, and a third takes action — with EYDII protecting the entire pipeline.
### The Problem with Multi-Agent Delegation
CrewAI's power is multi-agent collaboration. Agent A delegates to Agent B, which calls Agent C. But this is exactly where policies break down:
- System prompts drift — when Agent B receives a delegated task, the original guardrails from Agent A's system prompt no longer apply.
- Inline rules are invisible — Agent C has no idea what rules Agent A was supposed to follow.
- Chained actions compound risk — a data lookup (harmless) feeds an analysis (maybe harmless) that triggers a payment (definitely not harmless).
Eydii solves this by moving verification outside the agents. Every action, from every agent, hits the same external policy engine. No matter how deep the delegation chain goes, Eydii catches violations.
### Step 1 — Set Up Your Environment

```python
import os

from crewai import Agent, Task, Crew, Process
from eydii_crewai import (
    EydiiVerifyTool,
    eydii_task_guardrail,
    eydii_before_llm,
    eydii_after_llm,
)

os.environ["VERITERA_API_KEY"] = "vt_live_..."
os.environ["OPENAI_API_KEY"] = "sk-..."
```
### Step 2 — Create the EYDII Verification Tool

Create a single verification tool that all agents will share. Every call goes through the same policy engine with the same rules.

```python
verify = EydiiVerifyTool(
    policy="research-ops",     # your policy set in Eydii
    agent_id="research-crew",  # appears in your Eydii audit log
    fail_closed=True,          # deny if Eydii is unreachable
)
```
### Step 3 — Define Three Agents

```python
researcher = Agent(
    role="Research Analyst",
    goal="Gather comprehensive data on the target company",
    backstory=(
        "You are a senior research analyst. You search public sources, "
        "financial databases, and news feeds to compile company profiles."
    ),
    tools=[verify],
    verbose=True,
)

strategist = Agent(
    role="Strategy Analyst",
    goal="Analyze research data and produce an investment recommendation",
    backstory=(
        "You are a strategy analyst who evaluates company data, identifies "
        "risks, and produces clear buy/hold/sell recommendations with reasoning."
    ),
    tools=[verify],
    verbose=True,
)

executor = Agent(
    role="Trade Executor",
    goal="Execute approved trades within risk limits",
    backstory=(
        "You execute trades based on analyst recommendations. You MUST verify "
        "every trade through EYDII before execution. No exceptions."
    ),
    tools=[verify],
    verbose=True,
)
```
All three agents receive the same `EydiiVerifyTool`. When the executor tries to place a trade, it calls `eydii_verify(action="trade.execute", params='{"ticker": "AAPL", "amount": 50000}')` — Eydii checks this against your `research-ops` policy and returns `APPROVED` or `DENIED`.
### Step 4 — Define Tasks with Guardrails

```python
research_task = Task(
    description=(
        "Research the company 'Acme Corp'. Gather recent financials, "
        "news sentiment, and competitive positioning. Verify your data "
        "sources through EYDII before including them."
    ),
    expected_output="A structured company profile with verified data sources.",
    agent=researcher,
    guardrail=eydii_task_guardrail(policy="research-ops"),
    guardrail_max_retries=2,
)

analysis_task = Task(
    description=(
        "Analyze the research profile and produce a recommendation. "
        "Include risk assessment. Verify your recommendation parameters "
        "through EYDII before finalizing."
    ),
    expected_output="An investment recommendation with risk score and reasoning.",
    agent=strategist,
    guardrail=eydii_task_guardrail(policy="research-ops"),
    guardrail_max_retries=2,
)

execution_task = Task(
    description=(
        "Based on the strategy recommendation, prepare and verify a trade. "
        "You MUST call eydii_verify with action='trade.execute' before "
        "executing any trade. Include ticker, amount, and direction."
    ),
    expected_output="Trade execution confirmation with EYDII proof_id.",
    agent=executor,
    guardrail=eydii_task_guardrail(policy="research-ops"),
    guardrail_max_retries=3,
)
```
Each task has its own guardrail. Even if an agent produces output that looks correct, Eydii validates the content against your policies. If the strategist recommends a position that exceeds your risk limits, the guardrail rejects the output and CrewAI retries the task.
### Step 5 — Register LLM Hooks (Optional)

For maximum coverage, add LLM-level hooks. These intercept every model call across all agents — before the model runs and after it responds.

```python
# Block any LLM call that violates policy (e.g., iteration limits, forbidden topics)
eydii_before_llm(policy="safety-controls", max_iterations=15)

# Audit every LLM response to the Eydii trail
eydii_after_llm(policy="audit-trail")
```
### Step 6 — Assemble and Run the Crew

```python
crew = Crew(
    agents=[researcher, strategist, executor],
    tasks=[research_task, analysis_task, execution_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print(result)
```
### What Happens at Runtime

Here is the verification flow for this crew:

- **Researcher** gathers data. Each data source is verified through `EydiiVerifyTool` before inclusion. The task guardrail validates the final profile output.
- **Strategist** receives the research profile. Its recommendation is checked — if the position exceeds risk limits, the guardrail rejects the output and CrewAI retries.
- **Executor** receives the approved recommendation. It calls `eydii_verify(action="trade.execute", ...)` before executing. Eydii checks amount limits, allowed tickers, and trading hours. If denied, the agent does not proceed.
- **LLM hooks** run on every model call across all three agents — catching runaway iteration loops and logging every response to the audit trail.
Every verification produces a `proof_id` that links to a tamper-proof audit record in your Eydii dashboard.
## Three Integration Points

### 1. `EydiiVerifyTool` — Agent Tool for Explicit Verification
The most direct integration. Give agents a tool they can call to check whether an action is allowed.
```python
from eydii_crewai import EydiiVerifyTool

tool = EydiiVerifyTool(
    policy="finance-controls",
    agent_id="analyst-bot",
    fail_closed=True,
)

agent = Agent(
    role="Financial Analyst",
    goal="Process transactions within policy limits",
    tools=[tool],
)
```
How the agent uses it:
The agent calls `eydii_verify(action="payment.create", params='{"amount": 500, "currency": "USD"}')` and receives one of:

- `APPROVED: Allowed | proof_id: fp_abc123 | latency: 42ms` — proceed with the action.
- `DENIED: Amount exceeds $200 limit | proof_id: fp_def456 | Do NOT proceed with this action.` — the agent adjusts its plan.
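Downstream code that logs or branches on these tool results can split them on the `|` separator. A minimal parsing helper is sketched below; it is hypothetical, not part of the package, and assumes the `STATUS: reason | proof_id: ...` format shown above:

```python
def parse_verification(result: str) -> dict:
    """Split an EYDII tool result string into status, reason, and proof_id.

    Hypothetical helper for illustration; assumes the
    'STATUS: reason | proof_id: fp_... | ...' format shown above.
    """
    fields = [part.strip() for part in result.split("|")]
    status, _, reason = fields[0].partition(":")
    proof_id = next(
        (f.split(":", 1)[1].strip() for f in fields[1:] if f.startswith("proof_id")),
        None,
    )
    return {
        "approved": status.strip() == "APPROVED",
        "reason": reason.strip(),
        "proof_id": proof_id,
    }

parsed = parse_verification("APPROVED: Allowed | proof_id: fp_abc123 | latency: 42ms")
# parsed["approved"] is True; parsed["proof_id"] == "fp_abc123"
```

Keeping the `proof_id` in your own logs lets you link application events back to the audit record in the Eydii dashboard.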
Constructor parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` | `VERITERA_API_KEY` env var | Your EYDII API key |
| `base_url` | `str` | `https://id.veritera.ai` | EYDII API endpoint |
| `agent_id` | `str` | `"crewai-agent"` | Identifier in audit logs |
| `policy` | `str` | `None` | Policy set to evaluate against |
| `fail_closed` | `bool` | `True` | Deny when API is unreachable |
| `timeout` | `float` | `10.0` | Request timeout in seconds |
### 2. `eydii_task_guardrail()` — Task Output Validation
Wraps CrewAI's native guardrail system. After a task completes, Eydii validates the output. If the output violates policy, CrewAI automatically retries the task.
```python
from eydii_crewai import eydii_task_guardrail

task = Task(
    description="Draft a customer response about their refund request",
    agent=support_agent,
    guardrail=eydii_task_guardrail(
        policy="communication-policy",
        agent_id="support-bot",
    ),
    guardrail_max_retries=3,
)
```
How it works:
- The agent completes the task and produces output.
- The guardrail sends the output (first 3,000 characters) and task description (first 500 characters) to EYDII.
- EYDII evaluates the content against your policy.
- If approved, the output passes through unchanged.
- If denied, CrewAI receives feedback (e.g., "EYDII policy violation: Response contains unauthorized discount offer. Please revise your output to comply with the policy.") and retries the task.
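Conceptually, the guardrail is a callable returning CrewAI's `(passed, result_or_feedback)` tuple. The steps above can be sketched as follows, with `check_with_eydii` as a hypothetical stand-in for the real policy evaluation call:

```python
OUTPUT_LIMIT = 3000       # first 3,000 chars of the task output are sent
DESCRIPTION_LIMIT = 500   # first 500 chars of the task description are sent

def check_with_eydii(output: str, description: str) -> tuple[bool, str]:
    # Hypothetical stand-in for the EYDII policy evaluation request.
    return True, "Allowed"

def guardrail(task_output, description: str = ""):
    """Sketch of the (passed, result_or_feedback) contract used by CrewAI guardrails."""
    text = str(task_output)
    approved, reason = check_with_eydii(
        text[:OUTPUT_LIMIT], description[:DESCRIPTION_LIMIT]
    )
    if approved:
        return True, task_output  # pass the output through unchanged
    return False, (
        f"EYDII policy violation: {reason}. "
        "Please revise your output to comply with the policy."
    )
```

On a `False` result, CrewAI feeds the second tuple element back to the agent as revision guidance and retries, up to `guardrail_max_retries` times.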
Factory parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` | `VERITERA_API_KEY` env var | Your EYDII API key |
| `base_url` | `str` | `https://id.veritera.ai` | EYDII API endpoint |
| `agent_id` | `str` | `"crewai-agent"` | Identifier in audit logs |
| `policy` | `str` | `None` | Policy set to evaluate against |
| `fail_closed` | `bool` | `True` | Reject output when API is unreachable |
### 3. `eydii_before_llm()` / `eydii_after_llm()` — LLM Call Hooks
Intercept at the lowest level. These hooks run on every LLM call across all agents in the crew.
```python
from eydii_crewai import eydii_before_llm, eydii_after_llm

# Pre-call: block LLM calls that violate policy or exceed iteration limits
eydii_before_llm(
    policy="safety-controls",
    max_iterations=10,  # hard stop after 10 iterations per task
    agent_id="crew-monitor",
)

# Post-call: audit every LLM response (non-blocking)
eydii_after_llm(
    policy="audit-trail",
    agent_id="crew-monitor",
)
```
`eydii_before_llm` can block execution by returning `False`. Use it for:

- Iteration limits (stop runaway agent loops)
- Pre-call policy checks (block certain agents from certain tasks)
- Budget controls (stop after N calls)

`eydii_after_llm` is non-blocking. Use it for:

- Audit logging (every response hits the Eydii trail)
- Post-response policy evaluation
- Compliance recording
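The iteration-limit idea behind `max_iterations` can be illustrated with a plain per-task counter. This is an illustrative sketch of the concept only, not the package's actual hook mechanism:

```python
class IterationLimiter:
    """Illustrative sketch: block further LLM calls once a per-task budget is spent."""

    def __init__(self, max_iterations: int):
        self.max_iterations = max_iterations
        self.counts: dict[str, int] = {}

    def before_llm(self, task_id: str) -> bool:
        """Return True to allow the call, False to block it."""
        self.counts[task_id] = self.counts.get(task_id, 0) + 1
        return self.counts[task_id] <= self.max_iterations

limiter = IterationLimiter(max_iterations=10)
# The first 10 calls for a given task are allowed; the 11th is blocked.
```

Because the counter is keyed per task, one runaway agent loop cannot consume another task's budget.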
Parameters (both functions):
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` | `VERITERA_API_KEY` env var | Your EYDII API key |
| `base_url` | `str` | `https://id.veritera.ai` | EYDII API endpoint |
| `agent_id` | `str` | `"crewai-agent"` | Identifier in audit logs |
| `policy` | `str` | `None` | Policy set to evaluate against |
| `fail_closed` | `bool` | `True` | Block when API is unreachable (`before_llm` only) |
| `max_iterations` | `int` | `None` | Hard iteration limit (`before_llm` only) |
> **Note:** LLM hooks require `crewai>=0.80`. On older versions, a warning is logged and the hooks are skipped.
## Configuration Reference
| Config | Source | Required | Example |
|---|---|---|---|
| API key | `VERITERA_API_KEY` env var or `api_key=` parameter | Yes | `vt_live_abc123` |
| Base URL | `base_url=` parameter | No | `https://id.veritera.ai` |
| Policy | `policy=` parameter | No (but recommended) | `"finance-controls"` |
| Agent ID | `agent_id=` parameter | No | `"my-crewai-agent"` |
| Fail closed | `fail_closed=` parameter | No (default: `True`) | `True` or `False` |
| Timeout | `timeout=` parameter (`EydiiVerifyTool` only) | No (default: `10.0`) | `30.0` |
## How It Works
```
┌─────────────────────────────────────────────────────────┐
│                   Your CrewAI Crew                      │
│                                                         │
│  ┌───────────┐   ┌───────────┐   ┌───────────┐          │
│  │  Agent A  │──▶│  Agent B  │──▶│  Agent C  │          │
│  │ Research  │   │ Analysis  │   │ Execution │          │
│  └─────┬─────┘   └─────┬─────┘   └─────┬─────┘          │
│        │               │               │                │
│   ┌────▼────┐     ┌────▼────┐     ┌────▼────┐           │
│   │  Tool   │     │Guardrail│     │  Tool   │           │
│   │  Call   │     │  Check  │     │  Call   │           │
│   └────┬────┘     └────┬────┘     └────┬────┘           │
│        │               │               │                │
└────────┼───────────────┼───────────────┼────────────────┘
         │               │               │
         ▼               ▼               ▼
   ┌─────────────────────────────────────────┐
   │            EYDII Verify API             │
   │                                         │
   │  Policy Engine │ Audit Trail │ Proof    │
   └─────────────────────────────────────────┘
```
1. **Agent calls tool** — `EydiiVerifyTool.run(action, params)` sends a verification request to the EYDII API.
2. **EYDII evaluates** — The policy engine checks the action and parameters against your defined policies.
3. **Result returned** — `APPROVED` (with proof ID) or `DENIED` (with reason and proof ID).
4. **Agent decides** — On approval, the agent proceeds. On denial, the agent adjusts its plan.
5. **Guardrail validates** — After the task completes, `eydii_task_guardrail` checks the output. If denied, CrewAI retries.
6. **LLM hooks monitor** — Every model call is optionally checked (before) and logged (after).
7. **Audit trail recorded** — Every verification produces a `proof_id` linking to a permanent, tamper-proof record.
## Multi-Agent Security
Single-agent guardrails are straightforward — one agent, one set of rules. Multi-agent crews break this model:
### The Delegation Problem

```
Agent A (has policy: "no trades over $10k")
  └──▶ delegates to Agent B (has policy: ???)
         └──▶ delegates to Agent C (has policy: ???)
                └──▶ executes trade for $50k  ← policy lost
```
When Agent A delegates to Agent B, the system prompt that contained Agent A's policy does not transfer. Agent B operates under its own system prompt. By the time Agent C executes, the original constraints are gone.
### Eydii Fixes This

```
Agent A ──▶ eydii_verify("research.query")        ✓ APPROVED
Agent B ──▶ eydii_verify("analysis.recommend")    ✓ APPROVED
Agent C ──▶ eydii_verify("trade.execute", $50k)   ✗ DENIED — exceeds $10k limit
```
Eydii policies are external to all agents. The same rules apply whether the action is initiated by the first agent or the fifth in a delegation chain. The policy lives in Eydii, not in any agent's system prompt.
### Why This Matters for CrewAI Specifically

CrewAI supports `Process.hierarchical`, where a manager agent delegates freely to workers. It supports `allow_delegation=True`, where any agent can hand off to any other. These are powerful features — but they multiply the surface area for policy violations. Eydii gives you a single control plane across all of them.
## Error Handling
The package handles three failure modes:
### 1. EYDII API Unreachable

Controlled by `fail_closed`:

```python
# fail_closed=True (default) — deny when Eydii is down
tool = EydiiVerifyTool(policy="controls", fail_closed=True)
# Agent receives: "ERROR: Verification unavailable — ConnectionError(...)"

# fail_closed=False — allow when Eydii is down (use for non-critical paths)
tool = EydiiVerifyTool(policy="controls", fail_closed=False)
```
### 2. Invalid Parameters

If the agent passes malformed JSON as `params`, the tool wraps it safely:

```python
# Agent calls: eydii_verify(action="test", params="not valid json")
# Tool parses it as: {"raw": "not valid json"} and proceeds with verification
```
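That fallback behavior can be sketched in a few lines. This is an illustrative reimplementation, not the package's source; in particular, wrapping non-object JSON (like a bare number) is an assumption:

```python
import json

def coerce_params(params: str) -> dict:
    """Parse params as JSON; fall back to wrapping the raw string.

    Illustrative sketch of the documented behavior, not the package's code.
    """
    try:
        parsed = json.loads(params)
        # Assumption: non-object JSON is also wrapped so callers always get a dict.
        return parsed if isinstance(parsed, dict) else {"raw": params}
    except json.JSONDecodeError:
        return {"raw": params}

# coerce_params('{"amount": 500}') -> {"amount": 500}
# coerce_params("not valid json")  -> {"raw": "not valid json"}
```

Wrapping instead of raising means a sloppy tool call still reaches the policy engine, where it can be denied with a useful reason rather than crashing the agent.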
### 3. Task Guardrail Failures

When the guardrail denies output, CrewAI receives structured feedback:

```python
# Guardrail returns:
# (False, "EYDII policy violation: Response contains PII. Please revise your output to comply with the policy.")
# CrewAI retries the task with this feedback appended to the prompt
```
All errors are logged via Python's `logging` module under the `eydii_crewai` logger:

```python
import logging

logging.getLogger("eydii_crewai").setLevel(logging.DEBUG)
```
## Environment Variables
| Variable | Required | Description |
|---|---|---|
| `VERITERA_API_KEY` | Yes (unless passed via `api_key=`) | Your EYDII API key. Get one at veritera.ai/dashboard. |
| `OPENAI_API_KEY` | Yes (for CrewAI's default LLM) | Your OpenAI key for the underlying language model. |
You can also pass the API key directly to avoid environment variables:

```python
tool = EydiiVerifyTool(api_key="vt_live_...", policy="my-policy")
```
## Other Eydii Integrations
Eydii works across the major agent frameworks. Use the same policies and audit trail regardless of which framework you choose.
| Framework | Package | Install |
|---|---|---|
| OpenAI Agents SDK | `openai-eydii` | `pip install openai-eydii` |
| LangGraph | `langgraph-eydii` | `pip install langgraph-eydii` |
| LlamaIndex | `llamaindex-eydii` | `pip install llamaindex-eydii` |
| Python SDK | `veritera` | `pip install veritera` |
| JavaScript SDK | `@anthropic-ai/veritera` | `npm install veritera` |
## License

MIT — Eydii by Veritera AI