EYDII Verify tools for LlamaIndex — verify every agent action before execution
llama-index-tools-eydii
Why EYDII?
LlamaIndex agents can query data, call APIs, send emails, write files, and update databases -- all autonomously. But autonomy without oversight is a liability. EYDII sits between your agent's decision and the real-world action, verifying every sensitive operation against your security policies before it executes. If the action violates policy, EYDII blocks it and logs the attempt. If it passes, Eydii returns a cryptographic proof for your audit trail.
Install
pip install llama-index-tools-eydii
This installs the EYDII verification tools alongside the core veritera SDK. You will also need a LlamaIndex LLM provider:
pip install llama-index-tools-eydii llama-index-llms-openai
Prerequisites: Create a Policy
Before using EYDII with LlamaIndex, create a policy that defines what your agent is allowed to do. You only need to do this once:
from veritera import Eydii
eydii = Eydii(api_key="vt_live_...") # Get your key at id.veritera.ai
# Create a policy from code
eydii.create_policy_sync(
name="finance-controls",
description="Controls for document agents with action capabilities",
rules=[
{"type": "action_whitelist", "params": {"allowed": ["email.send", "refund.process", "crm.update"]}},
{"type": "amount_limit", "params": {"max": 10000, "currency": "USD"}},
],
)
# Or generate one from plain English
eydii.generate_policy_sync(
"Allow sending emails, processing refunds under $10,000, and updating CRM records. Block bulk data exports and account deletions.",
save=True,
)
A default policy is created automatically when you sign up — it blocks dangerous actions like database drops and admin overrides. You can use it immediately with policy="default".
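For intuition about how rules like `action_whitelist` and `amount_limit` gate an action, here is a toy evaluator in plain Python. This is purely illustrative -- it is not EYDII's actual evaluation engine, and real policies are evaluated server-side:

```python
# Illustrative only -- a toy policy evaluator, NOT EYDII's implementation.
def evaluate(action: str, params: dict, rules: list[dict]) -> str:
    for rule in rules:
        if rule["type"] == "action_whitelist":
            # Deny anything not explicitly allowed.
            if action not in rule["params"]["allowed"]:
                return "DENIED"
        elif rule["type"] == "amount_limit":
            # Deny amounts over the configured maximum.
            if params.get("amount", 0) > rule["params"]["max"]:
                return "DENIED"
    return "APPROVED"

rules = [
    {"type": "action_whitelist", "params": {"allowed": ["email.send", "refund.process"]}},
    {"type": "amount_limit", "params": {"max": 10000, "currency": "USD"}},
]
print(evaluate("refund.process", {"amount": 500}, rules))    # APPROVED
print(evaluate("refund.process", {"amount": 15000}, rules))  # DENIED
print(evaluate("db.drop", {}, rules))                        # DENIED
```

Every rule must pass for an action to be approved; a single violation denies it.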
Tip: run pip install veritera to get the standalone policy management SDK. See the full policy docs.
Quick Start
import os
from llama_index.core.agent import FunctionAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from eydii_llamaindex import EydiiVerifyToolSpec
os.environ["VERITERA_API_KEY"] = "vt_live_..."
# Create EYDII verification tools
eydii = EydiiVerifyToolSpec(policy="finance-controls") # create this policy first (see above) -- or use "default"
eydii_tools = eydii.to_tool_list()
# Your application tools
def send_payment(amount: float, recipient: str) -> str:
"""Send a payment to a recipient."""
return f"Sent ${amount} to {recipient}"
app_tools = [FunctionTool.from_defaults(fn=send_payment)]
# Create agent with all tools
agent = FunctionAgent(
tools=eydii_tools + app_tools,
llm=OpenAI(model="gpt-4.1"),
system_prompt=(
"Before executing any sensitive action, ALWAYS call verify_action first. "
"Only proceed if the result is APPROVED."
),
)
response = await agent.run("Send $500 to vendor@acme.com")  # run inside an async function or event loop
print(response)
The agent will call verify_action before send_payment. If the amount exceeds your policy threshold, Eydii denies it and the agent explains why it cannot proceed.
Tutorial: Building a Verified Document Agent
This walkthrough builds a practical RAG + action agent -- an agent that reads documents AND takes real-world actions (sends emails, updates CRM records), with EYDII ensuring every action is authorized.
Step 1: Define your application tools
These are the tools your agent needs to do its job. Some are read-only (safe), others mutate state (dangerous).
from llama_index.core.tools import FunctionTool
# -- Read-only tools (low risk) --
def search_documents(query: str) -> str:
"""Search the company knowledge base for relevant documents."""
# In production, this would query a VectorStoreIndex
return (
"Policy DOC-2024-118: Refund requests over $1,000 require VP approval. "
"Requests under $1,000 may be processed by any support agent."
)
def lookup_customer(customer_id: str) -> str:
"""Look up a customer record by ID."""
return (
f"Customer {customer_id}: Acme Corp, tier=enterprise, "
f"account_manager=sarah@company.com, balance_due=$4,200"
)
# -- Write/action tools (high risk -- Eydii must verify these) --
def send_email(to: str, subject: str, body: str) -> str:
"""Send an email to a customer or internal stakeholder."""
# Production: calls your email service API
return f"Email sent to {to}: '{subject}'"
def process_refund(customer_id: str, amount: float, reason: str) -> str:
"""Process a refund for a customer."""
return f"Refund of ${amount:.2f} processed for customer {customer_id}: {reason}"
def update_crm_record(customer_id: str, field: str, value: str) -> str:
"""Update a field on a customer's CRM record."""
return f"CRM updated: {customer_id}.{field} = {value}"
app_tools = [
FunctionTool.from_defaults(fn=search_documents),
FunctionTool.from_defaults(fn=lookup_customer),
FunctionTool.from_defaults(fn=send_email),
FunctionTool.from_defaults(fn=process_refund),
FunctionTool.from_defaults(fn=update_crm_record),
]
Step 2: Add EYDII verification tools
import os
from eydii_llamaindex import EydiiVerifyToolSpec
os.environ["VERITERA_API_KEY"] = "vt_live_..."
eydii = EydiiVerifyToolSpec(
agent_id="support-doc-agent",
policy="customer-support",
)
eydii_tools = eydii.to_tool_list()
This gives the agent three additional tools: verify_action, get_proof, and check_health.
Step 3: Build the agent with a verification-aware system prompt
The system prompt is critical. It tells the agent exactly when and how to use Eydii.
from llama_index.core.agent import FunctionAgent
from llama_index.llms.openai import OpenAI
SYSTEM_PROMPT = """\
You are a customer support agent with access to company documents and customer records.
VERIFICATION RULES -- follow these exactly:
1. Reading documents and looking up customers does NOT require verification.
2. Before calling send_email, process_refund, or update_crm_record, you MUST
call verify_action first with the action name and a JSON string of the parameters.
3. If verify_action returns APPROVED, proceed with the action.
4. If verify_action returns DENIED, do NOT execute the action. Explain the denial
to the user and suggest next steps (e.g., escalate to a manager).
5. After completing a sensitive action, note the proof_id for the audit trail.
Example verification call:
verify_action(action="process_refund", params='{"customer_id": "C-1001", "amount": 750, "reason": "defective product"}')
"""
agent = FunctionAgent(
tools=eydii_tools + app_tools,
llm=OpenAI(model="gpt-4.1"),
system_prompt=SYSTEM_PROMPT,
)
Step 4: Run the agent
import asyncio
async def main():
# Scenario 1: Small refund -- should be approved
response = await agent.run(
"Customer C-1001 (Acme Corp) wants a $400 refund for a defective shipment. "
"Look up their account, check our refund policy, process the refund, "
"and email the customer a confirmation."
)
print("--- Scenario 1 ---")
print(response)
# Scenario 2: Large refund -- should be denied by policy
response = await agent.run(
"Process a $5,000 refund for customer C-1001."
)
print("\n--- Scenario 2 ---")
print(response)
asyncio.run(main())
What happens under the hood
Scenario 1 (approved):
1. Agent calls search_documents("refund policy") --> reads policy (no verification needed)
2. Agent calls lookup_customer("C-1001") --> reads record (no verification needed)
3. Agent calls verify_action("process_refund", ...) --> Eydii returns APPROVED + proof_id
4. Agent calls process_refund("C-1001", 400, ...) --> executes the refund
5. Agent calls verify_action("send_email", ...) --> Eydii returns APPROVED + proof_id
6. Agent calls send_email("customer@acme.com", ...) --> sends confirmation
7. Agent responds with summary and proof IDs
Scenario 2 (denied):
1. Agent calls verify_action("process_refund", ...) --> Eydii returns DENIED: "amount exceeds $1,000 limit"
2. Agent does NOT call process_refund
3. Agent responds: "I'm unable to process this refund. The amount exceeds the $1,000
policy limit. Please escalate to a VP for approval."
Two Integration Points
Eydii for LlamaIndex provides two complementary approaches. Use one or both depending on your needs.
1. EydiiVerifyToolSpec -- explicit verification tools
EydiiVerifyToolSpec is a LlamaIndex BaseToolSpec that adds verification tools directly to your agent's toolbox. The agent decides when to call them based on your system prompt.
from eydii_llamaindex import EydiiVerifyToolSpec
spec = EydiiVerifyToolSpec(
api_key="vt_live_...", # or set VERITERA_API_KEY env var
agent_id="my-agent",
policy="finance-controls",
fail_closed=True,
)
tools = spec.to_tool_list()
Tools provided:
| Tool | Purpose |
|---|---|
| `verify_action(action, params)` | Check whether an action is allowed by policy before executing it. Returns APPROVED or DENIED with a proof ID. |
| `get_proof(proof_id)` | Retrieve the full cryptographic proof record for a previous verification. Use for audits and compliance reporting. |
| `check_health()` | Test connectivity to the Eydii service. Useful for startup checks and monitoring dashboards. |
When to use: You want the agent to reason about verification explicitly. The agent sees the approval/denial and can adapt its behavior -- explaining denials to users, suggesting alternatives, or noting proof IDs in its response.
2. EydiiEventHandler -- automatic audit trail
EydiiEventHandler hooks into LlamaIndex's instrumentation system to intercept and verify every tool call automatically. No changes to your agent's prompt or tool list required.
from eydii_llamaindex import EydiiEventHandler
import llama_index.core.instrumentation as instrument
handler = EydiiEventHandler(
api_key="vt_live_...", # or set VERITERA_API_KEY env var
agent_id="my-agent",
policy="finance-controls",
block_on_deny=True, # raise ValueError on denied actions
fail_closed=True,
)
dispatcher = instrument.get_dispatcher()
dispatcher.add_event_handler(handler)
Behavior:
- Every tool call the agent makes fires an instrumentation event.
- EydiiEventHandler intercepts tool call events and sends them to EYDII for verification.
- If block_on_deny=True and Eydii denies the action, a ValueError is raised, preventing execution.
- If block_on_deny=False, denied actions are logged but still execute (audit-only mode).
- All verifications (approved and denied) are recorded in your Eydii audit log.
When to use: You want a safety net that catches everything regardless of what the system prompt says. Useful as a defense-in-depth layer -- even if the agent skips the verify_action call, the event handler still catches and blocks unauthorized actions.
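The backstop idea can be sketched generically: a wrapper that verifies before delegating, no matter whether the caller remembered to check. This is an illustrative pattern in plain Python, not the EydiiEventHandler implementation (the `guarded` and `toy_verify` names are hypothetical):

```python
# Generic sketch of the "verify before execute" backstop pattern --
# illustrative only, NOT the EydiiEventHandler implementation.
def guarded(fn, verify):
    """Wrap fn so every call is verified first, whatever the caller does."""
    def wrapper(*args, **kwargs):
        verdict = verify(fn.__name__, kwargs)
        if verdict != "APPROVED":
            raise ValueError(f"Action '{fn.__name__}' blocked: {verdict}")
        return fn(*args, **kwargs)
    return wrapper

def charge_customer(customer_id: str, amount: float) -> str:
    return f"Charged ${amount:.2f} to customer {customer_id}"

def toy_verify(action: str, params: dict) -> str:
    # Stand-in policy: deny charges over $1,000.
    return "APPROVED" if params.get("amount", 0) <= 1000 else "DENIED"

safe_charge = guarded(charge_customer, toy_verify)
print(safe_charge(customer_id="C-1", amount=250.0))  # executes normally
```

Calling `safe_charge(customer_id="C-1", amount=5000.0)` raises ValueError instead of executing, mirroring block_on_deny=True behavior.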
Using Both Together
For maximum protection, combine both integration points. The ToolSpec gives the agent awareness of verification (so it can communicate denials gracefully), while the EventHandler acts as a backstop that catches anything the agent misses.
import os
import llama_index.core.instrumentation as instrument
from llama_index.core.agent import FunctionAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from eydii_llamaindex import EydiiVerifyToolSpec, EydiiEventHandler
os.environ["VERITERA_API_KEY"] = "vt_live_..."
# --- Layer 1: ToolSpec (agent-aware verification) ---
eydii_spec = EydiiVerifyToolSpec(
agent_id="billing-agent",
policy="billing-controls",
)
eydii_tools = eydii_spec.to_tool_list()
# --- Layer 2: EventHandler (automatic backstop) ---
handler = EydiiEventHandler(
agent_id="billing-agent",
policy="billing-controls",
block_on_deny=True,
)
dispatcher = instrument.get_dispatcher()
dispatcher.add_event_handler(handler)
# --- Application tools ---
def charge_customer(customer_id: str, amount: float) -> str:
"""Charge a customer's payment method."""
return f"Charged ${amount:.2f} to customer {customer_id}"
def issue_credit(customer_id: str, amount: float) -> str:
"""Issue a credit to a customer's account."""
return f"Issued ${amount:.2f} credit to customer {customer_id}"
app_tools = [
FunctionTool.from_defaults(fn=charge_customer),
FunctionTool.from_defaults(fn=issue_credit),
]
# --- Agent with dual protection ---
agent = FunctionAgent(
tools=eydii_tools + app_tools,
llm=OpenAI(model="gpt-4.1"),
system_prompt=(
"You are a billing agent. Before any charge or credit, call verify_action. "
"Only proceed if APPROVED. Report the proof_id in your response."
),
)
# Even if the LLM ignores the system prompt and calls charge_customer directly,
# the EydiiEventHandler will intercept and block unauthorized actions.
response = await agent.run("Charge customer C-5021 $12,000")
How the two layers interact:
| Scenario | ToolSpec | EventHandler | Result |
|---|---|---|---|
| Agent calls verify_action first, gets APPROVED | Tells agent "approved" | Sees charge_customer call, verifies, allows | Action executes with two verification records |
| Agent calls verify_action first, gets DENIED | Tells agent "denied" | Never fires (agent stops) | Action blocked gracefully with explanation |
| Agent skips verify_action, calls tool directly | Not invoked | Intercepts tool call, verifies, blocks if denied | Safety net catches the gap |
Configuration Reference
EydiiVerifyToolSpec
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | None | EYDII API key. Falls back to the VERITERA_API_KEY env var. |
| base_url | str | https://id.veritera.ai | EYDII API endpoint. Override for self-hosted deployments. |
| agent_id | str | llamaindex-agent | Identifier for this agent in audit logs. Use a unique name per agent. |
| policy | str | None | Default policy to evaluate actions against. Can be overridden per call. |
| fail_closed | bool | True | If True, deny actions when the EYDII API is unreachable. Set to False for fail-open (not recommended for production). |
| timeout | float | 10.0 | HTTP timeout in seconds for EYDII API calls. |
EydiiEventHandler
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | None | EYDII API key. Falls back to the VERITERA_API_KEY env var. |
| base_url | str | https://id.veritera.ai | EYDII API endpoint. Override for self-hosted deployments. |
| agent_id | str | llamaindex-agent | Identifier for this agent in audit logs. |
| policy | str | None | Policy to evaluate actions against. |
| block_on_deny | bool | True | If True, raise ValueError when an action is denied, preventing execution. Set to False for audit-only mode. |
| fail_closed | bool | True | If True, block actions when the EYDII API is unreachable. |
How It Works
User Request
      |
      v
+-------------+
|  LlamaIndex |
|    Agent    |
+------+------+
       |
 (1) Agent decides to call send_email(...)
       |
 (2) verify_action("send_email", '{"to": "user@co.com"}')
       |
       v
+-------------+
|  EYDII API  |  <-- evaluates against policy
+------+------+
       |
  APPROVED + proof_id
       |
 (3) Agent proceeds with send_email(...)
       |
 (4) EydiiEventHandler intercepts (backup verification)
       |
 (5) Action executes
       |
       v
Audit log: proof_id, timestamp, action, verdict, agent_id
1. The agent receives a user request and plans which tools to call.
2. Following the system prompt, the agent calls verify_action with the action name and parameters.
3. EYDII evaluates the action against your configured policy and returns APPROVED or DENIED with a cryptographic proof ID.
4. If approved, the agent calls the real tool. The EydiiEventHandler (if configured) provides a second verification as a safety net.
5. Every verification is recorded in your Eydii audit log with a tamper-proof proof ID for compliance.
Error Handling
EYDII API unreachable
By default, both EydiiVerifyToolSpec and EydiiEventHandler operate in fail-closed mode. If the EYDII API is unreachable, actions are denied:
# ToolSpec returns an error string the agent can read
"ERROR: Verification unavailable -- ConnectionError: ..."
# EventHandler raises ValueError (if block_on_deny=True)
ValueError("Eydii: Action 'send_email' blocked -- verification unavailable.")
To switch to fail-open (not recommended for production):
spec = EydiiVerifyToolSpec(fail_closed=False)
handler = EydiiEventHandler(fail_closed=False, block_on_deny=False)
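The difference between the two modes can be sketched in plain Python (illustrative only; the `verify_with_outage_policy` helper is hypothetical, not part of the package):

```python
# Sketch of fail-closed vs fail-open outage handling -- illustrative only,
# NOT the package's actual code.
def verify_with_outage_policy(do_verify, action: str, fail_closed: bool = True) -> str:
    try:
        return do_verify(action)
    except ConnectionError as exc:
        if fail_closed:
            # Fail-closed: treat a verification outage as a denial.
            return f"ERROR: Verification unavailable -- {exc}"
        # Fail-open: allow the action despite the outage (risky).
        return "APPROVED"

def flaky_verify(action: str) -> str:
    raise ConnectionError("EYDII API unreachable")

print(verify_with_outage_policy(flaky_verify, "send_email"))                     # error string
print(verify_with_outage_policy(flaky_verify, "send_email", fail_closed=False))  # APPROVED
```

Fail-closed trades availability for safety: sensitive actions pause during an outage rather than running unverified.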
Invalid JSON in params
If the params argument to verify_action is not valid JSON, the tool gracefully wraps it:
# This still works -- the raw string is sent as {"raw": "some text"}
verify_action(action="email.send", params="not valid json")
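The wrapping behavior described above can be sketched as follows (an illustrative helper, not the package's internal code):

```python
import json

# Illustrative sketch of the documented fallback -- NOT the package's internal helper.
def coerce_params(raw: str) -> dict:
    try:
        parsed = json.loads(raw)
        # Only accept JSON objects; anything else gets wrapped too.
        return parsed if isinstance(parsed, dict) else {"raw": raw}
    except json.JSONDecodeError:
        return {"raw": raw}

print(coerce_params('{"amount": 500}'))   # {'amount': 500}
print(coerce_params("not valid json"))    # {'raw': 'not valid json'}
```

This keeps verification from hard-failing on a malformed LLM-generated argument: the policy engine still sees the raw text and can deny on it.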
Missing API key
A ValueError is raised immediately at initialization if no API key is found:
ValueError("EYDII API key required. Pass api_key= or set VERITERA_API_KEY env var.")
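The documented fallback order (explicit argument first, then the environment variable) can be sketched like this -- an illustrative helper, not the package's actual initializer:

```python
import os
from typing import Optional

# Illustrative sketch of the documented key-resolution order --
# NOT the package's actual initializer.
def resolve_api_key(api_key: Optional[str] = None) -> str:
    # Explicit argument wins; otherwise fall back to the environment.
    key = api_key or os.environ.get("VERITERA_API_KEY")
    if not key:
        raise ValueError(
            "EYDII API key required. Pass api_key= or set VERITERA_API_KEY env var."
        )
    return key
```

Failing at initialization (rather than on the first tool call) surfaces misconfiguration before the agent starts acting.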
Environment Variables
| Variable | Required | Description |
|---|---|---|
| VERITERA_API_KEY | Yes (unless passed via api_key=) | Your EYDII API key. Get one at veritera.ai/dashboard. |
| OPENAI_API_KEY | For OpenAI LLM | Required if using llama-index-llms-openai as your LLM provider. |
LlamaHub
This package follows the llama-index-tools-* naming convention for LlamaIndex community tool integrations. It is compatible with LlamaHub for discovery and can be installed directly from PyPI:
pip install llama-index-tools-eydii
The package registers the EydiiVerifyToolSpec tool spec and EydiiEventHandler instrumentation handler, both importable from eydii_llamaindex:
from eydii_llamaindex import EydiiVerifyToolSpec, EydiiEventHandler
Other Eydii Integrations
Eydii provides verification packages for all major agent frameworks:
| Framework | Package | Repository |
|---|---|---|
| OpenAI Agents SDK | eydii-openai | GitHub |
| LangGraph | eydii-langgraph | GitHub |
| CrewAI | eydii-crewai | GitHub |
| Python SDK | veritera | GitHub |
| JavaScript SDK | @veritera/sdk | GitHub |
Learn more at veritera.ai/docs.
License
MIT -- Eydii by Veritera AI