# langchain-velatir

**AI Governance, Compliance, and Human-in-the-Loop for LangChain**

Official LangChain integration for Velatir: add enterprise-grade governance, compliance checking, and human approval workflows to your LangChain agents.
## Features
- 🛡️ Compliance Guardrails: Automatically validate agent responses against GDPR, EU AI Act, Bias & Fairness, and Prompt Injection policies
- 👥 Human-in-the-Loop: Require human approval for sensitive operations before execution
- 📊 Full Audit Trail: All decisions logged in Velatir dashboard with complete context
- 🔄 Multi-Channel Approvals: Receive approval requests via Slack, Microsoft Teams, Email, or Web UI
- ⚡ Easy Integration: Drop-in middleware that works with existing LangChain agents
- 🎯 Flexible Policies: Configure which tools need approval and which policies to enforce
## Installation

```bash
pip install langchain-velatir
```

Requirements:

- Python 3.10+
- LangChain 1.0 alpha or later
- Velatir account and API key (sign up at velatir.com)
## Quick Start

### Guardrails Example

Add governance to your agent responses. Velatir automatically evaluates responses against your configured policies:

```python
from langchain_velatir import VelatirGuardrailMiddleware
from langchain.agents import create_react_agent

# Create guardrail middleware.
# Policies (GDPR, EU AI Act, Bias & Fairness, etc.) are configured in the Velatir dashboard.
guardrails = VelatirGuardrailMiddleware(
    api_key="your-velatir-api-key",
    mode="blocking",  # Block responses that Velatir denies
)

# Add to your agent
agent = create_react_agent(
    model,
    tools,
    middleware=[guardrails],
)
```
### Human-in-the-Loop Example

Send tool calls to Velatir for evaluation. Velatir determines whether human approval is needed based on your configured flows:

```python
from langchain_velatir import VelatirHITLMiddleware
from langchain.agents import create_react_agent

# Create HITL middleware.
# Approval flows and routing are configured in the Velatir dashboard.
hitl = VelatirHITLMiddleware(
    api_key="your-velatir-api-key",
    polling_interval=5.0,
    timeout=600.0,  # 10 minutes max wait
    require_approval_for=["delete_user", "execute_payment"],  # Optional filter
)

# Add to your agent
agent = create_react_agent(
    model,
    tools,
    middleware=[hitl],
)
```
### Combined Guardrails + HITL

Use both for complete governance. All policies and flows are configured in your Velatir dashboard:

```python
from langchain_velatir import VelatirGuardrailMiddleware, VelatirHITLMiddleware
from langchain.agents import create_react_agent

# Guardrails evaluate responses AFTER the agent generates them
guardrails = VelatirGuardrailMiddleware(
    api_key="your-api-key",
    mode="blocking",
)

# HITL evaluates tool calls BEFORE execution
hitl = VelatirHITLMiddleware(
    api_key="your-api-key",
    require_approval_for=["process_payment", "delete_data"],  # Optional filter
)

# Add both to your agent
agent = create_react_agent(
    model,
    tools,
    middleware=[hitl, guardrails],  # Order matters: HITL first, then guardrails
)
```
## How It Works

### VelatirGuardrailMiddleware

Follows the pattern of LangChain's SafetyGuardrailMiddleware:

- Uses the `after_agent` hook to intercept agent responses
- Sends responses to the Velatir API for evaluation
- Velatir's backend evaluates them against your configured policies and flows:
  - GDPR compliance checking
  - EU AI Act requirements
  - Bias & Fairness detection
  - Prompt Injection prevention
  - Custom policies you've configured
- Velatir returns a decision (approved, denied, or requires intervention)
- The middleware blocks or logs based on its mode

Policy Configuration: All policies are configured in your Velatir dashboard, not in code, so non-technical stakeholders can manage compliance requirements without code changes.

Modes:

- `blocking` - Block responses that Velatir denies (default)
- `logging` - Log Velatir's decisions but allow execution
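The difference between the two modes can be sketched in plain Python. This is a hypothetical simplification for illustration, not the library's actual internals; the `handle_decision` helper and the `decision` dictionary shape are assumptions, and `VelatirPolicyViolationError` is stubbed locally so the sketch is self-contained:

```python
# Hypothetical sketch of blocking vs. logging mode; not the real middleware code.
# The real package exports VelatirPolicyViolationError; we stub it here.

class VelatirPolicyViolationError(Exception):
    def __init__(self, violated_policies, review_task_id):
        super().__init__(f"Policy violation: {violated_policies}")
        self.violated_policies = violated_policies
        self.review_task_id = review_task_id


def handle_decision(decision: dict, mode: str, response: str) -> str:
    """Apply a Velatir decision to an agent response.

    decision: e.g. {"status": "denied", "policies": ["GDPR"], "task_id": "t-1"}
    mode: "blocking" raises on denial; "logging" only records it.
    """
    if decision["status"] == "approved":
        return response
    if mode == "logging":
        # Record the violation but let the response through
        print(f"[velatir] would block (policies: {decision['policies']})")
        return response
    # blocking mode: stop the response from reaching the user
    raise VelatirPolicyViolationError(decision["policies"], decision["task_id"])
```

In logging mode the same denied decision that would raise in blocking mode is merely printed, which is why it suits development and monitoring.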
### VelatirHITLMiddleware

Implements human-in-the-loop approval workflows:

- Uses the `modify_model_request` hook to intercept tool calls
- Sends tool calls to the Velatir API for evaluation
- Velatir's backend evaluates tool calls against your configured flows:
  - Determines the risk level
  - Decides if human approval is needed
  - Routes to appropriate reviewers (Slack, Teams, Email, Web)
  - May approve instantly for low-risk actions
- Pauses execution if human review is required
- Polls for Velatir's decision
- Executes or blocks based on the decision

Flow Configuration: All flows (when to require approval, who to route to, how many approvals, escalation paths) are configured in your Velatir dashboard. You can update flows without changing code.

Decision Types:

- ✅ Approved - The tool executes normally (may be instant or after human review)
- ❌ Rejected - Tool execution is blocked and `VelatirApprovalDeniedError` is raised
- 📝 Change Requested - Feedback is provided and execution is blocked
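The pause-and-poll cycle above can be illustrated with a self-contained sketch. The `wait_for_decision` helper and the status strings are hypothetical stand-ins for what the middleware does internally (it talks to the Velatir API rather than a callable), and `VelatirTimeoutError` is stubbed for illustration:

```python
import time


class VelatirTimeoutError(Exception):
    """Stub of the package's timeout error, for illustration only."""


def wait_for_decision(get_status, polling_interval: float, timeout: float) -> str:
    """Poll for a decision until it is no longer pending.

    get_status: callable returning "pending", "approved", "rejected",
                or "change_requested" (stands in for an API call).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status != "pending":
            return status
        time.sleep(polling_interval)
    raise VelatirTimeoutError(f"No decision after {timeout}s")


# Simulate a review that a human approves on the third poll
statuses = iter(["pending", "pending", "approved"])
result = wait_for_decision(lambda: next(statuses), polling_interval=0.01, timeout=1.0)
```

A longer `polling_interval` reduces API traffic at the cost of slower reaction once the reviewer decides, which is why the Best Practices section pairs long timeouts with longer intervals.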
## Configuration

### Guardrail Middleware Options

```python
VelatirGuardrailMiddleware(
    api_key="your-api-key",       # Required: Velatir API key
    mode="blocking",              # "blocking" or "logging"
    base_url=None,                # Optional: custom API URL
    timeout=10.0,                 # API request timeout in seconds
    approval_timeout=30.0,        # Max wait for a Velatir decision
    polling_interval=2.0,         # Seconds between polls
    blocked_message="Response requires review...",  # Message shown when blocked
    metadata={},                  # Optional metadata attached to all tasks
)
```

### HITL Middleware Options

```python
VelatirHITLMiddleware(
    api_key="your-api-key",       # Required: Velatir API key
    base_url=None,                # Optional: custom API URL
    polling_interval=5.0,         # Seconds between polls
    timeout=600.0,                # Max wait time for approval
    require_approval_for=["tool1"],  # Optional: which tools to send (None = all)
    metadata={},                  # Optional metadata attached to all tasks
)
```
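Rather than hard-coding the API key, you can read it from the environment, as the bundled examples do with `VELATIR_API_KEY`. The `velatir_kwargs` helper below is purely illustrative (not part of the package), and its non-key values are just example defaults:

```python
import os


def velatir_kwargs() -> dict:
    """Build middleware keyword arguments from the environment.

    Hypothetical helper: VELATIR_API_KEY matches the variable used by
    the bundled examples; the other values are illustrative defaults.
    """
    api_key = os.environ.get("VELATIR_API_KEY")
    if not api_key:
        raise RuntimeError("Set VELATIR_API_KEY before constructing the middleware")
    return {
        "api_key": api_key,
        "polling_interval": 5.0,
        "timeout": 600.0,
    }


# Usage (with langchain-velatir installed):
# hitl = VelatirHITLMiddleware(**velatir_kwargs())
```

Keeping the key out of source code also keeps it out of version control and lets each environment (dev, staging, production) use its own credentials.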
## Error Handling

The middleware raises custom exceptions for different scenarios:

```python
from langchain_velatir import (
    VelatirPolicyViolationError,
    VelatirApprovalDeniedError,
    VelatirTimeoutError,
)

try:
    result = agent.invoke({"input": "Process customer data"})
except VelatirPolicyViolationError as e:
    print(f"Policy violation: {e.violated_policies}")
    print(f"Review task: {e.review_task_id}")
except VelatirApprovalDeniedError as e:
    print(f"Approval denied: {e.requested_change}")
    print(f"Review task: {e.review_task_id}")
except VelatirTimeoutError as e:
    print(f"Timeout after {e.timeout_seconds}s")
    print(f"Review task: {e.review_task_id}")
```

## Examples

See the examples/ directory for complete examples:

- `example_guardrails.py` - Compliance checking with guardrails
- `example_hitl.py` - Human approval workflows
- `example_combined.py` - Guardrails and HITL together

Run the examples:

```bash
export VELATIR_API_KEY="your-api-key"
export OPENAI_API_KEY="your-openai-key"
python examples/example_guardrails.py
```
## Architecture

### Middleware Integration

```mermaid
flowchart TD
    A[1. User Input] --> B[2. VelatirHITLMiddleware<br/>modify_model_request hook<br/>→ Request human approval<br/>→ Poll for decision]
    B --> C[3. Tool Execution<br/>if approved]
    C --> D[4. Agent Response<br/>Generation]
    D --> E[5. VelatirGuardrailMiddleware<br/>after_agent hook<br/>→ Validate against policies<br/>→ Block if violations found]
    E --> F[6. Final Response<br/>to User]

    style A fill:#e1f5ff
    style B fill:#fff4e1
    style C fill:#e8f5e9
    style D fill:#e8f5e9
    style E fill:#fff4e1
    style F fill:#e1f5ff
```

### Velatir Integration Flow

```mermaid
graph LR
    A[LangChain<br/>Middleware] -->|HTTP| B[Velatir<br/>API Server]
    B --> C[Policy Engine<br/>• GDPR<br/>• EU AI Act<br/>• Bias & Fairness]
    B --> D[Approval Channels<br/>• Slack<br/>• Teams<br/>• Email<br/>• Web]
    D --> E[Human Decision<br/>Approve/Deny]
    E -->|Decision| B
    C -->|Evaluation| F[Decision Logged<br/>in Velatir]

    style A fill:#e1f5ff
    style B fill:#fff4e1
    style C fill:#ffe8e8
    style D fill:#e8f5e9
    style E fill:#f3e5f5
    style F fill:#fff9c4
```
## Best Practices

### 1. Layer Your Protections

```python
# HITL for high-risk actions (before execution)
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    require_approval_for=["delete_data", "execute_payment", "modify_user"],
)

# Guardrails for compliance (after generation)
# Policies are configured in the Velatir dashboard
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="blocking",
)

# Apply both
agent = create_react_agent(model, tools, middleware=[hitl, guardrails])
```

### 2. Use Appropriate Modes

```python
# Production: block violations
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="blocking",  # Strict enforcement
)

# Development: log for analysis
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="logging",  # Monitor without blocking
)
```

### 3. Configure Timeouts Appropriately

```python
# Quick operations: short timeout
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    timeout=300.0,  # 5 minutes
    polling_interval=3.0,
)

# Critical decisions: longer timeout
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    timeout=1800.0,  # 30 minutes
    polling_interval=10.0,
)
```

### 4. Selective Tool Approval

```python
# Only require approval for sensitive tools
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    require_approval_for=[
        "delete_user",
        "process_payment",
        "access_confidential_data",
    ],
)
# Other tools execute without approval
```
## Use Cases

### Financial Services

```python
# Configure EU AI Act and Bias policies in the Velatir dashboard
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="blocking",
)

# Configure approval flows for financial operations in the dashboard
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    require_approval_for=["execute_trade", "approve_loan", "process_withdrawal"],
)
```

### Healthcare

```python
# Configure GDPR, Bias, and custom HIPAA policies in the Velatir dashboard
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="blocking",
)

# Configure approval flows for medical operations in the dashboard
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    require_approval_for=["access_patient_records", "prescribe_medication"],
)
```

### Customer Support

```python
# Configure Bias and Prompt Injection policies in the Velatir dashboard
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="blocking",
)

# Configure approval flows for customer actions in the dashboard
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    require_approval_for=["issue_refund", "close_account", "escalate_complaint"],
)
```
## Velatir Dashboard
All review tasks, policy violations, and approvals are logged in your Velatir dashboard:
- Real-time monitoring of agent decisions
- Audit trail for compliance reporting
- Analytics on approval patterns and policy violations
- Team management for approval workflows
- Custom policies tailored to your industry
Visit velatir.com to set up your dashboard.
## Development

### Running Tests

```bash
pip install -e ".[dev]"
pytest tests/
```

### Code Formatting

```bash
black langchain_velatir/
ruff check langchain_velatir/
```

### Type Checking

```bash
mypy langchain_velatir/
```

## Contributing

We welcome contributions! Please see our Contributing Guide for details.

## Support

## License

MIT License - see the LICENSE file for details.

Made with ❤️ by Velatir | Enabling safe AI adoption at scale