# langchain-ai-identity

Secure your LangChain agents with per-agent identity, policy enforcement, and tamper-proof audit logs — in 5 lines of code.

AI Identity is the identity layer for AI agents — Okta for AI. Every agent gets a verifiable UUID identity, a cryptographic API key, policy-based guardrails, and an HMAC-chained audit log. This package makes it trivial to wire that into any LangChain agent.
## Installation

```bash
pip install langchain-ai-identity
```

For CrewAI support:

```bash
pip install "langchain-ai-identity[crewai]"
```
## Quick Start

```python
from langchain_community.tools import DuckDuckGoSearchRun

from langchain_ai_identity import create_ai_identity_agent

agent = create_ai_identity_agent(
    tools=[DuckDuckGoSearchRun()],
    agent_id="your-agent-uuid",        # from AI Identity dashboard
    ai_identity_api_key="aid_sk_...",  # show-once key from agent creation
    openai_api_key="sk-...",
)

result = agent.invoke({"input": "What is the latest news about AI agent security?"})
print(result["output"])
```

That's it. Every tool call is policy-enforced. Every LLM call is audited. Nothing else to configure.
## How It Works

```
Your Agent
│
├── LLM call (ChatOpenAI)
│        │
│        ▼
│   AIIdentityChatOpenAI
│        │
│        ├── POST /gateway/enforce → AI Identity Gateway ──► OpenAI (if allowed)
│        └── Audit log → AI Identity API
│
└── Tool call (search, calculator, …)
         │
         ▼
    AIIdentityToolkit
         │
         ├── POST /gateway/enforce → AI Identity Gateway ──► Tool._run() (if allowed)
         └── Audit log → AI Identity API
```

Every request to an LLM or tool is pre-checked against the agent's policy before executing. If the policy denies it, the call is blocked and the denial is logged. All events — whether allowed or denied — are appended to a tamper-proof HMAC-chained audit log.
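To make the tamper-evidence concrete, here is a minimal, self-contained sketch of how an HMAC chain links audit entries and pinpoints the first broken link. The secret, entry shape, and field names below are illustrative assumptions for this sketch, not AI Identity's actual wire format.

```python
import hashlib
import hmac
import json

SECRET = b"audit-signing-secret"  # hypothetical server-held signing key


def append_entry(chain: list, event: dict) -> None:
    """Link a new entry to the previous one: MAC over (previous MAC + payload)."""
    prev_mac = chain[-1]["mac"] if chain else "genesis"
    payload = json.dumps(event, sort_keys=True)
    mac = hmac.new(SECRET, (prev_mac + payload).encode(), hashlib.sha256).hexdigest()
    chain.append({"event": event, "mac": mac})


def verify_chain(chain: list):
    """Return the index of the first broken link, or None if the chain is intact."""
    prev_mac = "genesis"
    for i, entry in enumerate(chain):
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hmac.new(SECRET, (prev_mac + payload).encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return i  # editing any earlier entry breaks every MAC from here on
        prev_mac = entry["mac"]
    return None


chain = []
append_entry(chain, {"event_type": "llm_call", "decision": "allow"})
append_entry(chain, {"event_type": "tool_call", "decision": "deny"})
assert verify_chain(chain) is None          # untouched chain verifies

chain[0]["event"]["decision"] = "tampered"  # rewrite a logged decision
assert verify_chain(chain) == 0             # verification pinpoints the break
```

Because each MAC covers the previous MAC, an attacker who edits one entry would have to recompute every subsequent MAC — impossible without the signing secret.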
## What Gets Enforced

### Policy rules

Policies live in AI Identity and control which endpoints and methods an agent is allowed to call. Rules can be as broad or fine-grained as you need:

```python
# Creating a policy via the AI Identity SDK (ai-identity package)
from ai_identity import SyncAIIdentityClient

client = SyncAIIdentityClient(api_key="your-dev-api-key")

client.policies.create(
    agent_id="your-agent-uuid",
    rules=[
        {"endpoint": "/v1/chat/completions", "method": "POST", "effect": "allow"},
        {"endpoint": "/tools/search", "method": "POST", "effect": "allow"},
        {"endpoint": "/tools/send_email", "method": "POST", "effect": "deny"},
    ],
)
```
### Key scoping

- **Runtime keys** (`aid_sk_...`) — used by agents at runtime. Rejected on management endpoints.
- **Admin keys** (`aid_admin_...`) — used for API management. Rejected on proxy/tool endpoints.

The gateway enforces key-type separation automatically. A compromised runtime key cannot be used to create new agents or rotate credentials.
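The separation amounts to a prefix check on the key against the class of endpoint being called. The endpoint groupings below are illustrative assumptions for this sketch, not the gateway's actual route table:

```python
# Illustrative sketch of key-type separation; the path prefixes are assumptions.
MANAGEMENT_PREFIXES = ("/api/v1/agents", "/api/v1/policies")
RUNTIME_PREFIXES = ("/gateway/enforce", "/tools/")


def key_allowed(api_key: str, path: str) -> bool:
    if api_key.startswith("aid_admin_"):
        return path.startswith(MANAGEMENT_PREFIXES)  # admin keys: management only
    if api_key.startswith("aid_sk_"):
        return path.startswith(RUNTIME_PREFIXES)     # runtime keys: proxy/tools only
    return False                                     # unknown key type: reject


assert key_allowed("aid_sk_abc", "/gateway/enforce")
assert not key_allowed("aid_sk_abc", "/api/v1/agents")  # stolen runtime key can't manage
assert not key_allowed("aid_admin_xyz", "/tools/search")
```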
### Fail modes

```python
agent = create_ai_identity_agent(
    ...,
    fail_closed=True,  # default — a gateway error or policy denial raises an exception
)

# Or fail-open: a gateway error logs a warning and the call continues
agent = create_ai_identity_agent(
    ...,
    fail_closed=False,
)
```
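The two modes boil down to the following decision logic. `enforce` and `GatewayError` are hypothetical names used only for this sketch; `call_gateway` stands in for the real gateway round-trip:

```python
import logging

logger = logging.getLogger("ai_identity")


class GatewayError(Exception):
    """Hypothetical: raised on policy denial or gateway failure when fail-closed."""


def enforce(call_gateway, fail_closed: bool = True) -> bool:
    """Sketch of fail-closed vs. fail-open handling.

    call_gateway is any zero-arg callable returning True (allow) or
    False (deny), and raising on network failure.
    """
    try:
        allowed = call_gateway()
    except Exception as exc:
        if fail_closed:
            raise GatewayError(f"gateway unreachable: {exc}") from exc
        logger.warning("gateway unreachable, continuing (fail-open): %s", exc)
        return True  # fail-open: warn and let the call proceed
    if not allowed and fail_closed:
        raise GatewayError("call denied by policy")
    return allowed


assert enforce(lambda: True) is True                   # allowed either way
assert enforce(lambda: 1 / 0, fail_closed=False) is True  # outage, fail-open continues
```

Fail-closed is the safe default for production: if the policy check cannot complete, the agent stops rather than acting unchecked.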
## Drop-in Replacement

Swap `ChatOpenAI` for `AIIdentityChatOpenAI` in any existing LangChain chain:

```python
# Before
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", openai_api_key="sk-...")

# After — adds gateway enforcement and automatic audit logging
from langchain_ai_identity import AIIdentityChatOpenAI

llm = AIIdentityChatOpenAI(
    model="gpt-4o",
    openai_api_key="sk-...",
    agent_id="your-agent-uuid",
    ai_identity_api_key="aid_sk_...",
)
```

All existing LangChain chains, LCEL expressions, and agents work unchanged.
## Attach the Callback to Any Chain

If you already have a chain and just want audit logging (without gateway enforcement), attach the callback handler:

```python
from langchain_ai_identity import AIIdentityCallbackHandler

handler = AIIdentityCallbackHandler(
    agent_id="your-agent-uuid",
    api_key="aid_sk_...",
    fail_closed=False,  # log warnings, never crash the chain
)

# Attach to any LangChain object that accepts callbacks
chain = some_existing_chain.with_config(callbacks=[handler])
```
## Wrap Tools with Policy Enforcement

Use `AIIdentityToolkit` to add enforcement to any list of tools, independent of the LLM:

```python
from langchain_community.tools import DuckDuckGoSearchRun, WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper

from langchain_ai_identity import AIIdentityToolkit

toolkit = AIIdentityToolkit(
    tools=[DuckDuckGoSearchRun(), WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())],
    agent_id="your-agent-uuid",
    api_key="aid_sk_...",
)

# Pre-flight check — see what's allowed before you run
for tool_name in ["duckduckgo_search", "wikipedia"]:
    result = toolkit.check_tool_access(tool_name)
    print(tool_name, "→", result["decision"])

# Get the wrapped tools to pass to your agent
safe_tools = toolkit.get_tools()
```
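The pre-flight check can also drive a filter, so the agent only ever sees tools the policy currently allows. `FakeToolkit` below is a stand-in so the sketch runs offline; the `{"decision": ...}` shape follows the pre-flight example above:

```python
def allowed_only(toolkit, tool_names):
    """Keep only the tool names the policy currently allows."""
    return [
        name for name in tool_names
        if toolkit.check_tool_access(name)["decision"] == "allow"
    ]


class FakeToolkit:
    """Offline stand-in for AIIdentityToolkit (illustrative only)."""

    def check_tool_access(self, name):
        return {"decision": "deny" if name == "send_email" else "allow"}


print(allowed_only(FakeToolkit(), ["duckduckgo_search", "send_email"]))
# → ['duckduckgo_search']
```

Filtering up front means a denied tool never even appears in the LLM's tool list, rather than being blocked at call time.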
## Audit Logs

Every event is logged to an append-only, HMAC-chained audit log. Query it programmatically:

```python
import httpx
from datetime import datetime, timedelta, timezone

start = (datetime.now(tz=timezone.utc) - timedelta(hours=1)).isoformat()

with httpx.Client() as client:
    resp = client.get(
        "https://ai-identity-api.onrender.com/api/v1/audit",
        params={"agent_id": "your-agent-uuid", "start_date": start},
        headers={"X-API-Key": "aid_sk_..."},
    )
    entries = resp.json()["items"]

for entry in entries:
    print(entry["event_type"], "→", entry["decision"])
```
### Verify chain integrity

```python
with httpx.Client() as client:
    resp = client.get(
        "https://ai-identity-api.onrender.com/api/v1/audit/verify",
        params={"agent_id": "your-agent-uuid"},
        headers={"X-API-Key": "aid_sk_..."},
    )

data = resp.json()
print("Chain valid:", data["valid"])
print("Entries checked:", data["entries_checked"])
```

If `valid` is `False`, the response includes the exact position of the first hash break — useful for incident investigation.
### Forensics report (SOC 2)

```python
with httpx.Client() as client:
    resp = client.get(
        "https://ai-identity-api.onrender.com/api/v1/audit/report",
        params={
            "agent_id": "your-agent-uuid",
            "start_date": "2025-01-01T00:00:00Z",
            "end_date": "2025-12-31T23:59:59Z",
            "format": "json",
        },
        headers={"X-API-Key": "aid_sk_..."},
    )

report = resp.json()
print("Chain of custody valid:", report["chain_of_custody"]["valid"])
```
## CrewAI Integration

CrewAI tools are LangChain-compatible — wrap them with `AIIdentityToolkit` before passing them to an `Agent`:

```python
from crewai import Agent, Crew, Task, Process
from crewai_tools import SerperDevTool
from langchain_openai import ChatOpenAI

from langchain_ai_identity import AIIdentityCallbackHandler, AIIdentityToolkit

toolkit = AIIdentityToolkit(
    tools=[SerperDevTool()],
    agent_id="your-agent-uuid",
    api_key="aid_sk_...",
)

researcher = Agent(
    role="Researcher",
    goal="Find information on AI agent identity.",
    backstory="Expert in AI security.",
    tools=toolkit.get_tools(),  # enforced tools
    llm=ChatOpenAI(
        model="gpt-4o",
        callbacks=[AIIdentityCallbackHandler(
            agent_id="your-agent-uuid",
            api_key="aid_sk_...",
        )],
    ),
)
```

See `examples/crewai_integration.py` for the full example.
## API Reference

| Class / Function | Description |
|---|---|
| `create_ai_identity_agent()` | Factory: create a fully wired `AgentExecutor` in one call |
| `AIIdentityChatOpenAI` | Drop-in `ChatOpenAI` with gateway enforcement + automatic audit callback |
| `AIIdentityToolkit` | Wraps tool lists with per-call gateway enforcement |
| `AIIdentityCallbackHandler` | Sync LangChain callback that logs to the AI Identity audit API |
| `AIIdentityAsyncCallbackHandler` | Async version of the callback handler |
## Configuration Reference

### create_ai_identity_agent()

| Parameter | Type | Default | Description |
|---|---|---|---|
| `tools` | `List[BaseTool]` | required | LangChain tools to secure |
| `agent_id` | `str` | required | Agent UUID from AI Identity |
| `ai_identity_api_key` | `str` | required | `aid_sk_...` runtime key |
| `openai_api_key` | `str` | required | OpenAI API key |
| `model` | `str` | `"gpt-4o"` | OpenAI model name |
| `fail_closed` | `bool` | `True` | Raise on denial/error vs. warn-and-continue |
| `ai_identity_timeout` | `float` | `5.0` | Gateway call timeout in seconds |
| `verbose` | `bool` | `False` | Print agent reasoning steps |
| `max_iterations` | `int` | `10` | Max agent reasoning steps |
## License

MIT — see `LICENSE`.