# Agent Trust LangChain Integration
LangChain integration for TrustAgents - protect your LangChain applications from prompt injection and malicious content.
## Installation

```bash
pip install agent-trust-langchain
```
## Features

- `TrustGuardLoader` - wrap any document loader with threat scanning
- `TrustGuardRetriever` - protect RAG pipelines from poisoned documents
- `TrustGuardCallback` - scan inputs and outputs during chain execution
- `TrustVerificationCallback` - automatic message scanning
- `AgentTrustTool` - let agents verify other agents
## Quick Start

### Protected Document Loader
Scan documents for threats before processing:
```python
from langchain_community.document_loaders import WebBaseLoader
from agent_trust_langchain import TrustGuardLoader

# Wrap any loader with threat protection
base_loader = WebBaseLoader(["https://example.com/docs"])
loader = TrustGuardLoader(
    base_loader,
    api_key="ta_xxx...",
    on_threat="filter",  # Skip documents with threats
)

# Only returns safe documents
docs = loader.load()
```
### Protected RAG Retriever
Protect your RAG pipeline from poisoned documents:
```python
from langchain_community.vectorstores import Chroma
from agent_trust_langchain import TrustGuardRetriever

# Wrap your retriever (vectorstore is an existing vector store, e.g. Chroma)
base_retriever = vectorstore.as_retriever()
retriever = TrustGuardRetriever(
    retriever=base_retriever,
    api_key="ta_xxx...",
    on_threat="filter",  # Filter out threats
)

# Build a protected RAG chain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer based on context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI()

chain = (
    {"context": retriever, "question": lambda x: x}
    | prompt
    | llm
)

# Poisoned documents are automatically filtered
response = chain.invoke("What is the company policy?")
```
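One practical note: the dict above pipes the retriever's list of `Document` objects straight into the prompt, where it is stringified as a Python list. The usual LCEL pattern joins the page contents into a single string first. A minimal sketch of that formatter (the helper name and the stand-in `Doc` class here are ours, for illustration):

```python
from dataclasses import dataclass

@dataclass
class Doc:
    """Stand-in for langchain_core.documents.Document in this sketch."""
    page_content: str

def format_docs(docs):
    """Join retrieved documents into one context string for the prompt."""
    return "\n\n".join(d.page_content for d in docs)

# In the chain, pipe the retriever through the formatter:
# {"context": retriever | format_docs, "question": lambda x: x}
```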
### Threat Scanning Callback
Scan all inputs and outputs during chain execution:
```python
from langchain_openai import ChatOpenAI
from agent_trust_langchain import ThreatInDocumentError, TrustGuardCallback

callback = TrustGuardCallback(
    api_key="ta_xxx...",
    block_on_threat=True,  # Raise an exception on threat
    scan_type="web",       # Optimized for web content
)
llm = ChatOpenAI(callbacks=[callback])

try:
    response = llm.invoke("Ignore previous instructions...")
except ThreatInDocumentError as e:
    print(f"Blocked: {e.guard_result.reasoning}")
```
## API Reference

### TrustGuardLoader

Wraps any LangChain document loader with threat scanning.
```python
TrustGuardLoader(
    loader: BaseLoader,       # The loader to wrap
    api_key: str = None,      # TrustGuard API key
    on_threat: str = "warn",  # "block", "warn", "filter", "tag"
    min_block_level: ThreatLevel = ThreatLevel.HIGH,
    content_type: ContentSource = ContentSource.DOCUMENT,
)
```
`on_threat` options:

- `"block"` - raise `ThreatInDocumentError` on threat
- `"warn"` - log a warning and continue
- `"filter"` - skip documents with threats
- `"tag"` - add threat info to document metadata
### TrustGuardRetriever

Wraps any retriever to scan retrieved documents.
```python
TrustGuardRetriever(
    retriever: BaseRetriever,  # The retriever to wrap
    api_key: str = None,       # TrustGuard API key
    on_threat: str = "warn",   # Same options as the loader
    min_block_level: ThreatLevel = ThreatLevel.HIGH,
)
```
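The four `on_threat` modes, shared by the loader and retriever, can be sketched in plain Python. This is a conceptual mirror of the documented behavior, not the library's internals; the function, the exception stand-in, and the dict shapes are all names of our choosing:

```python
import logging

class ThreatInDocumentError(Exception):
    """Stand-in for the package's exception, for illustration only."""

def apply_on_threat(doc, scan_result, on_threat="warn"):
    """Conceptual dispatch over the four documented on_threat modes.

    Returns the document to keep, or None when it should be dropped.
    """
    if not scan_result["is_threat"]:
        return doc  # safe documents always pass through
    if on_threat == "block":
        raise ThreatInDocumentError(f"threat in {doc['source']}")
    if on_threat == "warn":
        logging.warning("threat in %s", doc["source"])
        return doc
    if on_threat == "filter":
        return None  # document is skipped
    if on_threat == "tag":
        doc["metadata"]["threat"] = scan_result  # annotate and keep
        return doc
    raise ValueError(f"unknown on_threat mode: {on_threat}")
```

With `"filter"`, a pipeline simply drops the `None` results; with `"tag"`, downstream code can inspect the metadata and decide for itself.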
### TrustGuardCallback

Callback handler for scanning during chain execution.
```python
TrustGuardCallback(
    api_key: str = None,
    block_on_threat: bool = False,
    scan_inputs: bool = True,
    scan_outputs: bool = False,
    scan_type: str = "text",  # "text", "web", "document", "memory"
    on_threat_detected: callable = None,
)
```
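Conceptually, the callback wraps each model call in a scan-then-forward gate: the input is scanned first, and `block_on_threat` decides whether a positive result aborts the call. A plain-Python sketch of that flow (the function name and scan-result shape are illustrative, not the library's internals):

```python
def guarded_invoke(llm_fn, text, scan_fn, block_on_threat=False):
    """Scan `text` before forwarding it to the model, mirroring the
    scan_inputs + block_on_threat behavior described above."""
    result = scan_fn(text)
    if result["is_threat"] and block_on_threat:
        raise RuntimeError(f"blocked: {result['reasoning']}")
    return llm_fn(text)  # safe (or non-blocking) inputs pass through
```

With `block_on_threat=False`, threats are surfaced (e.g. via `on_threat_detected`) without interrupting the chain.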
### TrustVerificationCallback

Callback for basic message scanning (uses the `scan_text` API).
```python
TrustVerificationCallback(
    block_on_threat: bool = False,
    min_block_level: ThreatLevel = ThreatLevel.HIGH,
    scan_human_messages: bool = True,
    scan_ai_messages: bool = False,
)
```
## Examples

### Protected Web Research Agent
```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_openai import ChatOpenAI
from agent_trust_langchain import TrustGuardLoader, TrustGuardCallback

urls = ["https://example.com/article"]  # pages to research

# Protected loader - filters malicious web pages
loader = TrustGuardLoader(
    WebBaseLoader(urls),
    api_key="ta_xxx...",
    on_threat="filter",
    content_type=ContentSource.WEB,
)

# Protected LLM - scans inputs
callback = TrustGuardCallback(
    api_key="ta_xxx...",
    block_on_threat=True,
    scan_type="web",
)
llm = ChatOpenAI(callbacks=[callback])

# Safe web research
docs = loader.load()
response = llm.invoke(f"Summarize: {docs[0].page_content}")
```
### Protected RAG Pipeline
```python
from langchain.chains import RetrievalQA
from langchain_community.document_loaders import DirectoryLoader
from langchain_community.vectorstores import Chroma
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_openai import ChatOpenAI
from agent_trust_langchain import TrustGuardLoader, TrustGuardRetriever

# Scan documents before indexing
loader = TrustGuardLoader(
    DirectoryLoader("./documents"),
    api_key="ta_xxx...",
    on_threat="filter",
)
safe_docs = loader.load()

# Create vectorstore with only safe documents
vectorstore = Chroma.from_documents(safe_docs, OpenAIEmbeddings())

# Protect retrieval too (in case of dynamic additions)
retriever = TrustGuardRetriever(
    retriever=vectorstore.as_retriever(),
    api_key="ta_xxx...",
    on_threat="filter",
)

# Build the chain
chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(),
    retriever=retriever,
)
```
## Error Handling

```python
from agent_trust_langchain import ThreatInDocumentError

try:
    docs = loader.load()
except ThreatInDocumentError as e:
    print(f"Threat in: {e.document_source}")
    print(f"Verdict: {e.guard_result.verdict}")
    print(f"Threats: {[t.pattern_name for t in e.guard_result.threats]}")
```
## License
MIT License
## Links
- TrustAgents: https://trustagents.dev
- Documentation: https://trustagents.dev/docs
- GitHub: https://github.com/jd-delatorre/trustlayer