Skip to main content

LangChain middleware integration for OWASP Agent Memory Guard — runtime defense against AI agent memory poisoning (ASI06)

Project description

langchain-agent-memory-guard

PyPI License OWASP

LangChain middleware integration for OWASP Agent Memory Guard — runtime defense against AI agent memory poisoning attacks (OWASP ASI06).

Overview

This middleware protects LangChain agents by scanning model inputs, outputs, and tool results for:

  • Prompt injection — Detects injected instructions hidden in memory/context
  • Secret leakage — Catches API keys, tokens, and credentials in responses
  • Content anomalies — Flags abnormally large payloads that may indicate stuffing attacks
  • Protected key tampering — Prevents unauthorized modification of critical memory fields

Installation

pip install langchain-agent-memory-guard

Quick Start

from langchain_agent_memory_guard import MemoryGuardMiddleware
from langchain.agents import create_agent

# Basic usage with strict security policy (recommended)
agent = create_agent(
    "openai:gpt-4o",
    tools=[my_search_tool, my_db_tool],
    middleware=[MemoryGuardMiddleware()],
)

# The agent is now protected — any memory poisoning attempts
# in tool outputs or context will be detected and blocked
result = agent.invoke({"messages": [("user", "Search for recent news")]})

Configuration

Violation Handling Modes

# Block mode (default) — raises MemoryGuardViolation on detection
middleware = MemoryGuardMiddleware(on_violation="block")

# Warn mode — logs warning but allows execution to continue
middleware = MemoryGuardMiddleware(on_violation="warn")

# Strip mode — silently removes violating content
middleware = MemoryGuardMiddleware(on_violation="strip")

Custom Security Policy

from agent_memory_guard import Policy, PolicyRule

# Only check for injection and secrets (skip size checks)
policy = Policy(rules=[PolicyRule.NO_INJECTION, PolicyRule.NO_SECRETS])
middleware = MemoryGuardMiddleware(policy=policy)

# Full strict policy with custom protected keys
policy = Policy.strict(protected_keys=["user.api_key", "system.config"])
middleware = MemoryGuardMiddleware(policy=policy)

How It Works

The middleware hooks into three points in the LangChain agent loop:

Hook What It Scans Threat Mitigated
before_model Messages in agent state Injection in memory/context
after_model Model response content Secret leakage, injection propagation
wrap_tool_call Tool output content Injection via tool results (primary attack vector)

Why Tool Output Scanning Matters

Tool outputs are the primary vector for memory poisoning. An attacker can embed prompt injection payloads in:

  • Web pages fetched by a search tool
  • Database records returned by a query tool
  • API responses from external services

This middleware catches these attacks before they can influence the agent's behavior.

Error Handling

from langchain_agent_memory_guard import MemoryGuardMiddleware
from langchain_agent_memory_guard.middleware import MemoryGuardViolation

middleware = MemoryGuardMiddleware(on_violation="block")

try:
    result = agent.invoke({"messages": [("user", "Process this data")]})
except MemoryGuardViolation as e:
    print(f"Attack detected: {e}")
    # Handle the violation (alert, log, fallback response, etc.)

Metrics

middleware = MemoryGuardMiddleware()
# ... after running the agent ...
print(f"Total violations detected: {middleware.violation_count}")

Related

License

Apache 2.0

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

langchain_agent_memory_guard-0.1.0.tar.gz (6.9 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

langchain_agent_memory_guard-0.1.0-py3-none-any.whl (6.6 kB view details)

Uploaded Python 3

File details

Details for the file langchain_agent_memory_guard-0.1.0.tar.gz.

File metadata

File hashes

Hashes for langchain_agent_memory_guard-0.1.0.tar.gz
Algorithm Hash digest
SHA256 64d3944954a1416d50c3fcc447bb88968f799100ef2e88b56ee6567d3131b42e
MD5 0093a53db4c154b3162104e00d538e73
BLAKE2b-256 37c69febfd778bd0ba85dea2f4e768f67ad695542dc619216f0c2b8379f64de9

See more details on using hashes here.

File details

Details for the file langchain_agent_memory_guard-0.1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for langchain_agent_memory_guard-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 6761af54bc82909075fe13e308336da58e793b5ba89c57777aacb547c5afe083
MD5 35646064b6f8fe6a1a0bf77ebdb8f97d
BLAKE2b-256 e069a420d388c4822f90f22f2ed6744bc997d40c1100fb8aa1cf54f00692ad18

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page