TESSERA
AI Security Scanner for Compound Attack Chain Detection
Version: 2.0.0
Date: April 2026
What
TESSERA detects compound attack chains in AI agent systems using CFPE rules, a rule-based engine for known vulnerability patterns. It also supports optional LLM-powered analysis for semantic vulnerability detection.
The primary production surface is the public Python package and CLI. The FastAPI service is supported as a secondary deployment target for container-on-VM environments behind a reverse proxy.
Features
- 10 CFPE Detection Patterns - Comprehensive coverage of AI agent vulnerabilities
- Multiple Output Formats - Text, JSON, SARIF, HTML
- LLM Integration - Optional AI-powered analysis (OpenAI, Anthropic, Ollama)
- CI/CD Ready - GitHub Actions, pre-commit hooks
- MCP Server - Model Context Protocol support
Install
pip install tessera-security
From source:
git clone https://github.com/Devaretanmay/TESSERA.git
cd TESSERA
pip install -e .
Support Matrix
- Supported Python versions: 3.10, 3.11, 3.12
- Operational default: 3.11
- Package + CLI are the primary supported production surface
- FastAPI is a secondary production surface
Quick Start
CLI
# Scan a topology
tessera scan --config my_agent.yaml
# Output formats: text (default), json, sarif, html
tessera scan --config my_agent.yaml --format sarif
# List all detection rules
tessera list-rules
# Explain a specific rule
tessera explain CFPE-0001
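The topology YAML schema isn't reproduced here; the sketch below mirrors the fields of the programmatic `Graph`/`Node`/`Edge` API further down, but every field name is an assumption — check the shipped `examples/` directory for real topology files.

```yaml
# Hypothetical sketch only -- field names assumed from the programmatic API;
# see the examples/ directory for the actual schema.
system: my_agent
version: "1.0"
nodes:
  - id: user
    type: user
    trust_boundary: external
  - id: llm
    type: llm
    trust_boundary: internal
  - id: tool
    type: tool
    trust_boundary: internal
edges:
  - from: user
    to: llm
    data_flow: api
  - from: llm
    to: tool
    data_flow: tool_call
```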
Python API
from tessera import Tesseract, OutputFormat
# Simple usage
scanner = Tesseract()
result = scanner.scan("my_agent.yaml", OutputFormat.TEXT)
# JSON output
result = scanner.scan("my_agent.yaml", OutputFormat.JSON)
print(f"Found {result['summary']['total']} vulnerabilities")
# HTML report
result = scanner.scan("my_agent.yaml", OutputFormat.HTML)
with open("report.html", "w") as f:
    f.write(result)
# With LLM analysis (requires API key)
scanner.enable_llm({"provider": "openai"})
result = scanner.scan("my_agent.yaml", OutputFormat.JSON, llm_enabled=True)
Programmatic Topology
from tessera import Graph, Node, Edge, TrustBoundary, DataFlow, detect
graph = Graph(
    system="my_agent",
    nodes={
        "user": Node(id="user", type="user", trust_boundary=TrustBoundary.EXTERNAL),
        "llm": Node(id="llm", type="llm", trust_boundary=TrustBoundary.INTERNAL),
        "tool": Node(id="tool", type="tool", trust_boundary=TrustBoundary.INTERNAL),
    },
    edges=[
        Edge(from_node="user", to_node="llm", data_flow=DataFlow.API, trust_boundary=TrustBoundary.EXTERNAL),
        Edge(from_node="llm", to_node="tool", data_flow=DataFlow.TOOL_CALL, trust_boundary=TrustBoundary.INTERNAL),
    ],
)
findings = detect(graph)
for finding in findings:
    print(f"[{finding.severity.value.upper()}] {finding.id}: {finding.description}")
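Under the hood, chain rules like CFPE-0001 amount to reachability checks over this graph. The sketch below is illustrative only — it is not TESSERA's implementation, and the node/edge shapes are deliberately simplified — but it shows the kind of traversal involved: can any node of one type reach a node of another type?

```python
# Illustrative sketch of a chain rule as graph reachability.
# NOT TESSERA's actual implementation; shapes are simplified.
from collections import deque

def has_chain(node_types, edges, source_type, sink_type):
    """Return True if any node of source_type can reach a node of sink_type.

    node_types: dict of node id -> type string (e.g. "rag_corpus")
    edges: list of (from_id, to_id) pairs
    """
    adjacency = {}
    for src, dst in edges:
        adjacency.setdefault(src, []).append(dst)
    for start, start_type in node_types.items():
        if start_type != source_type:
            continue
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            if node_types.get(node) == sink_type:
                return True
            for nxt in adjacency.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

node_types = {"llm": "llm", "kb": "rag_corpus", "shell": "tool"}
edges = [("llm", "kb"), ("kb", "shell")]
print(has_chain(node_types, edges, "rag_corpus", "tool"))  # True
```

Real rules additionally weigh trust boundaries and data-flow kinds on each edge, which is why the API above attaches those attributes to every `Edge`.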
Output Formats
Text (CLI default)
TESSERA Security Scan
========================================
System: my_agent
Version: 1.0
Graph: 3 nodes, 2 edges
Scan time: 0.05ms
Summary:
HIGH: 1
Findings:
1. [HIGH] CFPE-0001
RAG to Tool execution chain detected
Remediation:
1. Validate RAG outputs before tool execution
...
JSON
{
  "tessera_version": "2.0.0",
  "scan": {
    "system": "my_agent",
    "version": "1.0",
    "scan_time_ms": 0.05,
    "graph": {"nodes": 3, "edges": 2}
  },
  "findings": [...],
  "summary": {"total": 1, "by_severity": {"critical": 0, "high": 1}}
}
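Downstream tooling can consume this JSON directly. For example, a CI gate might fail the build whenever critical- or high-severity findings are present — a sketch assuming only the `summary.by_severity` shape shown above:

```python
# Sketch: turn the documented JSON summary into a CI exit code.
import json

def gate(report_json: str, blocking_levels=("critical", "high")) -> int:
    """Return 1 if any finding at a blocking severity exists, else 0."""
    report = json.loads(report_json)
    by_severity = report["summary"]["by_severity"]
    return 1 if any(by_severity.get(level, 0) for level in blocking_levels) else 0

sample = '{"summary": {"total": 1, "by_severity": {"critical": 0, "high": 1}}}'
print(gate(sample))  # 1
```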
SARIF (GitHub Code Scanning)
tessera scan --config my_agent.yaml --format sarif --output results.sarif
Results appear in GitHub Security tab under "Code Scanning".
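To get results into that tab, upload the SARIF file with GitHub's standard upload action. A sketch (the scan step mirrors the command above; the job also needs `security-events: write` permission):

```yaml
- run: tessera scan --config my_agent.yaml --format sarif --output results.sarif
- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: results.sarif
```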
HTML
Generate beautiful HTML reports:
tessera scan --config my_agent.yaml --format html --output report.html
CI/CD Integration
GitHub Actions
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -e ".[dev,api]"
      - run: python -m ruff check src tests
      - run: python -m pytest -q
      - run: python -m build
      - run: python -m twine check dist/*
Pre-commit Hook
Add to .pre-commit-config.yaml:
repos:
  - repo: https://github.com/Devaretanmay/TESSERA
    rev: v2.0.0
    hooks:
      - id: tessera-scan
LLM Analysis (Optional)
TESSERA supports optional LLM-powered analysis for deeper semantic understanding:
# OpenAI
scanner.enable_llm({"provider": "openai", "model": "gpt-4"})
# Anthropic
scanner.enable_llm({"provider": "anthropic", "model": "claude-3-opus"})
# Ollama (local)
scanner.enable_llm({"provider": "ollama", "model": "llama2"})
Set environment variables:
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
CFPE Patterns (10 rules)
| ID | Pattern | Severity | Description |
|---|---|---|---|
| CFPE-0001 | RAG to Tool | HIGH | LLM → RAG → Tool chain |
| CFPE-0002 | Memory Poisoning | CRITICAL | Write to persistent memory |
| CFPE-0003 | External to Database | HIGH | Untrusted → database |
| CFPE-0004 | Trust Boundary Bypass | HIGH | Cross-boundary untrusted flow |
| CFPE-0005 | Multi-hop Attack Chain | HIGH | 3+ edge attack path |
| CFPE-0006 | Tool to Tool Chaining | MEDIUM | Tool calls tool |
| CFPE-0007 | Sensitive Data Exfiltration | CRITICAL | LLM → external service |
| CFPE-0008 | RAG Context Injection | HIGH | User → RAG injection |
| CFPE-0009 | MCP Config Attack | HIGH | Malicious MCP server |
| CFPE-0010 | Agent Skill Injection | HIGH | SKILL.md compromise |
Node Types
- user - Human input source
- llm / model - Language model
- api - API gateway
- tool - External tool/service
- database - Database
- memory_store - Persistent memory
- rag_corpus - Knowledge base (RAG)
- external_service - External API/service
- mcp_server - MCP server
- skill - Agent skill definition
Trust Boundaries
external → user_controlled → partially_trusted → internal → privileged
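Since the boundaries form a strict ordering, a natural way to model them is an ordered enum, so a rule can flag data flowing from a less-trusted into a more-trusted zone. Illustrative only — TESSERA's internal representation may differ:

```python
# Sketch: trust boundaries as an ordered enum. Illustrative, not TESSERA's code.
from enum import IntEnum

class Trust(IntEnum):
    EXTERNAL = 0
    USER_CONTROLLED = 1
    PARTIALLY_TRUSTED = 2
    INTERNAL = 3
    PRIVILEGED = 4

def crosses_boundary_upward(src: Trust, dst: Trust) -> bool:
    """True when less-trusted data flows into a more-trusted zone."""
    return src < dst

print(crosses_boundary_upward(Trust.EXTERNAL, Trust.INTERNAL))  # True
```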
Architecture
src/tessera/
├── core/ # Domain logic
│ ├── topology/ # Graph models (Graph, Node, Edge)
│ ├── detection/ # CFPE rules (10 patterns)
│ └── findings/ # Finding models
├── engine/ # Scanner engine (Tesseract)
├── infra/
│ ├── output/ # Formatters (JSON, SARIF, Text, HTML)
│ ├── llm/ # LLM providers (OpenAI, Anthropic, Ollama)
│ └── mcp/ # MCP server
└── interfaces/
└── cli/ # CLI commands
Production Docs
- Deployment Guide
- Environment Reference
- Release Playbook
- Support Matrix
- Incident Runbook
- Security Policy
Testing
Run examples:
# All examples
tessera scan --config examples/*.yaml
# Specific example
tessera scan --config examples/complex_agent.yaml --format json
Environment Variables
| Variable | Description |
|---|---|
| OPENAI_API_KEY | OpenAI API key for LLM analysis |
| ANTHROPIC_API_KEY | Anthropic API key for LLM analysis |
License
MIT
GitHub