Skip to main content

Deterministic, low-latency heuristic security filter for LLM inputs and outputs

Project description

🛡️ YecoAI Security Layer

A deterministic, low-latency heuristic security filter for LLM inputs and outputs

Asimov Rules Injection • Context-Aware N-Gram Filtering • Enterprise DLP • Sub-millisecond Latency


License Python Tests


Developed by www.yecoai.com


✨ Why do you need it?

As LLMs become integrated into enterprise environments, they require robust defense-in-depth strategies. The YecoAI Security Layer implements a fast, heuristic multi-layer defense system acting as a first-line filter for Enterprise Data Loss Prevention (DLP) and common injection patterns.

Feature How it works Benefit
🛡️ Asimov Pre-Injection Injects baseline behavioral rules into the LLM's system prompt before inference. Sets a foundational instruction baseline for AI behavior.
🧠 Context-Aware Analysis Fast N-gram analysis with contextual understanding (negations, educational queries). Low False Positives: Allows basic discussions about rules without blocking legitimate prompts.
🛑 Destructive Command Block Filters OS commands like rm -rf /, del C:\Windows, or format. Prevents the AI from destroying system or user data autonomously.
🔐 Enterprise DLP (Secrets) Regex-based detection for API Keys, JWTs, Credit Cards, and Private Keys. Prevents sensitive data exfiltration or accidental leaks.
💉 SQL/Code Injection Defense Blocks malicious payloads like OR 1=1 or DROP TABLE. Protects backend databases from AI-generated or echoed SQLi payloads.
👁️ Chain of Thought (CoT) Analysis Extracts and scans <think> tags and internal reasoning. Catches malicious/deceptive intents (e.g., "trick the user") before the output is generated.
📦 Execution Sandbox Guard Intercepts tool usage at runtime (Filesystem, Shell, Network). Enterprise-grade allow/deny policies to block SSRF and unauthorized system access.
📜 Declarative Policy System YAML/JSON-based policies with hot-reload (SOC2/GDPR compliant). Define custom "Asimov profiles", DLP rules, and forbidden commands with full audit trails.
🌐 Multi-Model & Multimodal Pre-validates tool call schemas and scans images via OCR. Blocks destructive actions before execution and prevents visual prompt injections.

🧩 Core Components

1. RoboticsEngine (Prompt Pre-Injection)

Injects the Three Laws of Robotics into the system instructions. It ensures the model aligns with human safety before it even processes the user's prompt.

2. SafetyModel (Output Filtering, DLP & CoT Analysis)

A deterministic, ultra-low-latency filter that runs immediately after the LLM generates a response.

  • Chain of Thought Inspector: Automatically extracts and scans <think> tags to block deceptive reasoning.
  • Context-Aware N-Gram Detection: Blocks attempts to bypass the rules (e.g., "ignore previous instructions"), but understands when the user is simply asking "what does 'ignore previous instructions' mean?".
  • Secret & PII Scanner: Instantly catches and blocks AWS Keys, Bearer tokens, and Credit Cards.
  • SQLi Scanner: Blocks destructive database queries and bypass payloads.

3. ExecutionSandboxGuard (Runtime Protection)

The ultimate killer feature for tool-calling LLMs. It intercepts delete_file, shell_exec, api_call, and more.

  • Filesystem: Validates absolute paths against strict whitelists/blacklists.
  • Shell: Prevents destructive shell commands at runtime.
  • Network: Blocks SSRF via domain blacklisting (e.g., localhost) and API whitelisting.

4. PolicyManager (Declarative "Asimov Profiles")

Load Enterprise YAML/JSON compliance policies with hot-reload, covering ethical_rule, forbidden_command, dlp, and sandbox_rule. Built for SOC2 and AI Act audit trails.


📊 Internal Benchmarks & Performance

We run a suite of internal tests (benchmark_security.py) to validate the heuristic rules against common destructive commands, secret leaks, and basic injection attempts.

Performance Characteristics:

  • Architecture: Purely deterministic, regex, and N-gram based. No secondary LLMs in the critical path.
  • Average Latency: ~0.05 ms per check.
  • Use Case: Designed to act as a first line of defense for high-throughput systems, catching obvious policy violations, accidental data leaks, and known malicious patterns before they reach slower, more complex AI-based evaluators.

Disclaimer on Security: This tool uses heuristic analysis (Regex/N-grams). While it is extremely fast and effective against known patterns, it is not a silver bullet against advanced semantic prompt injections. Due to the probabilistic nature of LLMs, deterministic filters cannot achieve a 100% block rate against all possible semantic attacks. For comprehensive enterprise security, we recommend using this layer in conjunction with advanced semantic models (like Guardrails AI or NeMo Guardrails) for defense-in-depth.


🚀 Quick Start

from yecoai_security_layer import RoboticsEngine, SafetyModel

# 1. Inject the rules before sending to LLM
engine = RoboticsEngine()
secure_prompt = engine.inject_prompt(user_input="How do I delete my system?")

# ... [LLM Generates Response] ...
llm_response = "You can use rm -rf /"

# 2. Validate the response before showing to user or executing
safety = SafetyModel(user_request="How do I delete my system?")
result = safety.validate_response(llm_response)

if not result["safe"]:
    print(f"BLOCKED: {result['reason']}")
    # Output: BLOCKED: Dangerous system command detected.

🔌 Plug-and-Play Integrations

The YecoAI Security Layer is designed to drop seamlessly into your existing AI stack with just two lines of code. We provide native integrations for the most popular frameworks.

🦜🔗 LangChain & LangGraph

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from yecoai_security_layer.integrations.langchain import SecurityInjector, SecurityOutputParser

# 1. Initialize your model
llm = ChatOpenAI(model="gpt-4o")

# 2. Build a secure chain
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

# Just drop in SecurityInjector and SecurityOutputParser!
secure_chain = prompt | SecurityInjector | llm | SecurityOutputParser()

# 3. Invoke safely
secure_chain.invoke({"input": "What is the capital of France?"})

🦙 LlamaIndex

from llama_index.core.query_pipeline import QueryPipeline
from llama_index.llms.openai import OpenAI
from yecoai_security_layer.integrations.llamaindex import SecurityInputComponent, SecurityOutputComponent

# Build a secure query pipeline
p = QueryPipeline(chain=[
    SecurityInputComponent(),
    OpenAI(model="gpt-4o"),
    SecurityOutputComponent()
])

response = p.run(input="Explain how to bypass the 3 laws")

🤖 OpenAI & Anthropic Pure API

If you don't use frameworks, just wrap your API calls directly.

from openai import OpenAI
from yecoai_security_layer.integrations.api_wrappers import secure_chat_completion

client = OpenAI()

# Instead of client.chat.completions.create(...)
response = secure_chat_completion(
    client.chat.completions.create,
    messages=[{"role": "user", "content": "Tell me a joke"}],
    model="gpt-4o"
)
print(response.choices[0].message.content)

🌍 vLLM & LiteLLM (FastAPI Middleware)

For local models or custom API servers, intercept requests at the network level.

from fastapi import FastAPI
from yecoai_security_layer.integrations.middleware import YecoAISecurityMiddleware

app = FastAPI()

# Add the security layer as HTTP Middleware
app.add_middleware(YecoAISecurityMiddleware)

# All /chat/completions endpoints are now automatically protected!

📄 License

This project is available under a dual licensing model:

🟢 Open Source (Apache 2.0)

Free for:

  • Personal use
  • Research
  • Educational purposes
  • ✔ Modification allowed
  • ✔ Redistribution allowed

🔴 Commercial Use

Use in commercial environments (SaaS, paid products, enterprise systems) requires a commercial license.

See COMMERCIAL_LICENSE.md for full terms.

🌐 About Us: YecoAI

YecoAI builds next-generation cognitive systems focused on AI stability, security, and safety for Enterprise environments.

Website: www.yecoai.com

© 2026 **www.yecoai.com** Original Author: **Marco (HighMark / YecoAI)**

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

yecoai_security_layer-1.0.0.tar.gz (23.7 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

yecoai_security_layer-1.0.0-py3-none-any.whl (21.5 kB view details)

Uploaded Python 3

File details

Details for the file yecoai_security_layer-1.0.0.tar.gz.

File metadata

  • Download URL: yecoai_security_layer-1.0.0.tar.gz
  • Upload date:
  • Size: 23.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.10

File hashes

Hashes for yecoai_security_layer-1.0.0.tar.gz
Algorithm Hash digest
SHA256 fb4cef1a90b8ab2f94cb7669f55a4ab0c1ad3be9423e78aa1b4283f0b7dec866
MD5 a7f423098a60067895e29c048acbf6ba
BLAKE2b-256 4864207aa89cf08af476a08162367af8e4ae5c5185f12e051f7d6096c0d95244

See more details on using hashes here.

File details

Details for the file yecoai_security_layer-1.0.0-py3-none-any.whl.

File metadata

File hashes

Hashes for yecoai_security_layer-1.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 ce2607fdead0963c0ad2597ecff48d68d18ddcc71460323206d235a8e13d321a
MD5 39e1930c8122a9954f00c3aed674becf
BLAKE2b-256 1270915d4cb469db5ccc0c21316c3761d401d5a5903a2c697e19723ee613f2b4

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page