
Lightweight cognitive protection layer for LLM systems


🧠 YecoAI Cognitive Layer

LLMs fail silently. This layer doesn’t.

Anti-loop • Amnesia detection • Semantic stability





Developed by www.yecoai.com


✨ Why do you need it?

When LLMs enter production, they fail in ways that traditional monitoring misses. They loop, they forget context (Amnesia), and they degrade into "word salad".

| Critical Failure | What happens? | How we fix it |
| --- | --- | --- |
| 🔄 Infinite Loops | Model repeats the same token or phrase forever. | Loop Guard detects structural and n-gram repetitions. |
| 😶 Context Amnesia | Model ignores initial instructions or shifts topic. | Keyword Persistence monitors prompt-to-output alignment. |
| 📉 Semantic Drift | Output becomes nonsensical or loses coherence. | Stability Metrics evaluate entropy and word distribution. |
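To make the "n-gram repetitions" idea concrete, here is a minimal, self-contained sketch of a repetition score. This is an illustration of the general technique, not the library's actual Loop Guard implementation; the function name is hypothetical.

```python
from collections import Counter

def ngram_repetition_score(text: str, n: int = 3) -> float:
    """Fraction of n-grams that duplicate an earlier n-gram (0.0 = no repeats)."""
    tokens = text.split()
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    # Every occurrence beyond the first counts as a repetition.
    repeated = sum(c - 1 for c in counts.values())
    return repeated / len(ngrams)

print(ngram_repetition_score("the cat sat on the mat"))   # → 0.0
print(ngram_repetition_score(" ".join(["again"] * 10)))   # → 0.875
```

A real detector would combine several window sizes and structural checks, but even this toy score separates looping output from normal prose by a wide margin.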

📊 Benchmarks & Methodology (v1.0.0)

Engineering Note: These results are derived from our Robust 25-Case Stress Test, specifically designed to simulate production failure modes that traditional LLM monitors often miss.

Methodology

  • Stress Dataset: 25 curated edge cases (Loops, Amnesia, Style Drift, False Positive stress tests).
  • Execution: Pure Python deterministic evaluation (no LLM-calling-LLM overhead).
  • Objective: High-precision detection of catastrophic failures with negligible latency.
| Metric | Result | Context |
| --- | --- | --- |
| Total Accuracy | 96.00% | 24/25 edge cases correctly identified. |
| Loop Detection (F1) | 1.00 | Zero false negatives on token/phrase loops. |
| Normal (F1) | 0.96 | Valid creative output passes through with few false alarms. |
| Amnesia (F1) | 0.92 | Detects context loss within ~0.5 ms. |
| Average RAM | 24.95 MB | Minimal footprint for edge/container deployments. |
| Latency (avg) | 1.76 ms | Real-time protection without user-perceived delay. |
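Latency figures like these are easy to sanity-check on your own hardware. A minimal timing harness (a sketch; `measure_latency_ms` is a hypothetical helper, and you would pass the layer's own feature-extraction call as `fn`):

```python
import time

def measure_latency_ms(fn, text: str, runs: int = 100) -> float:
    """Average wall-clock latency of fn(text) over `runs` calls, in milliseconds."""
    fn(text)  # warm-up call, excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        fn(text)
    return (time.perf_counter() - start) * 1000.0 / runs
```

For example, `measure_latency_ms(lambda t: engine.extract_features(t), sample_text)` would time the feature stage against a representative output.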

🧩 Core Capabilities

  • 🔁 Multi-level Loop Detector: analyzes structural patterns, n-grams, and burstiness (irregular repetitions).
  • 🧠 Amnesia Detection: monitors contextual continuity and semantic coherence using Keyword Persistence Tracking.
  • 🧯 Semantic Stability Guard: prevents meaning collapse and nonsensical output using advanced "word salad" metrics.
  • Performance Edge: average RAM usage of only 24.95 MB; optimized for edge deployment.
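The "Keyword Persistence" idea can be sketched in a few lines: take the salient words of the prompt and check how many survive into the output. This is an illustrative stand-in, not the library's tracker; the function name and stopword list are assumptions.

```python
import re

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "about"}

def keyword_persistence(prompt: str, output: str, top_k: int = 10) -> float:
    """Share of salient prompt keywords that reappear in the output (0.0-1.0)."""
    words = [w for w in re.findall(r"[a-z']+", prompt.lower()) if w not in STOPWORDS]
    keywords = list(dict.fromkeys(words))[:top_k]  # unique, order-preserving
    if not keywords:
        return 1.0  # nothing to track; treat as persistent
    out_words = set(re.findall(r"[a-z']+", output.lower()))
    return sum(k in out_words for k in keywords) / len(keywords)

print(keyword_persistence("Write a story about a brave dragon",
                          "Once upon a time, a brave dragon guarded a story."))  # → 0.75
```

A persistence score that drops near zero mid-conversation is exactly the "Amnesia" signal the layer looks for.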

🚀 Practical Examples

1. Installation

pip install yecoai-cognitive-layer

2. Protecting an LLM Chatbot (Production Pattern)

This example shows how to use the layer as a "Validator" for a standard LLM response.

from yecoai_cognitive_layer import FeatureEngine, CognitiveModel

# 1. Setup the guards
engine = FeatureEngine()
model = CognitiveModel.load_from_json("weights.json")

def get_safe_llm_response(prompt):
    # Placeholder: replace with your real LLM call (OpenAI, Anthropic, or a local Llama)
    llm_output = call_your_llm_api(prompt)
    
    # 2. Cognitive Validation
    vector, features = engine.extract_features(llm_output)
    prediction, scores = model.predict(vector, features)
    
    # 3. Decision Logic
    if prediction == "Loop":
        # If the LLM starts repeating itself, we trigger a retry or a fallback
        return "⚠️ [System Blocked a Loop] Please rephrase your request."
    
    if prediction == "Amnesia" or features['semantic_coherence'] < 0.25:
        # If the response is nonsensical or context is lost
        return "🧠 [Context Loss Detected] I'm having trouble following. Let's restart."

    return llm_output

# Usage
print(get_safe_llm_response("Write a long story about..."))
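In production you usually want to retry before surfacing a canned message. A generic retry wrapper (a sketch, not part of the library: `generate` and `validate` are hypothetical caller-supplied callables; `validate` could, for instance, check that the cognitive model's prediction is "Normal"):

```python
def get_validated_response(prompt, generate, validate, max_retries=2):
    """Regenerate until `validate` accepts the output or the retry budget runs out."""
    for _ in range(max_retries + 1):
        output = generate(prompt)
        if validate(output):
            return output
    # All attempts failed validation; fall back to a safe message.
    return "⚠️ Unable to produce a stable response. Please rephrase."
```

Wired into the example above, `validate` would be something like `lambda text: model.predict(*engine.extract_features(text))[0] == "Normal"`.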

3. Agent Self-Correction Loop

For autonomous agents, you can use the layer to detect when the agent is "stuck" in a reasoning loop before it consumes too many tokens.

# `engine` is the FeatureEngine instance from the previous example;
# `agent` and `agent_running` are placeholders for your agent framework.
agent_history = []

while agent_running:
    action = agent.think()
    
    _, features = engine.extract_features(action)
    
    if features['repetition_score'] > 0.7 or features['struct_loop_flag'] > 0.5:
        print("🚨 Agent Loop Detected! Injecting 'Break Loop' instruction.")
        agent.inject_system_message("You are repeating yourself. Stop and try a different approach.")
        continue
        
    agent.execute(action)
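If you do not want to extract features on every step, a cheaper complementary check is a sliding window over recent actions. This is a generic sketch (hypothetical helper, not a library API):

```python
from collections import deque

def make_stall_detector(window: int = 5, threshold: int = 3):
    """Flag when one action appears `threshold`+ times in the last `window` steps."""
    history = deque(maxlen=window)

    def check(action: str) -> bool:
        history.append(action)
        return history.count(action) >= threshold

    return check

check = make_stall_detector()
flags = [check(step) for step in ["search", "read", "search", "search", "search"]]
print(flags)  # → [False, False, False, True, True]
```

Pairing a coarse check like this with the layer's semantic features catches both verbatim stalls and paraphrased ones.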

🤖 Supported Models & Ecosystems

The layer is agnostic and works with any text-generating system:

  • Proprietary: OpenAI (GPT-5.4), Anthropic (Claude 4.7), Google (Gemini 3.1).
  • Open Source: Llama 3 (8B/70B), Mistral/Mixtral, Phi-3, Qwen 3.
  • Local: Ollama, LM Studio, vLLM.
  • Agents: CrewAI, AutoGPT, Microsoft AutoGen.

🏗️ System Architecture

graph TD
    A[LLM Output] --> B[Feature Engine]
    B --> C{Cognitive Model}
    C -->|Normal| D[Validated Output]
    C -->|Loop| E[Block/Regenerate]
    C -->|Amnesia| F[Context Reset]
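The routing step in the diagram above can be sketched as a small dispatch table. The handlers here are illustrative stand-ins for your own block/regenerate and context-reset logic, not part of the library's API:

```python
def block_loop(text: str) -> str:
    return "⚠️ [System Blocked a Loop] Please rephrase your request."

def reset_context(text: str) -> str:
    return "🧠 [Context Loss Detected] Let's restart."

def pass_through(text: str) -> str:
    return text

# One handler per predicted label; unknown labels fall back to pass-through.
HANDLERS = {"Loop": block_loop, "Amnesia": reset_context, "Normal": pass_through}

def route(prediction: str, llm_output: str) -> str:
    return HANDLERS.get(prediction, pass_through)(llm_output)

print(route("Normal", "All good."))  # → All good.
```

Keeping the decision logic in one table makes it easy to add labels (or swap a handler for a regeneration call) without touching the validation path.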

📄 License

This project is available under a dual licensing model:

🟢 Open Source (Apache 2.0)

Free for:

  • Personal use
  • Research
  • Educational purposes

✔ Modification allowed
✔ Redistribution allowed

🔴 Commercial Use

Use in commercial environments (SaaS, paid products, enterprise systems) requires a commercial license.

See: COMMERCIAL_LICENSE.md


🌐 About Us: YecoAI

YecoAI builds next-generation cognitive systems focused on AI stability and safety.

Website: www.yecoai.com | Discord: Join Community

© 2026 www.yecoai.com
Original Author: Marco (HighMark / YecoAI)

Download files

Download the file for your platform.

Source Distribution

yecoai_cognitive_layer-1.0.0.tar.gz (37.0 kB)

Uploaded Source

Built Distribution


yecoai_cognitive_layer-1.0.0-py3-none-any.whl (33.1 kB)

Uploaded Python 3

File details

Details for the file yecoai_cognitive_layer-1.0.0.tar.gz.

File metadata

  • Download URL: yecoai_cognitive_layer-1.0.0.tar.gz
  • Size: 37.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.10

File hashes

Hashes for yecoai_cognitive_layer-1.0.0.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 379130c4e0a2bfffc5621262dfa71e98f2d25eb3f2fb338d3f0d406b9e7d2912 |
| MD5 | 4b02604c252b03633712965ff09a7f4f |
| BLAKE2b-256 | 282e1370738b079a03b89461f3b743e7dbde262ac74e684b8585dd2a78e7cf11 |


File details

Details for the file yecoai_cognitive_layer-1.0.0-py3-none-any.whl.

File hashes

Hashes for yecoai_cognitive_layer-1.0.0-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 5bd66003bd4c20f28c4831f3e9b2243a047c191b44d02d82f742d7be2e7fbb2d |
| MD5 | 8572b818d1c6796f232c377bcd1b8dc0 |
| BLAKE2b-256 | 7728045abac54a03cdf46e36e812942b179ae39773ea2d8e99d5acd0eda202c9 |

