AI Prompt Engineer Agent — prompt preprocessing, security screening, and context enrichment for LLM systems

Project description

AIPEA — AI Prompt Engineer Agent

Python 3.11+ · MIT License

A standalone Python library for prompt preprocessing, security screening, query analysis, and context enrichment for LLM systems. Extracted from Agora IV production (v4.1.49).

Security: Report vulnerabilities privately via GitHub Security Advisories or email security@undercurrentholdings.com. See SECURITY.md for scope, response SLAs, and honest framing of what AIPEA's compliance modes do and do not enforce.

Architecture

AIPEA processes prompts through a multi-stage pipeline:

User Query → SecurityScanner → QueryAnalyzer → SearchOrchestrator → PromptEngine → Enhanced Prompt
                  │                  │                   │                  │
             PII/PHI scan       Tier routing        Context fetch     Model-specific
             Classification     Domain detect       Knowledge base    prompt formatting
             Injection guard    Complexity score    MCP providers     Tier processing
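Conceptually, the pipeline is a chain of stages that each transform the query before handing off to the next. A minimal sketch with stage behavior stubbed out (the real stages exchange rich objects, not strings):

```python
# Illustrative staging of the pipeline as a chain of transforms.
# Each stub stands in for the module named in the diagram.
def security_scan(q): return q                        # flag PII/PHI, injection attempts
def analyze(q): return q                              # score complexity, pick tier
def fetch_context(q): return q + " [context]"         # search / knowledge base lookup
def format_prompt(q): return f"<prompt>{q}</prompt>"  # model-specific formatting

STAGES = [security_scan, analyze, fetch_context, format_prompt]

def run_pipeline(query: str) -> str:
    """Run the query through every stage in order."""
    for stage in STAGES:
        query = stage(query)
    return query
```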

Core Modules

Module Purpose
security PII/PHI detection, classification markers, injection prevention, compliance modes
analyzer Query complexity scoring, domain detection, temporal awareness, tier routing
search Multi-provider search orchestration (Exa, Firecrawl, Context7)
knowledge Offline knowledge base with SQLite storage and domain-aware retrieval
engine Model-specific prompt formatting, tier-based processing (Offline/Tactical/Strategic)
enhancer High-level facade coordinating the full pipeline

Processing Tiers

Tier Latency Use Case
Offline <2s Air-gapped, classified, simple queries
Tactical 2-5s Standard queries with search context
Strategic 5-15s Complex research, multi-source synthesis
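As a rough picture of how a caller might route between these tiers, one can threshold the analyzer's complexity score. The thresholds and the `Analysis` stand-in below are hypothetical illustrations, not AIPEA's actual routing logic, which weighs more signals (domain, security level, provider availability):

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only.
OFFLINE_MAX = 0.3
TACTICAL_MAX = 0.7

@dataclass
class Analysis:
    complexity: float          # 0.0 (trivial) to 1.0 (research-grade)
    needs_current_info: bool   # query references recent events or dates

def route_tier(analysis: Analysis) -> str:
    """Map an analysis to a processing tier name."""
    if analysis.complexity <= OFFLINE_MAX and not analysis.needs_current_info:
        return "offline"       # <2s, no network needed
    if analysis.complexity <= TACTICAL_MAX:
        return "tactical"      # 2-5s, single search pass
    return "strategic"         # 5-15s, multi-source synthesis
```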

Installation

# Library only (no CLI)
pip install aipea

# With CLI tools (adds Typer + Rich)
pip install "aipea[cli]"

# From source (development)
pip install -e ".[dev]"

Getting Started

AIPEA works out of the box with zero configuration. API keys and Ollama are entirely optional — they unlock richer enhancement when available.

Path 1: Minimal (no setup needed)

pip install aipea

import asyncio
from aipea import enhance_prompt

result = asyncio.run(enhance_prompt("What is quantum computing?", model_id="gpt-4"))
# Works immediately with template-based enhancement — no API keys required

Path 2: With Search Providers (real-time web context)

Exa provides AI-powered web search; Firecrawl provides structured web content retrieval. Both offer free tiers.

pip install "aipea[cli]"
aipea configure          # interactive wizard — press Enter to skip any key

Path 3: With Ollama (local LLM enhancement)

Ollama runs open-source LLMs locally for richer offline enhancement. AIPEA auto-detects it when available.

# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a lightweight model
ollama pull gemma3:1b

# Populate the offline knowledge base
aipea seed-kb

Run aipea doctor at any time to see what capabilities are active and what you can add.

Configuration

All API keys are optional. AIPEA can be configured via environment variables, a .env file, or ~/.aipea/config.toml. Priority: env vars > .env > global TOML > defaults.

# Interactive setup wizard (requires [cli] extra)
aipea configure

# Save to global config instead of project .env
aipea configure --global

# Check current configuration
aipea check

# Full diagnostic report
aipea doctor

Or configure manually with environment variables:

export EXA_API_KEY="your-exa-key"
export FIRECRAWL_API_KEY="your-firecrawl-key"
export AIPEA_HTTP_TIMEOUT=30  # seconds (optional)

Or create a .env file in your project root:

EXA_API_KEY="your-exa-key"
FIRECRAWL_API_KEY="your-firecrawl-key"

Usage

Quick Start — enhance_prompt

The simplest way to use AIPEA is through the enhance_prompt facade:

import asyncio
from aipea import enhance_prompt

async def main():
    result = await enhance_prompt(
        "What are the latest advances in transformer architectures?",
        model_id="gpt-5.2",
    )
    print(result.enhanced_prompt)
    print(f"Processing tier: {result.processing_tier}")
    print(f"Security context: {result.security_context.security_level}")

asyncio.run(main())

Security Scanning

from aipea import SecurityScanner, SecurityContext, SecurityLevel, ComplianceMode

scanner = SecurityScanner()
context = SecurityContext(
    security_level=SecurityLevel.UNCLASSIFIED,
    compliance_mode=ComplianceMode.HIPAA,
)

# Scan for PII, PHI, classification markers, and injection attempts
result = scanner.scan("Patient John Doe, SSN 123-45-6789, diagnosed with...", context)
print(result.has_pii())       # True
print(result.has_phi())       # True
print(result.is_blocked)      # False (PII is flagged, not blocked)
print(result.flags)           # ["pii_detected:ssn", "phi_detected:diagnosis", ...]
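Conceptually, the flag-don't-block behavior resembles pattern screening like the sketch below. The two patterns are simplified stand-ins for AIPEA's much broader PII/PHI coverage:

```python
import re

# Simplified illustrative patterns; real scanners cover many more categories.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_flags(text: str) -> list[str]:
    """Return a flag per PII category found; never blocks, only flags."""
    return [f"pii_detected:{name}" for name, pat in PATTERNS.items() if pat.search(text)]
```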

Query Analysis

from aipea import QueryAnalyzer

analyzer = QueryAnalyzer()
analysis = analyzer.analyze("Compare CRISPR-Cas9 efficiency across cell types in 2026 studies")

print(analysis.query_type)          # QueryType.RESEARCH
print(analysis.suggested_tier)      # ProcessingTier.STRATEGIC
print(analysis.complexity)          # 0.85
print(analysis.needs_current_info)  # True (detected temporal reference)
print(analysis.domain_indicators)   # ["biology", "genetics"]

Offline Knowledge Base

import asyncio
from aipea import OfflineKnowledgeBase, StorageTier, KnowledgeDomain

async def main():
    kb = OfflineKnowledgeBase("/path/to/knowledge.db", StorageTier.STANDARD)

    # Add domain knowledge
    await kb.add_knowledge(
        "Transformer attention mechanism computes Q*K^T/sqrt(d_k) for scaled dot-product attention.",
        domain=KnowledgeDomain.TECHNICAL,
    )

    # Search knowledge base
    results = await kb.search("attention mechanism", limit=5)
    for node in results.nodes:
        print(f"[{node.relevance_score:.2f}] {node.content[:100]}")

    kb.close()

asyncio.run(main())
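Underneath, a SQLite-backed store with full-text search is a natural fit for this kind of retrieval. The sketch below uses SQLite's FTS5 extension with an illustrative schema that is not AIPEA's actual format:

```python
import sqlite3

class TinyKB:
    """Minimal sketch of a domain-tagged knowledge store on SQLite FTS5."""

    def __init__(self, path: str = ":memory:") -> None:
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS kb USING fts5(content, domain)")

    def add(self, content: str, domain: str) -> None:
        self.db.execute("INSERT INTO kb (content, domain) VALUES (?, ?)", (content, domain))
        self.db.commit()

    def search(self, query: str, limit: int = 5) -> list[tuple[str, str]]:
        # bm25() ranks matches; lower scores rank better in SQLite's FTS5.
        cur = self.db.execute(
            "SELECT content, domain FROM kb WHERE kb MATCH ? ORDER BY bm25(kb) LIMIT ?",
            (query, limit),
        )
        return cur.fetchall()

    def close(self) -> None:
        self.db.close()
```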

Search Orchestration

import asyncio
from aipea import SearchOrchestrator

async def main():
    orchestrator = SearchOrchestrator()

    # Multi-provider search with strategy selection
    results = await orchestrator.search(
        "quantum error correction 2026",
        strategy="multi_source",
        num_results=10,
    )

    for result in results.results:
        print(f"[{result.source}] {result.title}: {result.url}")

asyncio.run(main())
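Multi-source orchestration amounts to fanning a query out to several providers concurrently and merging the batches. The providers below are stubs with simulated latency, not real Exa or Firecrawl clients:

```python
import asyncio

async def exa_search(query: str) -> list[str]:
    await asyncio.sleep(0.01)  # simulated network latency
    return [f"exa:{query}"]

async def firecrawl_search(query: str) -> list[str]:
    await asyncio.sleep(0.01)
    return [f"firecrawl:{query}"]

async def multi_source(query: str) -> list[str]:
    """Query all providers concurrently and flatten the result batches in provider order."""
    batches = await asyncio.gather(exa_search(query), firecrawl_search(query))
    return [hit for batch in batches for hit in batch]

results = asyncio.run(multi_source("quantum error correction"))
```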

Integration

AIPEA is designed as a standalone preprocessing layer for LLM systems. It integrates with:

  • AEGIS Governance — engineering standards & compliance SDK (pip install aegis-governance[aipea])
  • Agora IV — multi-model orchestration platform (uses AIPEA for prompt preprocessing)

Enterprise & Governance

AIPEA is free and open-source. For organizations that need full AI governance — risk registers, model cards, compliance auditing, and policy enforcement — see AEGIS, Undercurrent AI's governance platform.

AIPEA's compliance modes (HIPAA, TACTICAL) provide runtime security controls, with FEDRAMP support planned. AEGIS adds the organizational layer: audit trails, human oversight workflows, and regulatory reporting.

Learn more at undercurrentholdings.com.

Development

# Install with dev dependencies
pip install -e ".[dev]"

# Run all checks
make all        # format + lint + type check + tests

# Individual commands
make fmt         # Ruff format + auto-fix
make lint        # Ruff check + format check
make type        # mypy strict mode
make test        # pytest with coverage (75% minimum)
make sec         # Security-focused lint rules
make ci          # CI parity (lint + type + test, no autofix)

Testing

# Full test suite with coverage
pytest tests/ -v --cov=src/aipea --cov-report=term-missing

# Run specific module tests
pytest tests/test_security.py -v
pytest tests/test_analyzer.py -v
pytest tests/test_engine.py -v

License

MIT

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

aipea-1.3.3.tar.gz (408.0 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

aipea-1.3.3-py3-none-any.whl (99.0 kB)

Uploaded Python 3

File details

Details for the file aipea-1.3.3.tar.gz.

File metadata

  • Download URL: aipea-1.3.3.tar.gz
  • Upload date:
  • Size: 408.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aipea-1.3.3.tar.gz
Algorithm Hash digest
SHA256 ee597f51d0acee51ed33b5fe58214b1d716a25d55ffe85654211ecfb7229dc83
MD5 4ea26f1d45f5dd9937dfb5cdd4135217
BLAKE2b-256 95e680c504c4c88a5909e10deac638deace1b1395cca57b15f6dced2446da715

See more details on using hashes here.

Provenance

The following attestation bundles were made for aipea-1.3.3.tar.gz:

Publisher: publish.yml on undercurrentai/AIPEA

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file aipea-1.3.3-py3-none-any.whl.

File metadata

  • Download URL: aipea-1.3.3-py3-none-any.whl
  • Upload date:
  • Size: 99.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aipea-1.3.3-py3-none-any.whl
Algorithm Hash digest
SHA256 cf74d6ee9c4a599ffd0a1f2a84c87042ce7df464c8a2df05a07965e6c0da9560
MD5 1b1e978ca01c63aa3256842791ee3ce8
BLAKE2b-256 74acc4a06ada509d015c9bf2ce7ad682670d1d2813df6afeb3ee9b61d8e9ffec

See more details on using hashes here.

Provenance

The following attestation bundles were made for aipea-1.3.3-py3-none-any.whl:

Publisher: publish.yml on undercurrentai/AIPEA

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
