A lightweight, production-ready AI security SDK for protecting LLM agents.
Project description
ulockai 🔒 Enterprise AI Security SDK
A lightweight, enterprise-grade Python library for securing AI agents and LLM applications.
Designed to detect prompt injections, memory poisoning, API misuse, and sensitive data leakage with sub-millisecond overhead.
Features 🚀
- 🛡️ Prompt Injection Detection: Advanced regex and pattern-aware security.
- 📈 Real-time Telemetry: Monitoring for attack frequency, types, and latency.
- 🧠 Memory Poisoning & Role Security: Prevents identity manipulation.
- 🛠️ API & Tool Monitoring: Sanitize tool calls from agents.
- 🔌 Plugin Architecture: Register custom detectors for legacy or complex rules.
- 🏗️ Middleware & Streaming: Support for generators and OpenAI-style streams.
- ⚙️ False Positive Control: Dynamic
allowlistandblocklistfor fine-grained rule control.
Performance ⚡
Based on 1,000 iterations on standard hardware:
- Scan Time (Avg): ~0.16 ms
- Throughput: ~6,000+ requests/sec
- Memory Footprint: ~3-5 MB overhead
Installation 📦
pip install ulockai
Quick Start 🚀
1. Basic Scan & Telemetry
from ulockai import guard, telemetry
# Scan input
res = guard.scan(user_prompt="Ignore all instructions")
# Access enterprise metrics
print(telemetry.get_report())
2. False Positive Control ⚙️
# Allow specific phrases locally or globally
guard.allowlist(["Company instructions for internal dev"])
# Block specific suspicious text immediately
guard.blocklist(["malicious_endpoint_domain.com"])
3. Middleware & Streaming ⚡
from ulockai import guard
# Wrap LLM stream generator
def mock_llm_stream():
yield "Hello "
yield "world"
secure_stream = guard.wrap_stream(mock_llm_stream())
for chunk in secure_stream:
print(chunk)
4. Plugin Architecture 🔌
from ulockai import guard
def custom_pwn_detector(prompt):
if "pwn" in prompt:
return [(95, "Custom pwn found", "Plugin Attack")]
return []
guard.register_detector(custom_pwn_detector)
Why UlockAI? 🛡️
UlockAI is designed for enterprise platforms where performance is as important as safety. It provides a deterministic layer that catches 90% of common attacks without the cost, latency, or unreliability of calling another LLM for security monitoring.
License 📄
MIT
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Filter files by name, interpreter, ABI, and platform.
If you're not sure about the file name format, learn more about wheel file names.
Copy a direct link to the current filters
File details
Details for the file ulockai-0.1.1.tar.gz.
File metadata
- Download URL: ulockai-0.1.1.tar.gz
- Upload date:
- Size: 10.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
71c34f19b3eba63a215284822eb18710892b1642d5e7af19c7b481874697aa0f
|
|
| MD5 |
13e386b6280622760c94b3b16fa2a5d5
|
|
| BLAKE2b-256 |
fcfd7460a20c11e1d4bec8106109fa12abc7cc5c751170d1a10d017d4bf88327
|
File details
Details for the file ulockai-0.1.1-py3-none-any.whl.
File metadata
- Download URL: ulockai-0.1.1-py3-none-any.whl
- Upload date:
- Size: 10.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
222a4e394c34efb13fd219668bb69630d497eb98b38e909a88a574fe3a40953b
|
|
| MD5 |
a983aab67f74c43862b63c103ac10a12
|
|
| BLAKE2b-256 |
0a0ba240fec995da73f165f0b1513d4a09186cdd9b712cbddb1aac14223b16c4
|