llmshield
Lightweight validation, repair, and retry helpers for LLM outputs.
llmshield is a lightweight Python library for validating AI/LLM outputs before your app trusts them.
Goals
- Validate JSON and structured outputs
- Validate against Pydantic models
- Expose a small, focused core API
- Detect likely secrets and simple PII patterns
- Support custom validation rules
- Produce clear errors and retry prompts for LLM repair loops
- Repair malformed JSON on a best-effort basis
- Optionally validate against JSON Schema
- Provide hooks for direct LLM calls
Installation
python3 -m pip install llmshield-ai
The PyPI package is llmshield-ai; the Python import is llmshield.
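So after installing the llmshield-ai distribution, your code imports the short name:

import llmshield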
Quick start
from llmshield import Json, NoSecrets, validate

result = validate(
    '{"name": "Ada", "age": 32}',
    rules=[Json(), NoSecrets()],
)

if result.ok:
    print(result.value)  # parsed JSON when a JSON rule is used
else:
    print(result.errors)
    print(result.to_retry_prompt())
Pydantic schema validation
from pydantic import BaseModel

from llmshield import validate
from llmshield.rules import JsonRule, PydanticRule

class UserProfile(BaseModel):
    name: str
    age: int
    email: str

result = validate(
    '{"name": "Ada", "age": 32, "email": "ada@example.com"}',
    rules=[JsonRule(), PydanticRule(UserProfile)],
)

print(result.ok)
print(result.value)
JSON repair
from llmshield import validate
from llmshield.rules import JsonRule
result = validate('{name: "Ada", age: 32,}', rules=[JsonRule(repair=True)])  # unquoted key, trailing comma
print(result.ok)
print(result.value)
print(result.warnings)
JSON Schema validation
from llmshield import validate
from llmshield.rules import JsonRule, JsonSchemaRule
schema = {
    "type": "object",
    "required": ["name", "age"],
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
    },
}
result = validate('{"name": "Ada", "age": 32}', rules=[JsonRule(), JsonSchemaRule(schema)])
Install JSON Schema support with:
python3 -m pip install "llmshield-ai[jsonschema]"
Retry prompts
from llmshield import build_json_retry_prompt
if not result.ok:
    prompt = build_json_retry_prompt(result, schema=schema)
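Putting this together, a minimal repair loop might look like the sketch below. The call_llm stub, the schema, and the retry count are illustrative stand-ins, not llmshield API; validate, JsonRule, and build_json_retry_prompt are used exactly as shown above.

from llmshield import build_json_retry_prompt, validate
from llmshield.rules import JsonRule

schema = {"type": "object", "required": ["name"], "properties": {"name": {"type": "string"}}}

def call_llm(prompt):
    # Stand-in for a real provider call (OpenAI, Anthropic, Ollama, ...).
    return '{"name": "Ada"}'

prompt = "Return a JSON object with a name field."
for _ in range(3):
    result = validate(call_llm(prompt), rules=[JsonRule()])
    if result.ok:
        break
    # On failure, feed an error-aware correction prompt back to the model.
    prompt = build_json_retry_prompt(result, schema=schema)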
Streaming validation
from llmshield.experimental import StreamingTextValidator
stream = StreamingTextValidator(max_chars=1000, detect_secrets=True)

chunks = ["The capital of ", "France is Paris."]  # e.g. tokens from a streamed response
for chunk in chunks:
    result = stream.feed(chunk)
    if not result.ok:
        break
Semantic and judge rules
from llmshield.experimental import ContainsAnyRule, SimilarityRule, LLMJudgeRule
LLMJudgeRule accepts your own judge function, which can return a bool, a score, or a ValidationResult.
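As a minimal sketch, assuming the judge callable takes (value, context) like CustomRule below and that LLMJudgeRule accepts it as its first argument (check docs/usage.md for the exact signature):

from llmshield import validate
from llmshield.experimental import LLMJudgeRule

def judge(value, context):
    # A real judge would typically call an LLM; this stub returns a bool verdict.
    return "paris" in value.lower()

result = validate("The capital of France is Paris.", rules=[LLMJudgeRule(judge)])
print(result.ok)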
Custom rules
from llmshield import validate
from llmshield.rules import CustomRule
result = validate("hello", rules=[
CustomRule(lambda value, context: len(value) > 3, message="Output is too short")
])
Hook directly into LLM calls
You can wrap any LLM function so validation happens immediately after the model call. If validation fails, llmshield can retry with an error-aware correction prompt.
from llmshield import Guard, Json

def call_llm(prompt):
    # Replace this with OpenAI, Anthropic, Ollama, etc.
    return '{"answer": "Paris"}'

shield = Guard(rules=[Json()], max_retries=1)
result = shield.call(call_llm, prompt="Return JSON with an answer field.")

if result.ok:
    print(result.value)     # parsed/validated value
    print(result.response)  # original provider response
else:
    print(result.errors)
There are also provider-style wrappers:
from llmshield import Json
from llmshield.integrations.openai import OpenAIChatShield
shielded_chat = OpenAIChatShield(client, rules=[Json()], max_retries=1)
result = shielded_chat.create(model="gpt-4.1-mini", messages=[...])
See examples/14_hook_generic_llm_call.py, examples/15_hook_openai_style_client.py, and examples/16_hook_anthropic_style_client.py.
Detailed examples
See docs/usage.md for detailed explanations and examples/ for runnable code examples covering:
- text validation
- JSON validation and repair
- Pydantic and JSON Schema validation
- PII and secret detection
- regex and custom business rules
- retry prompts
- streaming validation
- semantic and LLM-judge validation
- OpenAI and Anthropic response helpers
- production-style retry loops
- direct hooks into generic, OpenAI-style, and Anthropic-style LLM calls
Status
Early MVP. The core API focuses on structured LLM output validation, JSON repair, retries, safety checks, and direct LLM-call guards. Streaming and semantic checks live under llmshield.experimental.
Project details
File details: llmshield_ai-0.1.0.tar.gz (source distribution)
- Size: 122.3 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.4

| Algorithm | Hash digest |
|---|---|
| SHA256 | b97d5f4b17ec8b912c568c12e2f5301c157d8cc215e19ea09aac8a3c7c7a98ea |
| MD5 | bd1257fe044ae5a2c3449c5f33e65c2e |
| BLAKE2b-256 | 2b1f256d961c13d7fa5082b1eb16140fa2fa0bda9c1c853aebe35291999a29b4 |

File details: llmshield_ai-0.1.0-py3-none-any.whl (built distribution, Python 3)
- Size: 20.1 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.4

| Algorithm | Hash digest |
|---|---|
| SHA256 | 22219d2770b6bfef7d7eaf0ce68f2012d33f7ac2da9c9e71e709a4140444df38 |
| MD5 | 20cacf31ea8d34ff3e3d5e5cf9e2b7cc |
| BLAKE2b-256 | aae491c3a68fc6392be8d8123dc0419d1fd9ea96defd026c71d2e83c425bb167 |