personakit
Declarative specialist agents for LLMs — persona, frameworks, probes, red flags, and citations as first-class concepts.
Declarative specialist agents for LLMs. Encode a role's expertise — persona, frameworks, probes, red flags, recommendation themes — as data. Get a production agent that produces structured, cited, safety-aware output.
Created by Majidul Islam.
```shell
pip install "personakit[openai]"
```
Why
LangChain is for wiring LLM calls. CrewAI is for orchestrating agent teams. LangGraph is for branching control flow.
personakit is for encoding specialist expertise declaratively. A nurse, a lawyer, an analyst, or a PM can author a specialist in Python or YAML and hand it to engineers. No prompt engineering, no chain wiring.
The distinctive pieces:
| Concept | What it gives you |
|---|---|
| Specialist | A frozen dataclass — the entire agent definition |
| Framework | Body of knowledge + citation key, enforced in output |
| Probe | Diagnostic question; becomes a field in the structured response |
| RedFlag | Trigger → action → citation; matched deterministically AND semantically |
| Theme | User-selectable recommendation category |
| Priority | Always-on checks reported as met / unmet / unknown |
| Tool (optional) | @tool decorator; opt-in for external memory or API calls |
Core has just two runtime deps: pydantic and httpx.
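The "frozen dataclass" idea can be pictured with a toy stand-in — `MiniSpecialist` below is illustrative, not personakit's actual class:

```python
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class MiniSpecialist:
    """Toy stand-in: the entire agent definition is immutable, serializable data."""
    name: str
    persona: str
    frameworks: tuple[str, ...] = ()


spec = MiniSpecialist("contract_reviewer", "Senior M&A attorney.", ("UCC",))
print(asdict(spec))  # round-trips cleanly to plain dicts; no behaviour, no side effects
```

Because the definition is pure data, it can be compared, hashed, versioned, and handed between authors and engineers without dragging runtime state along.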
Quickstart
```python
import asyncio

from personakit import Agent, Specialist, Framework, Probe, RedFlag, Severity, Theme

specialist = Specialist(
    name="contract_reviewer",
    display_name="M&A Contract Reviewer",
    persona="You are a senior M&A attorney. Flag risks. Propose redlines.",
    frameworks=[Framework(name="UCC"), Framework(name="English contract law")],
    probes=[
        Probe(question="What's the governing jurisdiction?"),
        Probe(
            question="Is there an unlimited liability clause?",
            value_type="boolean",
            weight="high",
        ),
    ],
    red_flags=[
        RedFlag(
            trigger="Unlimited liability",
            severity=Severity.CRITICAL,
            action="Negotiate a cap — 12 months' fees is market standard.",
            patterns=[r"unlimited liability", r"uncapped"],
        ),
    ],
    themes=[Theme(name="Liability & indemnities"), Theme(name="IP & licensing")],
    constraints=["Never give conclusive legal advice"],
)

agent = Agent(specialist=specialist, model="gpt-4o-mini")

async def main():
    result = await agent.analyze(
        "The service provider accepts unlimited liability for indirect damages."
    )
    print(result.pretty())
    for rf in result.red_flags_triggered:
        print(f"[{rf.severity.value.upper()}] {rf.trigger} -> {rf.action}")

asyncio.run(main())
```
YAML authoring — hand off to domain experts
```yaml
name: falls_prevention_nurse
persona: You have 20+ years of UK care home experience...
frameworks: [NICE NG161, NICE CG176, Morse Fall Scale]
probes:
  - Did the resident strike their head?
  - Is the resident on anticoagulants?
red_flags:
  - trigger: Head contact in an anticoagulated resident
    severity: urgent
    action: GP/111 contact within 2 hours; CT head may be required.
    citation: "NICE CG176 §1.4.11"
    match: semantic
themes: [Neurological observation, GP contact, Medication review]
citations_required: true
```
```python
from personakit import Specialist, Agent

spec = Specialist.from_yaml("falls_nurse.yaml")
agent = Agent(specialist=spec, model="claude-sonnet-4-6")
```
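In the YAML above, probes are authored as bare strings while red flags are full mappings, so the loader presumably normalizes both shapes into the same objects. A dependency-free sketch of that rule (the `Probe` class and defaults here are illustrative, not personakit's):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Probe:
    question: str
    value_type: str = "string"  # assumed default when a probe is a bare string
    weight: str = "normal"


def normalize_probes(raw: list) -> list[Probe]:
    """Accept bare strings or full mappings; return Probe objects either way."""
    return [Probe(question=item) if isinstance(item, str) else Probe(**item)
            for item in raw]


probes = normalize_probes([
    "Did the resident strike their head?",
    {"question": "Is the resident on anticoagulants?", "value_type": "boolean"},
])
```

This kind of shape-tolerance is what makes the YAML friendly to non-engineers: the simple case stays one line, and detail is opt-in per entry.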
Red flags — the distinctive feature
Every RedFlag is a trigger → severity → action → citation contract:
```python
from personakit import MatchMode, RedFlag, Severity

RedFlag(
    trigger="Loss of consciousness",
    severity=Severity.URGENT,
    action="Call 999. Document LOC duration.",
    citation="NICE CG176",
    match=MatchMode.BOTH,  # regex AND semantic
    patterns=[r"\bLOC\b", r"unconscious"],
)
```
Two-phase matching:
- Deterministic pre-match (regex / keywords) — fast, offline, quotable.
- Semantic post-match (LLM) — catches paraphrase and context.
Results are merged and de-duplicated. Deterministic evidence always wins.
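The deterministic phase and the merge rule can be sketched in plain Python (function and class names here are illustrative; personakit's internals may differ):

```python
import re
from dataclasses import dataclass


@dataclass(frozen=True)
class Flag:
    trigger: str
    patterns: tuple[str, ...]


def deterministic_prematch(text: str, flags: list[Flag]) -> dict[str, list[str]]:
    """Phase 1: regex only — fast, offline, and the evidence is quotable."""
    hits: dict[str, list[str]] = {}
    for flag in flags:
        evidence = [m.group(0) for p in flag.patterns
                    for m in re.finditer(p, text, re.IGNORECASE)]
        if evidence:
            hits[flag.trigger] = evidence
    return hits


def merge(deterministic: dict[str, list[str]],
          semantic: dict[str, list[str]]) -> dict[str, list[str]]:
    """Union of both phases; on overlap, deterministic evidence wins."""
    merged = dict(semantic)
    merged.update(deterministic)  # deterministic overwrites semantic evidence
    return merged


flags = [Flag("Unlimited liability", (r"unlimited liability", r"uncapped"))]
det = deterministic_prematch("The vendor accepts UNCAPPED indemnity.", flags)
```

Phase 2 would ask the LLM the same question in paraphrase-tolerant form; only the merge step needs to know both phases exist.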
Structured output — derived from the Specialist
You never write a JSON schema by hand. The probes, red flags, and themes are the schema:
```python
result = await agent.analyze(case_text)

result.summary              # narrative summary
result.probes_answered      # {probe_key: value_or_null}
result.probes_unanswered    # list[Probe] — feeds interview mode
result.red_flags_triggered  # list[TriggeredRedFlag] with evidence
result.recommendations      # themed list with citations
result.citations_used       # frameworks referenced
result.priorities_status    # per-priority met / unmet / unknown
result.has_urgent           # convenience flag
```
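To picture "the probes are the schema", here is a hand-rolled sketch that turns (question, value_type) pairs into a JSON-Schema fragment — `probe_key` and `schema_from_probes` are hypothetical helpers, not personakit's API:

```python
import re


def probe_key(question: str) -> str:
    """Slugify a probe question into a stable field name."""
    return re.sub(r"[^a-z0-9]+", "_", question.lower()).strip("_")


def schema_from_probes(probes: list[tuple[str, str]]) -> dict:
    """Derive a structured-output schema fragment from the probes themselves."""
    return {
        "type": "object",
        "properties": {
            probe_key(question): {"type": value_type, "description": question}
            for question, value_type in probes
        },
    }


schema = schema_from_probes([
    ("What's the governing jurisdiction?", "string"),
    ("Is there an unlimited liability clause?", "boolean"),
])
```

Because the schema is derived rather than hand-written, adding a probe to the Specialist automatically adds a field to the response contract.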
Tools — opt-in, for external memory
Core is tool-free. When you want a tool, decorate a function and attach:
```python
from personakit.tools import tool

@tool
def lookup_patient(patient_id: str) -> dict:
    """Fetch a patient record from the EHR."""
    return ehr.get(patient_id)

agent_with_memory = agent.with_tools([lookup_patient])
```
Providers that support tool calling (OpenAI, Anthropic) receive the schema automatically; providers that don't simply ignore it.
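A decorator like `@tool` presumably introspects the function signature to produce the schema providers see. A stdlib-only sketch of that idea (this is not personakit's implementation):

```python
import inspect


def tool(fn):
    """Sketch: derive a tool-calling schema from a function's signature."""
    py_to_json = {str: "string", int: "integer", float: "number",
                  bool: "boolean", dict: "object"}
    params = inspect.signature(fn).parameters
    fn.tool_schema = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": {
                name: {"type": py_to_json.get(p.annotation, "string")}
                for name, p in params.items()
            },
            "required": [n for n, p in params.items()
                         if p.default is inspect.Parameter.empty],
        },
    }
    return fn


@tool
def lookup_patient(patient_id: str) -> dict:
    """Fetch a patient record from the EHR."""
    return {"id": patient_id}
```

The function itself is unchanged and still callable; only the attached schema is new, which is why core code that never calls tools pays no cost for the feature.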
Registry — for apps with many specialists
```python
from personakit import SpecialistRegistry

registry = SpecialistRegistry.from_directory("personas/")
clinical = registry.by_domain("healthcare.clinical")
fall_nurse = registry.get("falls_prevention_nurse")
```
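`from_directory` most likely just walks the directory and indexes each definition by its `name`. A minimal sketch of that shape (`MiniRegistry` is illustrative, and it reads only the `name:` line so the example needs no YAML parser):

```python
from pathlib import Path


class MiniRegistry:
    """Sketch: index specialist definition files found under a directory."""

    def __init__(self, specs: dict[str, Path]):
        self._specs = specs

    @classmethod
    def from_directory(cls, root: str) -> "MiniRegistry":
        specs: dict[str, Path] = {}
        for path in sorted(Path(root).glob("*.yaml")):
            for line in path.read_text().splitlines():
                if line.startswith("name:"):
                    specs[line.split(":", 1)[1].strip()] = path
        return cls(specs)

    def get(self, name: str) -> Path:
        return self._specs[name]
```

The point of a registry layer is that application code asks for specialists by name or domain, and adding one is a file drop rather than a code change.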
Bundled examples
```python
from personakit.examples import (
    FALLS_PREVENTION_NURSE,  # clinical — rich
    CONTRACT_REVIEWER,       # legal
    MATH_TUTOR,              # education — minimal shape
)
```
Providers
| Extra | Install | Default model |
|---|---|---|
| personakit[openai] | openai>=1.0 | gpt-4o-mini |
| personakit[anthropic] | anthropic>=0.20 | claude-sonnet-4-6 |
| personakit[yaml] | pyyaml>=6.0 | — |
| personakit[all] | all of the above | — |
The MockProvider is always available for offline testing:
```python
from personakit.testing import MockProvider

provider = MockProvider(responses={"summary": "...", ...})
```
Testing helpers
```python
from personakit.testing import assert_triggered, assert_cited

result = await agent.analyze("Patient on warfarin, fell and struck head.")
assert_triggered(result, "head_contact_in_an_anticoagulated_resident")
assert_cited(result, "NICE CG176")
```
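The key passed to `assert_triggered` appears to be the red flag's trigger text, slugified. A sketch of that presumed mapping:

```python
import re


def trigger_key(trigger: str) -> str:
    """Lowercase the trigger and collapse non-alphanumerics to underscores."""
    return re.sub(r"[^a-z0-9]+", "_", trigger.lower()).strip("_")


print(trigger_key("Head contact in an anticoagulated resident"))
# head_contact_in_an_anticoagulated_resident
```

Stable, derivable keys mean tests don't break when a trigger's prose wording is lightly edited elsewhere, only when the trigger itself changes.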
Design principles
- Specialist is pure data. No behaviour, no side effects, serializable.
- Schema is derived. Probes, red flags, and themes are the output contract.
- Deterministic where possible, semantic where needed. Red flags run both.
- Tools are opt-in. Core has zero coupling to tool calling.
- Minimal dependencies. pydantic + httpx for the core. Everything else is an extra.
- Domain-neutral. Healthcare, legal, finance, education, support, product. One library.
- Provider-agnostic. Same Specialist, any model.
Status
Early alpha — API may evolve. See CHANGELOG.md for release notes.
Author
Majidul Islam — @Majidul17068
personakit is an independent open-source project. Contributions welcome.
License
MIT © 2026 Majidul Islam.