# Agethos

A brain for AI agents — persona, memory, reflection, and planning in one library.

Give any LLM agent a persistent identity with psychological grounding, long-term memory with retrieval scoring, dynamic emotional state, self-reflection, and daily planning.
## Why

LLM agents have no identity. Every conversation starts from zero: no personality continuity, no memory of past interactions, no emotional consistency.

System prompts give a shallow persona, but agents need more than a static instruction block — they need a cognitive architecture that can answer:

- "How should my personality shape my response to this event?"
- "What happened last time, and how should that change my behavior now?"
- "How does this event make me feel, and how does that affect my tone?"

Agethos borrows its answers from cognitive science, personality psychology, and generative-agent research.
## Differentiators

| Feature | Agethos | Generative Agents | CrewAI | Character Cards |
|---|---|---|---|---|
| Personality model | OCEAN (Big Five) numerical | ISS text only | role/goal/backstory | text traits |
| Emotional state | PAD 3-axis, OCEAN-coupled | None | None | None |
| Memory + retrieval | recency × importance × relevance | Same approach | None | None |
| Reflection | Importance threshold → focal points → insights | Same approach | None | None |
| Persona evolution | L2 dynamic + emotion drift | L2 daily update | Static | Static |
| Character card formats | W++, SBF, Tavern Card V2 | None | None | Native |
| LLM-agnostic | OpenAI, Anthropic, custom | OpenAI only | Various | N/A |
## Design Philosophy — Four Pillars

### 1. Psychological Grounding — OCEAN + PAD

Personality isn't just adjectives. Agethos uses the Big Five (OCEAN) model with numerical trait scores:

```python
OceanTraits(
    openness=0.8,           # Creative, curious → metaphorical language
    conscientiousness=0.7,  # Organized → structured responses
    extraversion=0.3,       # Reserved → concise, thoughtful
    agreeableness=0.9,      # Cooperative → empathetic, conflict-avoidant
    neuroticism=0.2,        # Stable → calm under pressure
)
```

OCEAN traits automatically derive a PAD emotional baseline via Mehrabian (1996); the letters on the right-hand side are the Big Five scores:

```
P = 0.21·E + 0.59·A - 0.19·N            → Pleasure baseline
A = 0.15·O + 0.30·N - 0.57·A            → Arousal baseline
D = 0.25·O + 0.17·C + 0.60·E - 0.32·A   → Dominance baseline
```
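As a sanity check, the mapping can be computed directly. A minimal sketch using the coefficients above — the function name `ocean_to_pad_baseline` is illustrative, not the library's API:

```python
def ocean_to_pad_baseline(O, C, E, A, N):
    """Map Big Five scores (0..1) to a PAD baseline, per the formulas above."""
    pleasure  = 0.21 * E + 0.59 * A - 0.19 * N
    arousal   = 0.15 * O + 0.30 * N - 0.57 * A
    dominance = 0.25 * O + 0.17 * C + 0.60 * E - 0.32 * A
    return (pleasure, arousal, dominance)

# Minsoo-style profile: O=0.8 C=0.9 E=0.2 A=0.6 N=0.3
p, a, d = ocean_to_pad_baseline(0.8, 0.9, 0.2, 0.6, 0.3)
# p ≈ +0.34 — the "calm" Pleasure baseline shown for Minsoo in the demo table
```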
### 2. Dynamic Emotion — Stimulus → Transition → Decay

Events shift the agent's emotional state. High Neuroticism means higher sensitivity:

```
Event: "user criticized my work"
→ stimulus PAD: (-0.5, +0.4, -0.3)
→ sensitivity: α = 0.15 + 0.35·N  (derived automatically from personality)
→ E(t+1) = E(t) + α·(stimulus - E(t)) + β·baseline
→ closest_emotion() → "sadness" or "anger"
```

Over time, emotion decays back to the personality baseline:

```
E(t+1) = baseline + (E(t) - baseline) · (1 - rate)
```
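The two update rules can be sketched in a few lines. Function and parameter names here are hypothetical (not the library's API), and β is assumed to default to 0:

```python
def apply_stimulus(current, stimulus, baseline, n_trait, beta=0.0):
    """Shift each PAD axis toward the stimulus; sensitivity scales with Neuroticism."""
    alpha = 0.15 + 0.35 * n_trait  # high N → stronger reaction
    return tuple(
        e + alpha * (s - e) + beta * b
        for e, s, b in zip(current, stimulus, baseline)
    )

def decay(current, baseline, rate=0.1):
    """Exponential return toward the personality baseline."""
    return tuple(b + (c - b) * (1 - rate) for c, b in zip(current, baseline))

baseline = (0.34, -0.13, 0.28)  # a calm, Minsoo-like baseline
state = apply_stimulus(baseline, (-0.5, 0.4, -0.3), baseline, n_trait=0.3)
# state[0] ≈ +0.13: Pleasure drops after the criticism event
for _ in range(10):
    state = decay(state, baseline, rate=0.1)  # drifts back toward +0.34
```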
### 3. Layered Persona — Identity that evolves

Three identity layers from Generative Agents, plus six persona facets from system-prompt analysis:

```
L0 (Innate)    ← Core traits, personality, role. Never changes.
L1 (Learned)   ← Skills, relationships, knowledge. Grows over time.
L2 (Situation) ← Current task, mood, location. Changes frequently.

+ 6 Facets: identity, tone, values, boundaries, conversation_style, transparency
+ Behavioral Rules: "When X happens, do Y" (more effective than adjectives)
```
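To make the layering concrete, here is a hypothetical sketch of how layers and rules could be flattened into a system prompt; the structure mirrors the description above, but `render_persona` and its output format are illustrative, not the library's renderer:

```python
def render_persona(l0, l1, l2, rules):
    """Flatten the three identity layers and behavioral rules into prompt text."""
    lines = ["## Identity (innate, never changes)"]
    lines += [f"- {k}: {v}" for k, v in l0.items()]
    lines += ["## Learned (grows over time)"]
    lines += [f"- {k}: {v}" for k, v in l1.items()]
    lines += ["## Situation (changes frequently)"]
    lines += [f"- {k}: {v}" for k, v in l2.items()]
    lines += ["## Behavioral rules"]
    lines += [f"- {r}" for r in rules]
    return "\n".join(lines)

prompt = render_persona(
    {"role": "Backend Engineer"},
    {"skill": "Python"},
    {"mood": "calm"},
    ["When criticized, acknowledge the point before defending"],
)
```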
### 4. Memory Stream — Remember what matters

Retrieval scoring from the Generative Agents paper:

```
Score = w_r × recency + w_i × importance + w_v × relevance

recency:    0.995^(hours_since_access)
importance: LLM-judged 1-10 per observation
relevance:  cosine similarity (query embedding ↔ memory embedding)
```

Reflection triggers when accumulated importance exceeds 150:

```
→ 3 focal points → retrieve related memories → synthesize insights → store as depth=2+ nodes
```
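The composite score is simple to sketch. Here recency uses the `0.995^hours` decay above, importance is normalized to 0..1, and relevance is plain cosine similarity over embedding vectors; the helper names and default weights are assumptions for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def score(hours_since_access, importance_1_to_10, query_emb, mem_emb,
          w_r=1.0, w_i=1.0, w_v=1.0):
    recency = 0.995 ** hours_since_access
    importance = importance_1_to_10 / 10.0  # normalize 1-10 to 0..1
    relevance = cosine(query_emb, mem_emb)
    return w_r * recency + w_i * importance + w_v * relevance

# A fresh, important, on-topic memory outranks a stale, trivial, off-topic one.
hi = score(1, 9, [1.0, 0.0], [0.9, 0.1])
lo = score(500, 2, [1.0, 0.0], [0.0, 1.0])
```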
## Demo Results

Two agents with identical questions, different OCEAN profiles — tested with gpt-4o-mini:

| | Minsoo (Introvert Engineer) | Yuna (Extrovert Designer) |
|---|---|---|
| OCEAN | O=0.8 C=0.9 E=0.2 A=0.6 N=0.3 | O=0.9 C=0.4 E=0.9 A=0.8 N=0.6 |
| Baseline emotion | calm (P=+0.34) | pride (P=+0.55) |
| Response style | Numbered lists, structured, no emojis, short | Emojis, metaphors, exclamation marks, follow-up questions |
| "AI replacing jobs?" | "A balanced approach is essential to leverage AI's capabilities while ensuring job security..." | "It's like standing at a crossroads! On one hand AI can streamline tasks... What are your thoughts? 🚀✨" |
| After criticism event | calm → calm (P=+0.34 → +0.13, small shift) | pride → pride (P=+0.55 → +0.19, larger shift) |
| Emotion decay (10 steps) | P=+0.13 → +0.32 (recovers toward baseline) | P=+0.19 → +0.51 (recovers toward baseline) |

Key takeaway: same LLM, same question — personality shapes tone, structure, emotional reactivity, and recovery. High Neuroticism amplifies the emotional response to negative events.
## Try it yourself

```shell
# Compare two agents side-by-side
python examples/demo_persona.py compare

# Interactive chat with a specific agent
python examples/demo_persona.py chat minsoo
python examples/demo_persona.py chat yuna

# In interactive mode:
#   :emo -0.5 0.4 -0.3   → apply an emotional event
#   :decay               → decay emotion toward baseline
#   :q                   → quit
```

## Install

```shell
pip install agethos              # Core (pydantic only)
pip install "agethos[openai]"    # + OpenAI LLM & embeddings
pip install "agethos[anthropic]" # + Anthropic Claude
pip install "agethos[all]"       # Everything
```
## Quick Start

### 1. One-liner with Brain.build()

```python
from agethos import Brain

brain = Brain.build(
    persona={
        "name": "Minsoo",
        "ocean": {"O": 0.8, "C": 0.9, "E": 0.2, "A": 0.6, "N": 0.3},
        "innate": {"age": "28", "occupation": "Backend Engineer"},
        "tone": "Concise and analytical",
        "rules": ["Prefer data over opinions", "Keep responses structured"],
    },
    llm="openai",  # or "anthropic"
)

reply = await brain.chat("How's the recommendation system going?")
```

### 2. From a YAML file

```yaml
# personas/minsoo.yaml
name: Minsoo
ocean: { O: 0.8, C: 0.9, E: 0.2, A: 0.6, N: 0.3 }
innate:
  age: "28"
  occupation: Backend Engineer
tone: Concise and analytical
rules:
  - Prefer data over opinions
  - Keep responses structured
```

```python
brain = Brain.build(persona="personas/minsoo.yaml", llm="openai")
```
### 3. Full control (traditional style)

```python
from agethos import Brain, PersonaSpec, PersonaLayer, OceanTraits
from agethos.llm.openai import OpenAIAdapter

persona = PersonaSpec(
    name="Minsoo",
    ocean=OceanTraits(
        openness=0.8,
        conscientiousness=0.7,
        extraversion=0.3,
        agreeableness=0.9,
        neuroticism=0.2,
    ),
    l0_innate=PersonaLayer(traits={
        "age": "28",
        "occupation": "Software Engineer",
    }),
    tone="Precise but warm, uses technical terms naturally",
    values=["Code quality", "Knowledge sharing"],
    behavioral_rules=[
        "Include code examples for technical questions",
        "Honestly say 'I don't know' when uncertain",
    ],
)

brain = Brain(persona=persona, llm=OpenAIAdapter(), max_history=20)

reply = await brain.chat("How's the recommendation system going?")

# Multi-turn: brain remembers conversation history automatically
reply2 = await brain.chat("Can you elaborate on the caching part?")
```
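The `max_history` sliding window above can be sketched with a bounded deque; this `History` class is illustrative only, not the library's internal structure:

```python
from collections import deque

class History:
    """Keep the last max_history (user, assistant) turns; oldest is evicted."""

    def __init__(self, max_history=20):
        self.turns = deque(maxlen=max_history)

    def add(self, user, assistant):
        self.turns.append((user, assistant))

    def as_messages(self):
        """Flatten turns into a chat-completion style message list."""
        msgs = []
        for u, a in self.turns:
            msgs += [{"role": "user", "content": u},
                     {"role": "assistant", "content": a}]
        return msgs

h = History(max_history=2)
for i in range(3):
    h.add(f"q{i}", f"a{i}")
# the oldest turn (q0/a0) has been evicted; only q1 and q2 remain
```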
### 4. Emotional Events

```python
# Apply an event that triggers emotion
brain.apply_event_emotion((-0.5, 0.4, -0.3))  # criticism → sadness/anger
print(brain.emotion.closest_emotion())        # "sadness"

# Emotion decays back to the OCEAN baseline over time
brain.decay_emotion(rate=0.1)
```
### 5. Random Persona Generation

```python
from agethos import Brain, PersonaSpec, OceanTraits

# Fully random persona
spec = PersonaSpec.random()

# Pin what you want, randomize the rest
spec = PersonaSpec.random(name="Minsoo", ocean={"E": 0.2, "N": 0.8})

# Random OCEAN only
ocean = OceanTraits.random()
ocean = OceanTraits.random(E=0.2)  # pin extraversion, randomize the rest

# Random persona → Brain in one line
brain = Brain.build(persona=PersonaSpec.random(), llm="openai")
```
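Pin-and-randomize is essentially dict merging over random draws. A standalone sketch of the idea — `random_ocean` is illustrative, and the library's `random()` classmethods presumably do something similar:

```python
import random

def random_ocean(rng=None, **pins):
    """Draw random O/C/E/A/N scores in [0, 1]; keyword pins override the draws."""
    rng = rng or random.Random()
    traits = {k: round(rng.random(), 2) for k in "OCEAN"}
    traits.update(pins)  # pinned values win over random draws
    return traits

rng = random.Random(42)            # seeded for reproducibility
t = random_ocean(rng, E=0.2, N=0.8)  # introverted and anxious, rest random
```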
### 6. Character Card Import (W++ / SBF / Tavern Card)

```python
from agethos import Brain, CharacterCard

card = CharacterCard.from_wpp('''
[character("Luna")
{
    Personality("analytical" + "curious" + "dry humor")
    Age("25")
    Occupation("AI Researcher")
}]
''')

brain = Brain.build(persona=card.to_persona_spec(), llm="openai")
```
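For intuition, W++ in the shape shown above can be parsed with a couple of regexes. This toy `parse_wpp` is an illustration only; real cards vary, and the library's `from_wpp()` presumably handles more variants:

```python
import re

def parse_wpp(text):
    """Extract the character name and field lists from a W++-style block."""
    name_m = re.search(r'\[character\("([^"]+)"\)', text)
    fields = {}
    # match Key("a" + "b" + ...) groups; quoted strings joined by '+'
    for key, raw in re.findall(r'(\w+)\(((?:"[^"]*"\s*\+?\s*)+)\)', text):
        if key == "character":
            continue  # the name is handled separately
        fields[key] = re.findall(r'"([^"]*)"', raw)
    return (name_m.group(1) if name_m else None), fields

card = '''[character("Luna")
{
    Personality("analytical" + "curious" + "dry humor")
    Age("25")
}]'''
name, fields = parse_wpp(card)
```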
## Usage Recipes

### Customer Support Bot with Personality

```python
from agethos import Brain

brain = Brain.build(
    persona={
        "name": "Hana",
        "ocean": {"O": 0.5, "C": 0.9, "E": 0.7, "A": 0.95, "N": 0.1},
        "innate": {"role": "Customer Support Agent"},
        "tone": "Friendly, patient, solution-oriented",
        "values": ["Customer satisfaction", "Clear communication"],
        "rules": [
            "Always acknowledge the customer's frustration first",
            "Provide step-by-step solutions",
            "Escalate if unable to resolve in 3 exchanges",
        ],
        "boundaries": ["Never share internal system details", "Never make promises about timelines"],
    },
    llm="openai",
)

reply = await brain.chat("My order has been stuck for 3 days!")
# Hana responds with high agreeableness + low neuroticism → calm, empathetic, structured
```
### NPC in a Game — Emotional Reactions

```python
from agethos import Brain

npc = Brain.build(
    persona={
        "name": "Gareth",
        "ocean": {"O": 0.3, "C": 0.8, "E": 0.4, "A": 0.3, "N": 0.7},
        "innate": {"role": "Town Guard", "age": "42"},
        "tone": "Gruff, suspicious, speaks in short sentences",
        "rules": ["Never reveal patrol routes", "Distrust strangers by default"],
    },
    llm="openai",
)

reply = await npc.chat("I need to enter the castle.")
# Low A + high N → suspicious, terse response

# Player does something threatening
npc.apply_event_emotion((-0.6, 0.7, 0.3))  # anger + high arousal
reply = await npc.chat("I said let me through!")
# Now responding with an anger-influenced tone

# After time passes, Gareth calms down
for _ in range(5):
    npc.decay_emotion(rate=0.2)
```
### Multi-Agent Conversation

```python
from agethos import Brain

agents = {
    "pm": Brain.build(
        persona={
            "name": "Sara",
            "ocean": {"O": 0.7, "C": 0.8, "E": 0.8, "A": 0.7, "N": 0.3},
            "innate": {"role": "Product Manager"},
            "tone": "Big-picture, decisive",
        },
        llm="openai",
    ),
    "eng": Brain.build(
        persona={
            "name": "Jin",
            "ocean": {"O": 0.6, "C": 0.9, "E": 0.2, "A": 0.5, "N": 0.2},
            "innate": {"role": "Staff Engineer"},
            "tone": "Technical, cautious about scope",
        },
        llm="openai",
    ),
}

# Simulate a discussion
topic = "Should we rewrite the auth system before launch?"
pm_reply = await agents["pm"].chat(topic)
eng_reply = await agents["eng"].chat(f"Sara (PM) said: {pm_reply}\n\nWhat do you think?")
```
### Bulk Random Agents for Simulation

```python
from agethos import Brain, PersonaSpec

# Spawn 10 random agents for a social simulation
agents = [
    Brain.build(persona=PersonaSpec.random(), llm="openai")
    for _ in range(10)
]

# Each has a unique personality, tone, values, and emotional baseline
for agent in agents:
    p = agent.persona
    print(f"{p.name} | E={p.ocean.extraversion:.2f} N={p.ocean.neuroticism:.2f} | {p.tone}")
```
### Situation-Aware Responses

```python
from agethos import Brain

brain = Brain.build(
    persona={"name": "Alex", "ocean": {"O": 0.7, "C": 0.6, "E": 0.5, "A": 0.7, "N": 0.4}},
    llm="openai",
)

# Update the L2 situation layer dynamically
brain.update_situation(location="job interview", mood="nervous")
reply = await brain.chat("Tell me about yourself.")
# Response shaped by the interview context

brain.update_situation(location="bar with friends", mood="relaxed")
reply = await brain.chat("Tell me about yourself.")
# Same question, completely different tone and content
```
### Memory + Reflection in Long Conversations

```python
from agethos import Brain

brain = Brain.build(
    persona={
        "name": "Dr. Lee",
        "ocean": {"O": 0.8, "C": 0.7, "E": 0.5, "A": 0.8, "N": 0.3},
        "innate": {"role": "Therapist"},
    },
    llm="openai",
)

# Session 1: patient shares concerns
await brain.observe("Patient expressed anxiety about upcoming presentation")
await brain.observe("Patient mentioned difficulty sleeping for the past week")
await brain.observe("Patient has a history of public speaking fear since college")

# Reflection triggers automatically when accumulated importance exceeds 150.
# The brain synthesizes: "Patient's sleep issues may be linked to presentation
# anxiety, rooted in a long-standing public speaking fear."

# Later: memories inform future responses
reply = await brain.chat("I have another presentation next month.")
# Dr. Lee's response draws on stored memories and reflections
```
## Architecture

```
Brain (Facade)
│
├── PersonaRenderer ──── PersonaSpec → system prompt
│     ├── PersonaSpec ── L0/L1/L2 + 6 facets + behavioral rules
│     ├── OceanTraits ── Big Five numerical scores → prompt text
│     └── EmotionalState ── PAD 3-axis → closest emotion → prompt text
│
├── MemoryStream ─────── Append, retrieve, importance tracking
│     ├── Retrieval ──── recency × importance × relevance scoring
│     └── StorageBackend (ABC) ── InMemoryStore / custom
│
├── Cognition
│     ├── Perceiver ──── Observation → MemoryNode (LLM importance 1-10)
│     ├── Retriever ──── Query memory with composite scoring
│     ├── Reflector ──── Importance > 150 → focal points → insights
│     └── Planner ────── Recursive plan decomposition
│
├── Character Cards ──── W++ / SBF / Tavern Card V2 → PersonaSpec
│
└── Adapters
      ├── LLMAdapter (ABC) ── OpenAI / Anthropic / custom
      └── EmbeddingAdapter (ABC) ── OpenAI / custom
```
## Cognitive Loop

Every brain.chat() call runs:

```
User Message
→ [Perceive]  Store as MemoryNode; LLM judges importance (1-10)
→ [Retrieve]  Score all memories: recency + importance + relevance → top-k
→ [Render]    Persona ISS + OCEAN + emotion + memories + plan → system prompt
→ [Generate]  LLM produces a personality-shaped response
→ [Store]     Own response saved as a MemoryNode
→ [Reflect?]  If importance sum > 150 → generate insights automatically
```
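The loop's control flow can be sketched with stub components. Everything here is illustrative: the real Brain wires LLM calls where `generate` and `score_importance` are faked, and retrieval uses the composite score rather than raw importance:

```python
def chat_turn(message, memory, render, generate, score_importance,
              reflect_threshold=150):
    """One pass of perceive → retrieve → render → generate → store → reflect?."""
    # Perceive: store the incoming message with a judged importance
    memory.append({"text": message, "importance": score_importance(message)})
    # Retrieve: crude top-k by importance, standing in for composite scoring
    top = sorted(memory, key=lambda m: m["importance"], reverse=True)[:3]
    # Render + Generate
    prompt = render(top)
    reply = generate(prompt, message)
    # Store own response
    memory.append({"text": reply, "importance": score_importance(reply)})
    # Reflect when accumulated importance crosses the threshold (stubbed)
    if sum(m["importance"] for m in memory) > reflect_threshold:
        memory.append({"text": "insight: ...", "importance": 5})
    return reply

memory = []
reply = chat_turn(
    "hello",
    memory,
    render=lambda mems: "You are Minsoo. Memories: " + "; ".join(m["text"] for m in mems),
    generate=lambda prompt, msg: f"[reply to {msg}]",
    score_importance=lambda text: 3,
)
```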
## Personality Pipeline

```
OCEAN Traits (static)
→ PAD baseline (Mehrabian formula)
→ Event stimulus shifts PAD
→ closest_emotion() labels the state
→ Emotion injected into system prompt
→ LLM response shaped by personality + emotion
→ Over time, decay() returns to baseline
```
## Core API

| Method | Description |
|---|---|
| `Brain.build(persona, llm)` | Factory — create a Brain from a dict/YAML/string |
| `brain.chat(message)` | Full cognitive loop — perceive, retrieve, render, generate, reflect |
| `brain.observe(text)` | Record an external event; auto-reflect if the threshold is exceeded |
| `brain.plan_day(date)` | Generate a daily plan from persona and memories |
| `brain.reflect()` | Manual reflection — focal points → insights |
| `brain.recall(query)` | Search memories by composite score |
| `brain.apply_event_emotion(pad)` | Shift emotional state by event PAD values |
| `brain.decay_emotion(rate)` | Decay emotion toward the personality baseline |
| `brain.update_situation(**traits)` | Update the L2 situation layer dynamically |
| `brain.clear_history()` | Clear multi-turn conversation history |
| `PersonaSpec.random(**pins)` | Generate a random persona, pinning specific fields |
| `OceanTraits.random(**pins)` | Generate random OCEAN scores, pinning specific traits |
| `PersonaSpec.from_dict(d)` | Create a persona from a dict (shorthand keys supported) |
| `PersonaSpec.from_yaml(path)` | Load a persona from a YAML file |
## Data Models

| Model | Description |
|---|---|
| `PersonaSpec` | 3-layer identity + 6 facets + OCEAN + PAD emotion + rules |
| `OceanTraits` | Big Five: O/C/E/A/N scores (0.0-1.0) with auto prompt generation |
| `EmotionalState` | PAD 3-axis (-1 to +1), stimulus transition, decay, closest emotion |
| `CharacterCard` | Tavern Card V2 compatible, parsers for W++ and SBF formats |
| `MemoryNode` | SPO triple, importance, embedding, evidence pointers |
| `DailyPlan` | Recursive PlanItems with time ranges and status |
| `RetrievalResult` | Node + score breakdown (recency, importance, relevance) |
## Algorithms

| Algorithm | Source | Implementation |
|---|---|---|
| Memory retrieval scoring | Generative Agents (Park et al., 2023) | `memory/retrieval.py` |
| Reflection (focal points → insights) | Generative Agents (Park et al., 2023) | `cognition/reflect.py` |
| OCEAN → PAD conversion | Mehrabian (1996) | `models.py:EmotionalState.from_ocean()` |
| Emotion transition | PAD stimulus model | `models.py:EmotionalState.apply_stimulus()` |
| Emotion decay | Exponential return to baseline | `models.py:EmotionalState.decay()` |
| Personality-sensitivity coupling | N → α mapping | `models.py:PersonaSpec.apply_event()` |
| W++ parsing | Community standard | `models.py:CharacterCard.from_wpp()` |
| SBF parsing | Community standard | `models.py:CharacterCard.from_sbf()` |
## References

- Park et al. (2023), *Generative Agents: Interactive Simulacra of Human Behavior* — memory stream, reflection, planning
- Mehrabian (1996), PAD model — Pleasure-Arousal-Dominance emotional space
- Big Five / OCEAN — five-factor personality model
- BIG5-CHAT (2024) — Big Five personality in LLM conversations
- Machine Mindset (MBTI) — MBTI-based LLM personality tuning
- JPAF: Evolving Personality — Jung function weights for dynamic personality
- Character Card V2 Spec — Tavern Card standard
- Leaked System Prompts — real-world persona patterns
## Project Status (v0.1.0)

Phase: Core Architecture Complete — pre-release

### Implemented
| Module | Status | Files |
|---|---|---|
| Data Models | Done | models.py — OceanTraits, EmotionalState, PersonaSpec, PersonaLayer, CharacterCard, MemoryNode, PlanItem, DailyPlan, RetrievalResult |
| Brain Facade | Done | brain.py — chat, observe, plan_day, reflect, recall, emotion control |
| Persona Renderer | Done | persona/renderer.py — ISS + OCEAN + emotion + memories + plan → system prompt |
| Memory Stream | Done | memory/stream.py — append, retrieve (composite scoring), get_recent, importance tracking |
| Retrieval Scoring | Done | memory/retrieval.py — recency × importance × relevance, min-max normalization, cosine similarity |
| Storage Backend | Done | memory/store.py (ABC) + storage/memory_store.py (InMemoryStore) |
| Cognition: Perceive | Done | cognition/perceive.py — observation → MemoryNode (LLM importance 1-10, SPO triple extraction) |
| Cognition: Retrieve | Done | cognition/retrieve.py — composite scoring wrapper, reflection-specific retrieval |
| Cognition: Reflect | Done | cognition/reflect.py — importance threshold → focal points → insights → depth=2+ nodes |
| Cognition: Plan | Done | cognition/plan.py — daily plan, recursive decompose, replan on new observations |
| LLM Adapters | Done | llm/openai.py (OpenAI), llm/anthropic.py (Anthropic Claude) |
| Embedding Adapter | Done | embedding/openai.py (text-embedding-3-small/large/ada-002) |
| Character Cards | Done | models.py — W++ parser, SBF parser, Tavern Card V2 → PersonaSpec conversion |
| Multi-turn Chat | Done | brain.py — sliding window conversation history (max_history) |
| Factory Methods | Done | Brain.build() from dict/yaml/string, PersonaSpec.from_dict(), from_yaml() |
| Random Generation | Done | OceanTraits.random(), PersonaSpec.random() with partial pinning |
| YAML Personas | Done | examples/personas/ — load persona from YAML file |
### Not Yet Implemented
| Item | Notes |
|---|---|
| Persistent storage backend | SQLite, Redis, etc. — currently InMemory only |
| Anthropic embedding adapter | Only OpenAI embeddings available |
| PyPI publish | Package configured (pyproject.toml) but not yet published |
| CI/CD | No GitHub Actions / workflows |
| Tavern Card V2 export | Import only, no export to card format |
| L1/L2 persona auto-evolution | Layers exist but no automatic update logic from interactions |
## License

MIT