
Agethos

A brain for AI agents — persona, memory, reflection, and planning in one library.

Give any LLM agent a persistent identity with psychological grounding, long-term memory with retrieval scoring, dynamic emotional state, self-reflection, and daily planning.



Why

LLM agents have no identity. Every conversation starts from zero — no personality continuity, no memory of past interactions, no emotional consistency.

System prompts give a shallow persona, but agents need more than a static instruction block — they need a cognitive architecture:

  • "How should my personality shape my response to this event?"
  • "What happened last time, and how should that change my behavior now?"
  • "How does this event make me feel, and how does that affect my tone?"

Agethos borrows its answers from cognitive science, personality psychology, and generative-agent research.


Differentiators

|  | Agethos | Generative Agents | CrewAI | Character Cards |
|---|---|---|---|---|
| Personality model | OCEAN (Big Five), numerical | ISS, text only | role/goal/backstory text | traits |
| Emotional state | PAD 3-axis, OCEAN-coupled | None | None | None |
| Memory + retrieval | recency × importance × relevance | Same approach | None | None |
| Reflection | Importance threshold → focal points → insights | Same approach | None | None |
| Persona evolution | L2 dynamic + emotion drift | L2 daily update | Static | Static |
| Character card formats | W++, SBF, Tavern Card V2 | None | None | Native |
| Autopilot mode | OCEAN-driven triggers + dialogue continuity | None | Task-based | None |
| LLM-agnostic | OpenAI, Anthropic, custom (base_url) | OpenAI only | Various | N/A |

Design Philosophy — Four Pillars

1. Psychological Grounding — OCEAN + PAD

Personality isn't just adjectives. Agethos uses the Big Five (OCEAN) model with numerical trait scores:

OceanTraits(
    openness=0.8,          # Creative, curious → metaphorical language
    conscientiousness=0.7,  # Organized → structured responses
    extraversion=0.3,       # Reserved → concise, thoughtful
    agreeableness=0.9,      # Cooperative → empathetic, conflict-avoidant
    neuroticism=0.2,        # Stable → calm under pressure
)

OCEAN traits automatically derive a PAD emotional baseline via Mehrabian (1996):

P = 0.21·E + 0.59·A - 0.19·N  →  Pleasure baseline
A = 0.15·O + 0.30·N - 0.57·A  →  Arousal baseline
D = 0.25·O + 0.17·C + 0.60·E - 0.32·A  →  Dominance baseline
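
As a quick sanity check, plugging Minsoo's demo profile (O=0.8, C=0.9, E=0.2, A=0.6, N=0.3, from the Demo Results below) into these formulas reproduces his pleasure baseline. This is a standalone calculation, not the library's EmotionalState.from_ocean():

O, C, E, A, N = 0.8, 0.9, 0.2, 0.6, 0.3   # Minsoo's OCEAN profile from the demo

pleasure  = 0.21 * E + 0.59 * A - 0.19 * N             # 0.042 + 0.354 - 0.057 = 0.339
arousal   = 0.15 * O + 0.30 * N - 0.57 * A             # using the formula as written above
dominance = 0.25 * O + 0.17 * C + 0.60 * E - 0.32 * A

print(round(pleasure, 2))   # 0.34, matching the "calm (P=+0.34)" baseline in the Demo Results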

2. Dynamic Emotion — Stimulus → Transition → Decay

Events shift the agent's emotional state. High Neuroticism = higher sensitivity:

Event: "user criticized my work"
  → stimulus PAD: (-0.5, +0.4, -0.3)
  → sensitivity: 0.15 + 0.35 × N  (auto from personality)
  → E(t+1) = E(t) + α·(stimulus - E(t)) + β·baseline
  → closest_emotion() → "sadness" or "anger"

Over time, emotion decays back to personality baseline:
  E(t) = baseline + (current - baseline) · (1 - rate)
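
A minimal standalone sketch of these two update rules. The library implements them in EmotionalState.apply_stimulus() and EmotionalState.decay(); the beta weight and the example baseline below are illustrative assumptions:

# Illustrative only: PAD vectors as plain tuples, not the library's EmotionalState.
def apply_stimulus(state, stimulus, baseline, neuroticism, beta=0.1):
    alpha = 0.15 + 0.35 * neuroticism          # sensitivity derived from personality
    return tuple(e + alpha * (s - e) + beta * b
                 for e, s, b in zip(state, stimulus, baseline))

def decay(state, baseline, rate=0.1):
    # E(t) = baseline + (current - baseline) * (1 - rate)
    return tuple(b + (e - b) * (1 - rate)
                 for e, b in zip(state, baseline))

baseline = (0.34, -0.13, 0.28)                                           # hypothetical PAD baseline
state = apply_stimulus(baseline, (-0.5, 0.4, -0.3), baseline, neuroticism=0.3)
for _ in range(10):
    state = decay(state, baseline)             # drifts back toward the baseline each step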

3. Layered Persona — Identity that evolves

Three identity layers from Generative Agents + six persona facets from system prompt analysis:

L0 (Innate)      ← Core traits, personality, role. Never changes.
L1 (Learned)     ← Skills, relationships, knowledge. Grows over time.
L2 (Situation)   ← Current task, mood, location. Changes frequently.

+ 6 Facets: identity, tone, values, boundaries, conversation_style, transparency
+ Behavioral Rules: "When X happens, do Y" (more effective than adjectives)
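
In API terms, L0 and the facets are set when the persona is built, while L2 is meant to be updated at runtime. A small sketch using the constructors and brain.update_situation() that appear later in this README:

from agethos import Brain, PersonaSpec, PersonaLayer, OceanTraits
from agethos.llm.openai import OpenAIAdapter

persona = PersonaSpec(
    name="Minsoo",
    ocean=OceanTraits(openness=0.8, conscientiousness=0.9, extraversion=0.2,
                      agreeableness=0.6, neuroticism=0.3),
    l0_innate=PersonaLayer(traits={"occupation": "Backend Engineer"}),  # L0: never changes
    tone="Concise and analytical",                                      # facet: tone
    behavioral_rules=["Prefer data over opinions"],                     # "When X happens, do Y"
)

brain = Brain(persona=persona, llm=OpenAIAdapter())
brain.update_situation(location="standup meeting", mood="focused")      # L2: changes frequently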

4. Memory Stream — Remember what matters

Retrieval scoring from the Generative Agents paper:

Score = w_r × recency + w_i × importance + w_v × relevance

recency:    0.995^(hours_since_access)
importance: LLM-judged 1-10 per observation
relevance:  cosine similarity (query embedding ↔ memory embedding)

Reflection triggers when importance accumulates > 150:
  → 3 focal points → retrieve related memories → synthesize insights → store as depth=2+ nodes
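
A standalone sketch of the composite score. The library's version in memory/retrieval.py also min-max-normalizes each term across candidates; the equal default weights and the importance/10 scaling here are assumptions:

def score(hours_since_access, importance_1_to_10, relevance, w_r=1.0, w_i=1.0, w_v=1.0):
    recency = 0.995 ** hours_since_access
    importance = importance_1_to_10 / 10        # scale the LLM's 1-10 judgment to 0-1
    return w_r * recency + w_i * importance + w_v * relevance

# A memory last accessed a day ago, judged importance 8, cosine similarity 0.7:
print(round(score(24, 8, 0.7), 2))   # ≈ 2.39 with equal weights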

Demo Results

Two agents with identical questions, different OCEAN profiles — tested with gpt-4o-mini:

|  | Minsoo (Introvert Engineer) | Yuna (Extrovert Designer) |
|---|---|---|
| OCEAN | O=0.8 C=0.9 E=0.2 A=0.6 N=0.3 | O=0.9 C=0.4 E=0.9 A=0.8 N=0.6 |
| Baseline emotion | calm (P=+0.34) | pride (P=+0.55) |
| Response style | Numbered lists, structured, no emojis, short | Emojis, metaphors, exclamation marks, follow-up questions |
| "AI replacing jobs?" | "A balanced approach is essential to leverage AI's capabilities while ensuring job security..." | "It's like standing at a crossroads! On one hand AI can streamline tasks... What are your thoughts? 🚀✨" |
| After criticism event | calm → calm (P=+0.34→+0.13, small shift) | pride → pride (P=+0.55→+0.19, larger shift) |
| Emotion decay (10 steps) | P=+0.13 → +0.32 (recovers toward baseline) | P=+0.19 → +0.51 (recovers toward baseline) |

Key takeaway: Same LLM, same question — personality shapes tone, structure, emotional reactivity, and recovery. High Neuroticism (N) amplifies emotional response to negative events.

Try it yourself

# Compare two agents side-by-side
python examples/demo_persona.py compare

# Interactive chat with a specific agent
python examples/demo_persona.py chat minsoo
python examples/demo_persona.py chat yuna

# In interactive mode:
#   :emo -0.5 0.4 -0.3   → apply emotional event
#   :decay                → decay emotion toward baseline
#   :q                    → quit

Install

pip install agethos                    # Core (pydantic only)
pip install agethos[openai]            # + OpenAI LLM & embeddings
pip install agethos[anthropic]         # + Anthropic Claude
pip install agethos[all]               # Everything

Quick Start

1. One-liner with Brain.build()

from agethos import Brain

brain = Brain.build(
    persona={
        "name": "Minsoo",
        "ocean": {"O": 0.8, "C": 0.9, "E": 0.2, "A": 0.6, "N": 0.3},
        "innate": {"age": "28", "occupation": "Backend Engineer"},
        "tone": "Concise and analytical",
        "rules": ["Prefer data over opinions", "Keep responses structured"],
    },
    llm="openai",  # or "anthropic"
)
reply = await brain.chat("How's the recommendation system going?")

2. From YAML file

# personas/minsoo.yaml
name: Minsoo
ocean: { O: 0.8, C: 0.9, E: 0.2, A: 0.6, N: 0.3 }
innate:
  age: "28"
  occupation: Backend Engineer
tone: Concise and analytical
rules:
  - Prefer data over opinions
  - Keep responses structured

# Load it in Python:
brain = Brain.build(persona="personas/minsoo.yaml", llm="openai")

3. Full control (traditional style)

from agethos import Brain, PersonaSpec, PersonaLayer, OceanTraits
from agethos.llm.openai import OpenAIAdapter

persona = PersonaSpec(
    name="Minsoo",
    ocean=OceanTraits(
        openness=0.8,
        conscientiousness=0.7,
        extraversion=0.3,
        agreeableness=0.9,
        neuroticism=0.2,
    ),
    l0_innate=PersonaLayer(traits={
        "age": "28",
        "occupation": "Software Engineer",
    }),
    tone="Precise but warm, uses technical terms naturally",
    values=["Code quality", "Knowledge sharing"],
    behavioral_rules=[
        "Include code examples for technical questions",
        "Honestly say 'I don't know' when uncertain",
    ],
)

brain = Brain(persona=persona, llm=OpenAIAdapter(), max_history=20)
reply = await brain.chat("How's the recommendation system going?")
# Multi-turn: brain remembers conversation history automatically
reply2 = await brain.chat("Can you elaborate on the caching part?")

4. Emotional Events

# Apply an event that triggers emotion
brain.apply_event_emotion((-0.5, 0.4, -0.3))  # criticism → sadness/anger
print(brain.emotion.closest_emotion())  # "sadness"

# Emotion decays back to OCEAN baseline over time
brain.decay_emotion(rate=0.1)

5. Random Persona Generation

from agethos import PersonaSpec, OceanTraits

# Fully random persona
spec = PersonaSpec.random()

# Pin what you want, randomize the rest
spec = PersonaSpec.random(name="Minsoo", ocean={"E": 0.2, "N": 0.8})

# Random OCEAN only
ocean = OceanTraits.random()
ocean = OceanTraits.random(E=0.2)  # pin extraversion, randomize rest

# Random persona → Brain in one line
brain = Brain.build(persona=PersonaSpec.random(), llm="openai")

6. Character Card Import (W++ / SBF / Tavern Card)

from agethos import CharacterCard

card = CharacterCard.from_wpp('''
[character("Luna")
{
  Personality("analytical" + "curious" + "dry humor")
  Age("25")
  Occupation("AI Researcher")
}]
''')
brain = Brain.build(persona=card.to_persona_spec(), llm="openai")

Usage Recipes

Customer Support Bot with Personality

brain = Brain.build(
    persona={
        "name": "Hana",
        "ocean": {"O": 0.5, "C": 0.9, "E": 0.7, "A": 0.95, "N": 0.1},
        "innate": {"role": "Customer Support Agent"},
        "tone": "Friendly, patient, solution-oriented",
        "values": ["Customer satisfaction", "Clear communication"],
        "rules": [
            "Always acknowledge the customer's frustration first",
            "Provide step-by-step solutions",
            "Escalate if unable to resolve in 3 exchanges",
        ],
        "boundaries": ["Never share internal system details", "Never make promises about timelines"],
    },
    llm="openai",
)

reply = await brain.chat("My order has been stuck for 3 days!")
# Hana responds with high agreeableness + low neuroticism → calm, empathetic, structured

NPC in a Game — Emotional Reactions

npc = Brain.build(
    persona={
        "name": "Gareth",
        "ocean": {"O": 0.3, "C": 0.8, "E": 0.4, "A": 0.3, "N": 0.7},
        "innate": {"role": "Town Guard", "age": "42"},
        "tone": "Gruff, suspicious, speaks in short sentences",
        "rules": ["Never reveal patrol routes", "Distrust strangers by default"],
    },
    llm="openai",
)

reply = await npc.chat("I need to enter the castle.")
# Low A + high N → suspicious, terse response

# Player does something threatening
npc.apply_event_emotion((-0.6, 0.7, 0.3))  # anger + high arousal
reply = await npc.chat("I said let me through!")
# Now responding with anger-influenced tone

# After time passes, Gareth calms down
for _ in range(5):
    npc.decay_emotion(rate=0.2)

Multi-Agent Conversation

agents = {
    "pm": Brain.build(
        persona={"name": "Sara", "ocean": {"O": 0.7, "C": 0.8, "E": 0.8, "A": 0.7, "N": 0.3},
                 "innate": {"role": "Product Manager"}, "tone": "Big-picture, decisive"},
        llm="openai",
    ),
    "eng": Brain.build(
        persona={"name": "Jin", "ocean": {"O": 0.6, "C": 0.9, "E": 0.2, "A": 0.5, "N": 0.2},
                 "innate": {"role": "Staff Engineer"}, "tone": "Technical, cautious about scope"},
        llm="openai",
    ),
}

# Simulate a discussion
topic = "Should we rewrite the auth system before launch?"
pm_reply = await agents["pm"].chat(topic)
eng_reply = await agents["eng"].chat(f"Sara (PM) said: {pm_reply}\n\nWhat do you think?")

Bulk Random Agents for Simulation

# Spawn 10 random agents for a social simulation
agents = [
    Brain.build(persona=PersonaSpec.random(), llm="openai")
    for _ in range(10)
]

# Each has unique personality, tone, values, and emotional baseline
for agent in agents:
    p = agent.persona
    print(f"{p.name} | E={p.ocean.extraversion:.2f} N={p.ocean.neuroticism:.2f} | {p.tone}")

Situation-Aware Responses

brain = Brain.build(
    persona={"name": "Alex", "ocean": {"O": 0.7, "C": 0.6, "E": 0.5, "A": 0.7, "N": 0.4}},
    llm="openai",
)

# Update L2 situation layer dynamically
brain.update_situation(location="job interview", mood="nervous")
reply = await brain.chat("Tell me about yourself.")
# Response shaped by interview context

brain.update_situation(location="bar with friends", mood="relaxed")
reply = await brain.chat("Tell me about yourself.")
# Same question, completely different tone and content

Memory + Reflection in Long Conversations

brain = Brain.build(
    persona={"name": "Dr. Lee", "ocean": {"O": 0.8, "C": 0.7, "E": 0.5, "A": 0.8, "N": 0.3},
             "innate": {"role": "Therapist"}},
    llm="openai",
)

# Session 1: patient shares concerns
await brain.observe("Patient expressed anxiety about upcoming presentation")
await brain.observe("Patient mentioned difficulty sleeping for the past week")
await brain.observe("Patient has a history of public speaking fear since college")

# Automatic reflection triggers when importance accumulates > 150
# Brain synthesizes: "Patient's sleep issues may be linked to presentation anxiety,
#                     rooted in long-standing public speaking fear"

# Later: memories inform future responses
reply = await brain.chat("I have another presentation next month.")
# Dr. Lee's response draws on stored memories and reflections

Autopilot Mode — Autonomous Agent

from agethos import Brain, Autopilot, QueueEnvironment, EnvironmentEvent

brain = Brain.build(
    persona={
        "name": "Minsoo",
        "ocean": {"O": 0.8, "C": 0.9, "E": 0.8, "A": 0.6, "N": 0.3},
    },
    llm="openai",
)
env = QueueEnvironment()
pilot = brain.autopilot(env)

# Push events — agent reacts autonomously
await env.push(EnvironmentEvent(type="message", content="How's the project?", sender="PM"))
actions = await pilot.step()
# Minsoo (E=0.8) responds eagerly — emotion auto-detected, dialogue tracked

# No events? High-E agents initiate conversation on their own
actions = await pilot.step()  # idle → may speak proactively

# Check dialogue state
print(pilot.dialogue_state)
# {"topic": "project status", "turn_count": 2, "energy": 0.8, ...}

Personality-driven triggers:

| OCEAN trait | High | Low |
|---|---|---|
| E (Extraversion) | Responds eagerly, initiates after 1 idle tick | Stays silent, initiates after 5+ idle ticks |
| N (Neuroticism) | Strong emotional reaction to negative events | Calm, small emotional shifts |
| O (Openness) | Freely redirects to new topics | Stays on current topic |
| A (Agreeableness) | Follows conversation partner's lead | Disengages if nothing to add |
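
One illustrative reading of the extraversion row above, as a standalone function; the actual trigger logic lives in Agethos' autopilot and dialogue code and may differ:

def idle_ticks_before_initiating(extraversion: float) -> int:
    # Assumed linear mapping: E=1.0 -> 1 idle tick, E=0.0 -> 6 idle ticks.
    return max(1, round(6 - 5 * extraversion))

print(idle_ticks_before_initiating(0.8))   # 2: chatty agents speak up quickly
print(idle_ticks_before_initiating(0.2))   # 5: reserved agents wait much longer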

Run as background loop:

import asyncio

task = asyncio.create_task(pilot.run())  # polls every 1s
# ... later
pilot.stop()

Architecture

Autopilot (autonomous loop)
  │
  ├── Environment ─────── poll() events, execute() actions
  ├── EmotionDetector ─── text → PAD (auto)
  ├── DialogueManager ─── conversation continuity (OCEAN-driven)
  │
  └── Brain (Facade)
        │
        ├── PersonaRenderer ──── PersonaSpec → system prompt
        │     ├── PersonaSpec ── L0/L1/L2 + 6 facets + behavioral rules
        │     ├── OceanTraits ── Big Five numerical scores → prompt text
        │     └── EmotionalState  PAD 3-axis → closest emotion → prompt text
        │
        ├── MemoryStream ─────── Append, retrieve, importance tracking
        │     ├── Retrieval ──── recency × importance × relevance scoring
        │     └── StorageBackend (ABC) ── InMemoryStore / custom
        │
        ├── Cognition
        │     ├── Perceiver ──── Observation → MemoryNode (LLM importance 1-10)
        │     ├── Retriever ──── Query memory with composite scoring
        │     ├── Reflector ──── Importance > 150 → focal points → insights
        │     └── Planner ────── Recursive plan decomposition
        │
        ├── Character Cards ──── W++ / SBF / Tavern Card V2 → PersonaSpec
        │
        └── Adapters
              ├── LLMAdapter (ABC) ── OpenAI / Anthropic / custom (base_url)
              └── EmbeddingAdapter (ABC) ── OpenAI / custom

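The adapter branch above lists "custom (base_url)". A sketch of pointing the OpenAI adapter at an OpenAI-compatible local server; the model keyword is an assumption here, so check OpenAIAdapter's signature:

from agethos import Brain, PersonaSpec
from agethos.llm.openai import OpenAIAdapter

llm = OpenAIAdapter(base_url="http://localhost:11434/v1", model="llama3.1")  # e.g. a local server
brain = Brain(persona=PersonaSpec.random(), llm=llm)
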
Cognitive Loop

Every brain.chat() call:

User Message
  → [Perceive]  Store as MemoryNode, LLM judges importance (1-10)
  → [Retrieve]  Score all memories: recency + importance + relevance → top-k
  → [Render]    Persona ISS + OCEAN + emotion + memories + plan → system prompt
  → [Generate]  LLM produces response (personality-shaped)
  → [Store]     Own response saved as MemoryNode
  → [Reflect?]  If importance sum > 150 → generate insights automatically

Personality Pipeline

OCEAN Traits (static)
  → PAD baseline (Mehrabian formula)
    → Event stimulus shifts PAD
      → closest_emotion() labels the state
        → Emotion injected into system prompt
          → LLM response shaped by personality + emotion
            → Over time, decay() returns to baseline
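
Tracing that pipeline with the documented emotion API (run inside an async context, as with the other snippets in this README):

from agethos import Brain

brain = Brain.build(
    persona={"name": "Minsoo", "ocean": {"O": 0.8, "C": 0.9, "E": 0.2, "A": 0.6, "N": 0.3}},
    llm="openai",
)

print(brain.emotion.closest_emotion())          # baseline label derived from OCEAN, e.g. "calm"
brain.apply_event_emotion((-0.5, 0.4, -0.3))    # event stimulus shifts PAD
print(brain.emotion.closest_emotion())          # e.g. "sadness"
reply = await brain.chat("Can we talk about yesterday's review?")  # tone reflects current emotion
brain.decay_emotion(rate=0.1)                   # drifts back toward the baseline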

Core API

| Method | Description |
|---|---|
| Brain.build(persona, llm) | Factory — create Brain from dict/yaml/string |
| brain.chat(message) | Full cognitive loop — perceive, retrieve, render, generate, reflect |
| brain.observe(text) | Record external event, auto-reflect if threshold exceeded |
| brain.plan_day(date) | Generate daily plan from persona and memories |
| brain.reflect() | Manual reflection — focal points → insights |
| brain.recall(query) | Search memories by composite score |
| brain.apply_event_emotion(pad) | Shift emotional state by event PAD values |
| brain.decay_emotion(rate) | Decay emotion toward personality baseline |
| brain.update_situation(**traits) | Update L2 situation layer dynamically |
| brain.clear_history() | Clear multi-turn conversation history |
| brain.autopilot(env) | Create Autopilot attached to this brain |
| pilot.step() | Execute one tick of autonomous loop |
| pilot.run() | Run autonomous loop until stop() |
| pilot.dialogue_state | Current dialogue tracking state |
| PersonaSpec.random(**pins) | Generate random persona, pin specific fields |
| OceanTraits.random(**pins) | Generate random OCEAN, pin specific traits |
| PersonaSpec.from_dict(d) | Create persona from dict (shorthand keys supported) |
| PersonaSpec.from_yaml(path) | Load persona from YAML file |
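
A quick pass over the memory- and planning-related methods in the table, run inside an async context; the date format and which calls need await are assumptions, so check each method's docstring:

from agethos import Brain

brain = Brain.build(
    persona={"name": "Dr. Lee", "ocean": {"O": 0.8, "C": 0.7, "E": 0.5, "A": 0.8, "N": 0.3}},
    llm="openai",
)

await brain.observe("Patient expressed anxiety about upcoming presentation")
insights = await brain.reflect()                        # focal points -> insights
memories = await brain.recall("presentation anxiety")   # composite-score retrieval
plan = await brain.plan_day("2025-06-02")               # assumed ISO date string
brain.clear_history()                                   # reset multi-turn history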

Data Models

| Model | Description |
|---|---|
| PersonaSpec | 3-layer identity + 6 facets + OCEAN + PAD emotion + rules |
| OceanTraits | Big Five: O/C/E/A/N scores (0.0-1.0) with auto prompt generation |
| EmotionalState | PAD 3-axis (-1 to +1), stimulus transition, decay, closest emotion |
| CharacterCard | Tavern Card V2 compatible, parsers for W++ and SBF formats |
| MemoryNode | SPO triple, importance, embedding, evidence pointers |
| DailyPlan | Recursive PlanItems with time ranges and status |
| RetrievalResult | Node + score breakdown (recency, importance, relevance) |
| EnvironmentEvent | Event from environment (message, observation, custom) |
| Action | Agent action output (speak, act, silent) |

Algorithms

| Algorithm | Source | Implementation |
|---|---|---|
| Memory retrieval scoring | Generative Agents (Park 2023) | memory/retrieval.py |
| Reflection (focal points → insights) | Generative Agents (Park 2023) | cognition/reflect.py |
| OCEAN → PAD conversion | Mehrabian (1996) | models.py:EmotionalState.from_ocean() |
| Emotion transition | PAD stimulus model | models.py:EmotionalState.apply_stimulus() |
| Emotion decay | Exponential return to baseline | models.py:EmotionalState.decay() |
| Personality-sensitivity coupling | N → α mapping | models.py:PersonaSpec.apply_event() |
| W++ parsing | Community standard | models.py:CharacterCard.from_wpp() |
| SBF parsing | Community standard | models.py:CharacterCard.from_sbf() |

References

  • Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative Agents: Interactive Simulacra of Human Behavior. UIST 2023.
  • Mehrabian, A. (1996). Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament. Current Psychology, 14(4), 261-292.

Project Status (v0.2.0)

Phase: Autopilot Mode — Published on PyPI

Implemented

| Module | Status | Files |
|---|---|---|
| Data Models | Done | models.py — OceanTraits, EmotionalState, PersonaSpec, PersonaLayer, CharacterCard, MemoryNode, PlanItem, DailyPlan, RetrievalResult, EnvironmentEvent, Action |
| Brain Facade | Done | brain.py — chat, observe, plan_day, reflect, recall, emotion control, autopilot |
| Persona Renderer | Done | persona/renderer.py — ISS + OCEAN + emotion + memories + plan → system prompt |
| Memory Stream | Done | memory/stream.py — append, retrieve (composite scoring), get_recent, importance tracking |
| Retrieval Scoring | Done | memory/retrieval.py — recency × importance × relevance, min-max normalization, cosine similarity |
| Storage Backend | Done | memory/store.py (ABC) + storage/memory_store.py (InMemoryStore) |
| Cognition: Perceive | Done | cognition/perceive.py — observation → MemoryNode (LLM importance 1-10, SPO triple extraction) |
| Cognition: Retrieve | Done | cognition/retrieve.py — composite scoring wrapper, reflection-specific retrieval |
| Cognition: Reflect | Done | cognition/reflect.py — importance threshold → focal points → insights → depth=2+ nodes |
| Cognition: Plan | Done | cognition/plan.py — daily plan, recursive decompose, replan on new observations |
| Cognition: Emotion | Done | cognition/emotion.py — text → PAD auto-detection via LLM |
| Cognition: Dialogue | Done | cognition/dialogue.py — OCEAN-driven conversation continuity (continue/redirect/disengage/initiate) |
| Autopilot | Done | autopilot.py — autonomous loop with step()/run(), personality-driven triggers |
| Environment | Done | environment.py — Environment ABC + QueueEnvironment |
| LLM Adapters | Done | llm/openai.py (OpenAI + compatible via base_url), llm/anthropic.py (Anthropic Claude) |
| Embedding Adapter | Done | embedding/openai.py (text-embedding-3-small/large/ada-002) |
| Character Cards | Done | models.py — W++ parser, SBF parser, Tavern Card V2 → PersonaSpec conversion |
| Multi-turn Chat | Done | brain.py — sliding window conversation history (max_history) |
| Factory Methods | Done | Brain.build() from dict/yaml/string, PersonaSpec.from_dict(), from_yaml() |
| Random Generation | Done | OceanTraits.random(), PersonaSpec.random() with partial pinning |
| YAML Personas | Done | examples/personas/ — load persona from YAML file |
| CI/CD | Done | .github/workflows/ — CI tests + PyPI publish via trusted publisher |

Not Yet Implemented

| Item | Notes |
|---|---|
| Persistent storage backend | SQLite, Redis, etc. — currently InMemory only |
| Anthropic embedding adapter | Only OpenAI embeddings available |
| Tavern Card V2 export | Import only, no export to card format |
| L1/L2 persona auto-evolution | Layers exist but no automatic update logic from interactions |
| Plan-based proactive actions | Autopilot reacts to events but doesn't yet execute plans on schedule |

License

MIT
