
hippodid

Python SDK for HippoDid -- character memory for AI agents.

HippoDid gives your AI agents persistent identity: personality, background, rules, structured memories, and agent configuration. This SDK wraps the REST API and adds client-side context assembly for any LLM framework.

pip install hippodid

Quick Start

from hippodid import HippoDid

hd = HippoDid(api_key="hd_your_key")

# Create a character
char = hd.create_character(name="Ada", description="Senior engineer")

# Set up her profile
hd.update_profile(
    char.id,
    system_prompt="You are Ada, a senior software engineer.",
    personality="Analytical, thorough, loves clean architecture",
    rules=["Always suggest tests", "Prefer functional patterns"],
)

# Add memories
hd.add_memory(char.id, "Ada led the migration from REST to GraphQL in Q3")
hd.add_memory(char.id, "Ada prefers Rust for systems work, Python for scripting")

# Search memories
results = hd.search_memories(char.id, "programming languages")

Context Assembly

The killer feature: assemble_context builds a complete LLM prompt from character profile + memories in one call.

import anthropic
from hippodid import HippoDid

hd = HippoDid(api_key="hd_your_key")
claude = anthropic.Anthropic()

# One call: fetch profile + search memories + format prompt
# (char_id is the id of a character created earlier, e.g. char.id from Quick Start)
context = hd.assemble_context(char_id, "What should we refactor?", strategy="task_focused")

response = claude.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system=context.formatted_prompt,
    messages=[{"role": "user", "content": "What should we refactor?"}],
)

# Save the exchange
hd.add_memory(char_id, f"Discussed refactoring: {response.content[0].text}")
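Raw model responses can get long; before storing an exchange as a memory you may want to cap its length. A minimal helper sketch (`clip` is our name for illustration, not part of the SDK):

```python
def clip(text: str, limit: int = 500) -> str:
    """Truncate text to at most `limit` characters, marking the cut with an ellipsis."""
    if len(text) <= limit:
        return text
    return text[: limit - 3].rstrip() + "..."

# e.g. hd.add_memory(char_id, f"Discussed refactoring: {clip(answer)}")
```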

Assembly Strategies

Same character data, different prompt formatting:

Strategy        Best For          Emphasis
default         General use       system_prompt + profile + memories by relevance
conversational  Chat agents       Personality/tone, recent episodic memories
task_focused    Work agents       Rules/constraints, project context, decisions
concierge       Service agents    Preferences/history, proactive suggestions
matching        Cross-character   Profile-heavy, minimal memories
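As a rough mental model, the strategies reorder and weight the same character data. This mapping is our paraphrase of the table above, not an SDK data structure:

```python
# Illustrative only: which sections each assembly strategy front-loads,
# paraphrased from the strategy table (not SDK output).
STRATEGY_EMPHASIS = {
    "default": ["system_prompt", "profile", "memories_by_relevance"],
    "conversational": ["personality", "recent_episodic_memories"],
    "task_focused": ["rules", "project_context", "decisions"],
    "concierge": ["preferences", "history", "proactive_suggestions"],
    "matching": ["profile"],
}

def describe(strategy: str) -> str:
    """Return a one-line summary of what a strategy emphasizes."""
    sections = STRATEGY_EMPHASIS.get(strategy, STRATEGY_EMPHASIS["default"])
    return f"{strategy}: emphasizes {', '.join(sections)}"
```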

Async Support

import asyncio

from hippodid import AsyncHippoDid

async def main():
    async with AsyncHippoDid(api_key="hd_your_key") as hd:
        char = await hd.create_character(name="Ada")
        await hd.add_memory(char.id, "Ada loves async Python")
        context = await hd.assemble_context(char.id, "async patterns")

asyncio.run(main())
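When seeding many memories, awaiting each write in sequence is slow; `asyncio.gather` can issue them concurrently. Sketched here with a stub coroutine standing in for `hd.add_memory` so the snippet runs on its own:

```python
import asyncio

async def add_memory_stub(char_id: str, text: str) -> str:
    """Stand-in for hd.add_memory: pretend each write takes 50 ms."""
    await asyncio.sleep(0.05)
    return text

async def seed_memories(char_id: str, facts: list[str]) -> list[str]:
    # Fire all writes concurrently instead of awaiting them one by one
    return await asyncio.gather(*(add_memory_stub(char_id, f) for f in facts))

facts = ["Ada loves async Python", "Ada led the GraphQL migration"]
stored = asyncio.run(seed_memories("char_123", facts))
```

With the real client, replace `add_memory_stub` with `hd.add_memory` inside the `async with` block.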

Character Templates & Batch Create

# Create a template for batch character creation
template = hd.create_character_template(
    name="Sales Rep",
    field_mappings=[
        {"sourceColumn": "name", "targetField": "name"},
        {"sourceColumn": "crm_id", "targetField": "externalId"},
    ],
)

# Batch create from a list of dicts (also accepts pandas DataFrames or file paths)
job = hd.batch_create_characters(
    template_id=template.id,
    data=[
        {"name": "Alice", "crm_id": "SF-001"},
        {"name": "Bob", "crm_id": "SF-002"},
    ],
    external_id_column="crm_id",
)

# Check job status (poll until the job completes)
status = hd.get_batch_job_status(job.job_id)
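`get_batch_job_status` returns a snapshot, so waiting for completion means polling it in a loop. A generic sketch with a stubbed status sequence so it runs standalone; the `"completed"`/`"failed"` status strings and the `.status` attribute are assumptions, not confirmed SDK constants:

```python
import time

def wait_for_job(get_status, poll_interval: float = 1.0, timeout: float = 300.0) -> str:
    """Poll get_status() until it reports a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("batch job did not finish in time")

# Stubbed status sequence so the sketch runs on its own; in real use:
#   wait_for_job(lambda: hd.get_batch_job_status(job.job_id).status)
states = iter(["processing", "processing", "completed"])
final = wait_for_job(lambda: next(states), poll_interval=0.01)
```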

Agent Config

Store LLM preferences per character:

hd.set_agent_config(
    char_id,
    system_prompt="You are Ada, a senior engineer.",
    preferred_model="claude-sonnet-4-20250514",
    temperature=0.3,
    tools=["code_search", "run_tests"],
)

# RAG: ask a question using character's memories + stored agent config
result = hd.ask(char_id, "What patterns should we use?", use_agent_config=True)
print(result.answer)

Memory Modes

Control how add_memory processes content:

hd.set_memory_mode(char_id, "VERBATIM")    # Store exact content, zero LLM cost
hd.set_memory_mode(char_id, "EXTRACTED")   # AI extracts structured facts (default)
hd.set_memory_mode(char_id, "HYBRID")      # Both extraction + verbatim (Business+)

Clone Characters

clone = hd.clone_character(
    char_id,
    "Ada-Staging",
    copy_memories=True,
    copy_tags=True,
)
print(f"Cloned: {clone.character.id}, {clone.memories_copied} memories copied")

Framework Examples

See the examples/ directory in the repository for framework integrations.

API Reference

Full documentation at docs.hippodid.com.

License

MIT
