
cerebe


Python SDK for the Cerebe Cognitive Services Platform — memory, knowledge graphs, meta-learning, and agent tooling.

Installation

pip install cerebe

Quick Start

from cerebe import Cerebe

client = Cerebe(api_key="ck_live_xxx", project="proj_xxx")

# Store a memory
client.memory.add(
    "User prefers dark mode",
    "sess_123",
    type="semantic",
    importance=0.8,
)

# Search memories
results = client.memory.search("user preferences", "sess_123")
print(results.data)

Async

from cerebe import AsyncCerebe

async with AsyncCerebe(api_key="ck_live_xxx") as client:
    results = await client.memory.search("user preferences", "sess_123")

Every method available on Cerebe has an identical async counterpart on AsyncCerebe.

Configuration

Parameter    Type   Default                Description
api_key      str    required               Your Cerebe API key
project      str    ""                     Project identifier
base_url     str    https://api.cerebe.ai  API base URL
timeout      float  30.0                   Request timeout (seconds)
max_retries  int    3                      Max retries on 429/5xx

Environment variable fallbacks: CEREBE_API_KEY, CEREBE_PROJECT, CEREBE_BASE_URL.
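
A client constructed with no arguments falls back to those variables. The resolution order (explicit argument, then environment variable, then built-in default) can be sketched as follows; the function and dict here are illustrative, not the SDK's internals:

```python
import os

# Sketch of the configuration precedence described above:
# explicit arguments win, then CEREBE_* environment variables,
# then the documented defaults.
def resolve_config(api_key=None, project=None, base_url=None):
    return {
        "api_key": api_key or os.environ.get("CEREBE_API_KEY"),
        "project": project or os.environ.get("CEREBE_PROJECT", ""),
        "base_url": base_url or os.environ.get("CEREBE_BASE_URL", "https://api.cerebe.ai"),
    }
```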

API Reference

Memory — client.memory

Method                                Description
add(content, session_id, ...)         Store a memory
search(query, session_id, ...)        Semantic similarity search
get(memory_id)                        Get a memory by ID
update(memory_id, ...)                Update memory properties
delete(memory_id)                     Delete a memory
session(session_id, ...)              Get all memories for a session
relationships(source, target, type)   Create memory relationship
query_tune(session_id, message)       Tune query for retrieval
harvest(session_id, transcript, ...)  Extract memories from transcript
consolidate(entity_id, ...)           Merge near-duplicate memories

Memory types: episodic, semantic, procedural, sequential, execution_history, plan, tool_reliability, working, declarative

# Store with full options
client.memory.add(
    "Learned quadratic formula today",
    "sess_123",
    type="episodic",
    importance=0.9,
    entity_id="user_42",
    metadata={"subject": "math"},
)

# Search with filters
results = client.memory.search(
    "math concepts",
    "sess_123",
    types=["episodic", "semantic"],
    min_importance=0.5,
    limit=10,
)

# Harvest memories from conversation
client.memory.harvest(
    "sess_123",
    "User: I find visual explanations helpful...",
    entity_id="user_42",
)

Knowledge — client.knowledge

Method                 Description
ingest(content, ...)   Add content to the knowledge graph
query(query, ...)      Query the knowledge graph
entities(...)          List entities
visualize(query, ...)  Get graph visualization data

client.knowledge.ingest(
    "Photosynthesis converts light energy to chemical energy",
    entity_id="biology_101",
    source="textbook",
)

graph = client.knowledge.query("photosynthesis", depth=3)

Storage — client.storage

Method                                       Description
upload(content, filename, content_type)      Upload file (base64)
presigned_upload(file_name, file_type, ...)  Get presigned upload URL
get(upload_id)                               Get file metadata
get_url(upload_id)                           Get ephemeral download URL
file_url(file_id)                            Get download URL by file ID
check_hash(content_hash)                     Deduplication check
analyze_content(upload_id, ...)              Analyze file content
extract(url, ...)                            Extract content from URL

# Get presigned upload URL
result = client.storage.presigned_upload(
    file_name="essay.pdf",
    file_type="application/pdf",
    file_size=102400,
    content_hash="sha256_abc123",
    tenant_id="tenant_1",
)

# Analyze uploaded content
client.storage.analyze_content("up_123", context="student_homework")
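
check_hash lets a client skip uploads the server already has. A minimal sketch, assuming the client from Quick Start; the exists field on the response and the plain-hex hash format are assumptions, not documented behavior:

```python
import base64
import hashlib

# Hypothetical dedup flow: hash the file locally, ask the server
# whether it already has the content, and upload only if it doesn't.
data = open("essay.pdf", "rb").read()
digest = hashlib.sha256(data).hexdigest()

result = client.storage.check_hash(digest)
if not result.data.get("exists"):  # response field name assumed
    client.storage.upload(base64.b64encode(data).decode(), "essay.pdf", "application/pdf")
```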

Meta-Learning — client.meta_learning

Method                                     Description
analyze(user_id, ...)                      Analyze learning patterns
profile(user_id)                           Get learner profile
plre_transition(user_id, session_id, ...)  Trigger PLRE phase transition
plre_state(user_id, ...)                   Get current PLRE state

profile = client.meta_learning.profile("user_42")

state = client.meta_learning.plre_state("user_42", session_id="sess_123")

Agents — client.agents

Method                                        Description
ingest_trace(content, session_id, ...)        Store agent execution trace
set_working_memory(content, session_id, ...)  Set session working memory
get_working_memory(session_id)                Get session working memory

# Ingest an agent trace
client.agents.ingest_trace(
    "Called search tool with query 'quadratic formula'",
    "sess_123",
    metadata={"tool": "search", "latency_ms": 120},
)

# Set working memory with TTL
client.agents.set_working_memory(
    "Current task: help user with algebra homework",
    "sess_123",
    ttl_seconds=3600,
)

Sessions — client.sessions

Method                               Description
list()                               List all sessions
get(session_id)                      Get session details
update(session_id, cognitive_state)  Update cognitive state
delete(session_id)                   Delete a session
cleanup()                            Clean up expired sessions
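
A typical housekeeping pass might look like the following. This assumes the client from Quick Start; the shape of cognitive_state and iterating list() via .data are assumptions:

```python
# Inspect sessions, update one, then remove expired ones
for session in client.sessions.list().data:
    print(session)

client.sessions.update("sess_123", cognitive_state={"focus": "algebra"})  # state shape assumed
client.sessions.cleanup()
```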

Graph — client.graph

Method                           Description
traverse(...)                    Traverse from a starting entity
temporal(entity_id, as_of_date)  Temporal entity view
neighbors(entity_id)             Get immediate neighbors
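
A short sketch of the graph methods, assuming the client from Quick Start; the traverse keyword names and the ISO date format are assumptions, not documented parameters:

```python
# Immediate neighbors of an entity
neighbors = client.graph.neighbors("user_42")

# Entity as it looked on a given date (ISO 8601 format assumed)
snapshot = client.graph.temporal("user_42", as_of_date="2025-01-01")

# Multi-hop traversal (keyword names assumed)
graph = client.graph.traverse(start="user_42", depth=2)
```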

RAG — client.rag

Retrieval-Augmented Generation: embed documents into a tenant-scoped vector space and retrieve semantically relevant chunks.

Method                                                                   Description
search(query, k=5, ...)                                                  Semantic document search
hybrid_search(query, k=5, semantic_weight=0.7, keyword_weight=0.3, ...)  Weighted semantic + keyword search
find_similar(content, k=5, ...)                                          Find documents similar to given content
embed(source, content, doc_type="markdown", metadata=None)               Embed a single document
embed_batch(documents)                                                   Embed multiple documents in one call
list_documents()                                                         Enumerate all embedded documents
delete_document(source)                                                  Delete a document and its chunks
stats()                                                                  Collection statistics

from cerebe import Cerebe

client = Cerebe(api_key="ck_live_xxx")

# Embed a document
client.rag.embed(source="docs/auth.md", content=open("docs/auth.md").read())

# Semantic search
results = client.rag.search("how does authentication work?", k=3)
for r in results.data["results"]:
    print(r["source"], r["score"])

# Hybrid search (semantic + keyword)
results = client.rag.hybrid_search("auth middleware", semantic_weight=0.8)

# Find similar documents
results = client.rag.find_similar("JWT token validation flow")

# Collection stats and cleanup
stats = client.rag.stats()
client.rag.delete_document("docs/auth.md")
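
For bulk indexing, embed_batch avoids one round-trip per file. The per-document dict keys below mirror embed's parameters and are an assumption, as is iterating list_documents() via .data:

```python
# Embed several documents in one call (document shape assumed)
client.rag.embed_batch([
    {"source": "docs/auth.md", "content": "# Auth\n..."},
    {"source": "docs/errors.md", "content": "# Errors\n..."},
])

# Enumerate what has been indexed
for doc in client.rag.list_documents().data:
    print(doc)
```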

Every document and query is isolated by organization — results never leak across tenants.

Error Handling

from cerebe import Cerebe
from cerebe._errors import (
    AuthenticationError,
    NotFoundError,
    RateLimitError,
    ValidationError,
    ServerError,
)

client = Cerebe(api_key="ck_live_xxx")

try:
    client.memory.get("mem_nonexistent")
except NotFoundError:
    print("Memory not found")
except RateLimitError as e:
    print(f"Rate limited, retry after {e.retry_after}s")
except AuthenticationError:
    print("Invalid API key")

Error Class          HTTP Status  Description
AuthenticationError  401          Invalid or missing API key
NotFoundError        404          Resource not found
ValidationError      400, 422     Invalid request parameters
RateLimitError       429          Rate limit exceeded
ServerError          5xx          Server-side error

Retries

The SDK automatically retries on:

  • 429 (Rate Limited) — respects Retry-After header
  • 5xx (Server Error) — exponential backoff

Set max_retries=0 to disable.
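
The schedule above can be sketched as follows; the exact base delay, cap, and jitter the SDK uses are assumptions, not documented values:

```python
import random

# Illustrative retry schedule: a Retry-After header takes precedence
# (the 429 case); otherwise use capped exponential backoff with jitter
# (the 5xx case).
def retry_delay(attempt, retry_after=None, base=0.5, cap=30.0):
    if retry_after is not None:
        return float(retry_after)            # honor the server's hint
    delay = min(cap, base * (2 ** attempt))  # exponential growth, capped
    return delay * random.uniform(0.5, 1.0)  # jitter to spread out retries
```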

Type Safety

The SDK is fully typed and ships a py.typed marker (PEP 561), so it works with mypy, pyright, and IDE autocomplete.

from cerebe.resources.memory import MemoryType

memory_type: MemoryType = "semantic"  # type-checked literal

Requirements

  • Python >= 3.10
  • httpx >= 0.25.0
  • pydantic >= 2.0.0

License

MIT
