
AiRembr SDK Documentation

Overview

AiRembr SDK is a software development kit that enables developers to easily store, retrieve, and manage data within the AiRembr memory system — a distributed infrastructure for building AI-based systems. It provides a seamless interface for integrating AiRembr's real-time memory into any application, allowing AI agents, enterprise systems, and intelligent apps to capture observations, query contextual memories, and evolve knowledge structures with minimal latency.


What is AiRembr?

AiRembr is a neuroplastic, neurosymbolic distributed memory system designed for real-time AI agents. It captures, synthesizes, and evolves data, enabling large language models to access stored information. It can store both semantic and knowledge-graph-like data. By applying background processes such as entity extraction and identification, AiRembr can further decompose and structure stored facts.

Currently, these processes must be implemented by the developer using the SDK. AiRembr is designed to be open and extensible — we do not limit how you process data or extract knowledge. Future versions will introduce optional built-in background processes, but you’ll always be free to use your own implementations.

The vision behind AiRembr is to provide a framework for anyone to build their own AI memory infrastructure.


✨ Key Features

  • Open Interface – Build modern AI Memory systems, independent of any LLM or architecture
  • Real-Time Processing – Sub-20ms latency with horizontally scalable distributed services
  • Neuroplastic Design – Memories that continuously learn and restructure themselves
  • API-First Architecture – Seamless integration into existing infrastructures
  • Enterprise-Grade – Built for production-scale workloads
  • Neurosymbolic Approach – Combines machine learning with symbolic reasoning for knowledge mining

🧩 Use Cases

  • AI agents with persistent memory
  • Customer data and personalization platforms
  • Healthcare or enterprise knowledge systems
  • Conversational AI with contextual recall
  • Intelligent assistants with memory continuity

⚙️ Installation

Prerequisites

AiRembr requires both the service infrastructure and the SDK library.


Install AiRembr Service

  1. Clone the repository and get the docker-compose.yml file
  2. Run the service:
docker compose up

The service will be available at:

http://localhost:14002

Install AiRembr SDK

pip install airembr-sdk

🚀 Quick Start

Note: Currently, AiRembr supports conversation-scoped memory, but all stored facts are retained for future processing and retrieval.

1. Initialize the Client

from airembr.sdk.client import AiRembrChatClient

client = AiRembrChatClient(
    api="http://localhost:14002",
    source_id="8351737-a9ad-4c29-a01b-2f3180bec592",
    person_instance="person #1",
    person_traits={"name": "Adam", "surname": "Nowak"},
    agent_traits={"name": "ChatGPT", "model": "openai-5"},
    chat_id="chat-1"
)

2. Send Messages

# Person sends a message. Call this from your chat code; it only stores
# the message and does not query an LLM.
client.chat("Hi, how are you?", "person")

# Agent responds
client.chat("I'm fine.", "agent")

3. Retrieve Conversation Memory

# Save facts (messages) and retrieve conversation memory for this chat
memory = client.remember(realtime='collect,store,destination')

The remember() method retrieves conversation memory for the specific chat_id. It includes messages, summaries, entities, and contextual metadata — all compressed and indexed for low-latency recall.


🧠 Core Concepts

Observations

Observations are the fundamental data units in AiRembr. Each observation contains:

  • Actor – The entity performing the action (e.g., person or agent)
  • Event – The type of action (e.g., "message")
  • Objects – Data associated with the event

Actors and objects are treated as entities that can be identified and merged. Over time, repeated interactions enrich entities with additional traits and relationships.
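The observation structure above can be sketched as a plain Python dataclass. The field names mirror the bullet list and are purely illustrative — this is not the SDK's actual class or wire format:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Observation:
    """Illustrative shape of an AiRembr observation (not the SDK's real class)."""
    actor: str    # entity performing the action, e.g. "person" or "agent"
    event: str    # type of action, e.g. "message"
    objects: dict[str, Any] = field(default_factory=dict)  # data associated with the event

# A Quick Start chat message modeled as an observation
obs = Observation(
    actor="person",
    event="message",
    objects={"text": "Hi, how are you?"},
)
```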


Conversation Memory vs. Long-Term Memory

| Type | Description |
| --- | --- |
| Conversation Memory | Stores and retrieves messages within a specific chat session. Provides contextual recall and automatic compression when limits are reached. Indexed by chat_id. |
| Long-Term Memory (Coming Soon) | Enables cross-session memory retrieval and semantic search across historical data. Currently requires a custom implementation. |

Memory Structure

Each retrieved conversation memory includes:

  • Summary – Compressed representation of previous chat context
  • Entities – Identified actors with their traits
  • Messages – Recent conversation history
  • Context – Temporal and environmental metadata

Example Response

{
    "chat-1": """
        Summary of previous chat:
        Adam and the agent discussed LLM history...

        Entities:
          person -> (name: Adam, surname: Nowak)
          agent -> (name: ChatGPT, model: openai-5)

        Current messages:
          [date] person: Hi, how are you?
          [date] agent: I’m fine.

        Context:
          Now: 2025-11-03 09:39:23 (Monday)
    """
}

🧰 API Reference

AiRembrChatClient

Constructor Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| api | str | AiRembr service endpoint URL |
| source_id | str | Unique identifier for the data source |
| person_instance | str | Identifier for the person instance (entity #ID) |
| person_traits | dict | Attributes of the person (e.g., name, email) |
| agent_traits | dict | Attributes of the agent (e.g., model, version) |
| chat_id | str | Unique identifier for the conversation |

chat(message, actor)

Sends a message to the AiRembr system and stores it as an observation.

client.chat("Hello!", "person")

Parameters

  • message (str) – Message text
  • actor (str) – Either "person" or "agent"

remember(realtime)

Retrieves the stored conversation memory for the active chat session.

memory = client.remember(realtime='collect,store,destination')

Parameters

  • realtime (str) – Specifies which parts of the ingestion pipeline run in real time

Returns

  • A dictionary containing memories indexed by chat_id
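Since the return value is a plain dictionary keyed by chat_id, consuming it is ordinary Python. The sample payload below mirrors the Example Response section; `memory_for` is a hypothetical helper, not part of the SDK:

```python
# Sample return value shaped like the Example Response section
memory = {
    "chat-1": (
        "Summary of previous chat:\n"
        "Adam and the agent discussed LLM history...\n"
    )
}

def memory_for(memories: dict, chat_id: str) -> str:
    """Hypothetical helper: fetch one chat's memory block, empty string if absent."""
    return memories.get(chat_id, "")

context = memory_for(memory, "chat-1")  # ready to prepend to an LLM prompt
```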

⚡ Features

  • Automatic Context Compression – Keeps context within window limits while maintaining continuity
  • Multi-Chat Support – Each chat_id maintains its own memory scope
  • Entity Tracking – Identifies and merges entities automatically, evolving over time
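The compression idea in the first bullet can be illustrated in a few lines: when a message buffer exceeds its limit, older messages are folded into a summary string while recent ones stay verbatim. This is a standalone sketch, not the SDK's actual compression logic:

```python
def compress(messages: list, limit: int = 4):
    """Fold messages beyond `limit` into a naive summary, keeping the newest ones."""
    if len(messages) <= limit:
        return "", messages
    older, recent = messages[:-limit], messages[-limit:]
    summary = f"Summary of {len(older)} earlier messages: " + " | ".join(older)
    return summary, recent

summary, recent = compress([f"msg {i}" for i in range(6)], limit=4)
```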

🧩 Advanced Usage

Building Long-Term Memory Systems

To extend AiRembr beyond conversation-scoped memory:

  1. Build a Retrieval System

    • Query stored data across sessions
    • Use vector or symbolic search
  2. Implement an Embedding Pipeline

    • Process incoming facts and store embeddings in a database
  3. Design a Retrieval Strategy

    • Combine symbolic and semantic search methods

AiRembr provides the infrastructure foundation — you control how long-term memory and retrieval logic evolve.
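The three steps above can be prototyped without any vector database. The sketch below uses a toy bag-of-words "embedding" and cosine similarity purely to show the store → embed → retrieve loop; a real pipeline would swap in a proper embedding model and index:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; replace with a real embedding model
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class FactStore:
    """Minimal cross-session store: facts in, ranked matches out."""
    def __init__(self):
        self.facts = []

    def add(self, fact: str) -> None:
        # Step 2: embedding pipeline — embed each incoming fact on write
        self.facts.append((fact, embed(fact)))

    def search(self, query: str, k: int = 3):
        # Steps 1 & 3: retrieval across all stored facts, ranked by similarity
        q = embed(query)
        ranked = sorted(self.facts, key=lambda f: cosine(q, f[1]), reverse=True)
        return [fact for fact, _ in ranked[:k]]

store = FactStore()
store.add("Adam and the agent discussed LLM history")
store.add("The weather in Warsaw was sunny")
```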


Extensibility

AiRembr is designed to be your experimental memory foundry — a sandbox for developing different approaches to AI memory systems. Future versions will include built-in long-term retrieval APIs, but the current release empowers developers to build their own.


🗺️ Roadmap

Planned features for upcoming releases:

  • Built-in long-term memory retrieval across sessions
  • Internal reasoning and reflection mechanisms
  • Memory model training capabilities
  • Pre-built retrieval and embedding add-ons
  • Semantic and hybrid search integrations

📜 SDK License

MIT License © 2025 AiRembr
