
autogen-vertexai-memory

VertexAI Memory integration for Autogen agents. Store and retrieve agent memories using Google Cloud's VertexAI Memory service with semantic search capabilities.

Features

  • 🧠 Persistent Memory Storage - Store agent memories in Google Cloud VertexAI
  • 🔍 Semantic Search - Find relevant memories using natural language queries
  • 🔄 Automatic Context Updates - Seamlessly inject memories into chat contexts
  • ⚡ Async/Await Support - Full async API compatible with Autogen's runtime
  • 🎯 User-Scoped Isolation - Multi-tenant memory management
  • 🛠️ Tool Integration - Ready-to-use tools for agent workflows

Installation

pip install autogen-vertexai-memory

Prerequisites

  1. Google Cloud Project with VertexAI API enabled
  2. Authentication configured (Application Default Credentials)
  3. VertexAI Memory Resource created in your project
# Set up authentication
gcloud auth application-default login

# Enable VertexAI API
gcloud services enable aiplatform.googleapis.com

Quick Start

Basic Memory Usage

from autogen_vertexai_memory import VertexaiMemory, VertexaiMemoryConfig
from autogen_core.memory import MemoryContent, MemoryMimeType

# Configure memory
config = VertexaiMemoryConfig(
    api_resource_name="projects/my-project/locations/us-central1/......./",
    project_id="my-project",
    location="us-central1",
    user_id="user123"
)

memory = VertexaiMemory(config=config)

# Store a memory
await memory.add(
    content=MemoryContent(
        content="User prefers concise responses and uses Python",
        mime_type=MemoryMimeType.TEXT
    )
)

# Semantic search for relevant memories
results = await memory.query(query="programming preferences")
for mem in results.results:
    print(mem.content)
# Output: User prefers concise responses and uses Python

# Retrieve all memories
all_memories = await memory.query(query="")
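
To see the add/query contract in isolation, without a GCP project, here is a minimal in-memory stand-in. It is illustrative only: the class names are hypothetical and real semantic search is approximated with naive keyword-overlap scoring.

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class FakeMemoryContent:
    """Hypothetical stand-in for MemoryContent: just the text payload."""
    content: str


@dataclass
class FakeMemory:
    """Hypothetical stand-in for VertexaiMemory's add/query contract."""
    _store: list = field(default_factory=list)

    async def add(self, content: FakeMemoryContent) -> None:
        self._store.append(content)

    async def query(self, query: str = "") -> list:
        # An empty query returns everything, matching the README convention.
        if not query:
            return list(self._store)
        # Crude relevance: count shared lowercase words with the query.
        words = set(query.lower().split())
        scored = [
            (len(words & set(m.content.lower().split())), m)
            for m in self._store
        ]
        # Keep only memories sharing at least one word, best match first.
        return [m for score, m in sorted(scored, key=lambda s: -s[0]) if score > 0]


async def main() -> None:
    memory = FakeMemory()
    await memory.add(FakeMemoryContent("User prefers concise responses and uses Python"))
    hits = await memory.query("python preferences")
    print([m.content for m in hits])


asyncio.run(main())
```

The real service scores by semantic similarity rather than word overlap, but the calling pattern (async `add`, async `query`, empty query for "all") is the same.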

Using with Autogen Agents

from autogen_core.model_context import UnboundedChatCompletionContext
from autogen_core.models import UserMessage

# Create chat context (ChatCompletionContext itself is abstract, so use a
# concrete implementation such as UnboundedChatCompletionContext)
context = UnboundedChatCompletionContext()

# Add user message (UserMessage requires a source)
await context.add_message(
    UserMessage(content="What programming language should I use?", source="user")
)

# Inject relevant memories into context
result = await memory.update_context(context)
print(f"Added {len(result.memories.results)} memories to context")

# Now the agent has access to stored preferences
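
Conceptually, `update_context` retrieves relevant memories and folds them into the context as a single system message. The helper below is a hypothetical sketch of that formatting step; the exact wording VertexaiMemory uses is an implementation detail.

```python
def format_memories_as_system_message(memories: list[str]) -> str:
    """Render retrieved memory strings as one system-message body
    (hypothetical helper, illustrating the injection step only)."""
    lines = ["Relevant memories about the user:"]
    lines += [f"- {m}" for m in memories]
    return "\n".join(lines)


msg = format_memories_as_system_message(
    ["User prefers concise responses and uses Python"]
)
print(msg)
```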

Environment Variables

You can also configure using environment variables:

export VERTEX_PROJECT_ID="my-project"
export VERTEX_LOCATION="us-central1"
export VERTEX_USER_ID="user123"
export VERTEX_API_RESOURCE_NAME="projects/my-project/locations/us-central1/memories/agent-memory"
# Auto-loads from environment
config = VertexaiMemoryConfig()
memory = VertexaiMemory(config=config)

Memory Tools for Agents

Integrate memory capabilities directly into your Autogen agents:

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_vertexai_memory.tools import (
    SearchVertexaiMemoryTool,
    UpdateVertexaiMemoryTool,
    VertexaiMemoryToolConfig
)

# Configure memory tools
memory_config = VertexaiMemoryToolConfig(
    project_id="my-project",
    location="us-central1",
    user_id="user123",
    api_resource_name="projects/my-project/locations/us-central1/memories/agent-memory"
)

# Create memory tools
search_tool = SearchVertexaiMemoryTool(config=memory_config)
update_tool = UpdateVertexaiMemoryTool(config=memory_config)

# Create agent with memory tools
agent = AssistantAgent(
    name="memory_assistant",
    model_client=OpenAIChatCompletionClient(model="gpt-4"),
    tools=[search_tool, update_tool],
    system_message="""You are a helpful assistant with memory capabilities.
    
    Use search_vertexai_memory_tool to retrieve relevant information about the user.
    Use update_vertexai_memory_tool to store important facts you learn during conversations.
    """
)

# Now the agent can search and store memories automatically!
# Example conversation:
# User: "I prefer Python for data analysis"
# Agent uses update_vertexai_memory_tool to store this preference
# 
# Later...
# User: "What language should I use for my data project?"
# Agent uses search_vertexai_memory_tool, retrieves the preference, and responds accordingly

API Reference

VertexaiMemoryConfig

Configuration model for VertexAI Memory.

VertexaiMemoryConfig(
    api_resource_name: str,  # Full resource name, e.g. "projects/{project}/locations/{location}/memories/{memory}"
    project_id: str,         # Google Cloud project ID
    location: str,           # GCP region (e.g., "us-central1", "europe-west1")
    user_id: str             # Unique user identifier for memory isolation
)

Environment Variables:

  • VERTEX_API_RESOURCE_NAME
  • VERTEX_PROJECT_ID
  • VERTEX_LOCATION
  • VERTEX_USER_ID

VertexaiMemory

Main memory interface implementing Autogen's Memory protocol.

VertexaiMemory(
    config: Optional[VertexaiMemoryConfig] = None,
    client: Optional[Client] = None
)

Methods:

add(content, cancellation_token=None)

Store a new memory.

await memory.add(
    content=MemoryContent(
        content="Important fact to remember",
        mime_type=MemoryMimeType.TEXT
    )
)

query(query="", cancellation_token=None, **kwargs)

Search memories or retrieve all.

# Semantic search (top 3 results)
results = await memory.query(query="user preferences")

# Get all memories
all_results = await memory.query(query="")

Returns: MemoryQueryResult with list of MemoryContent objects

update_context(model_context)

Inject relevant memories into the chat context as a system message.

context = UnboundedChatCompletionContext()  # concrete ChatCompletionContext implementation
result = await memory.update_context(context)
# Context now includes relevant memories

Returns: UpdateContextResult with retrieved memories

clear()

โš ๏ธ Permanently delete all memories (irreversible).

await memory.clear()  # Use with caution!

close()

Clean up resources (currently a no-op, provided for protocol compliance).

await memory.close()

Memory Tools

VertexaiMemoryToolConfig

Shared configuration for memory tools.

VertexaiMemoryToolConfig(
    project_id: str,
    location: str,
    user_id: str,
    api_resource_name: str
)

SearchVertexaiMemoryTool

Tool for semantic memory search. Automatically used by agents to retrieve relevant memories.

SearchVertexaiMemoryTool(config: Optional[VertexaiMemoryToolConfig] = None, **kwargs)

Tool Name: search_vertexai_memory_tool
Description: Perform a search with given parameters using vertexai memory bank
Parameters:

  • query (str): Semantic search query to retrieve information about user
  • top_k (int, default=5): Maximum number of relevant memories to retrieve

Returns: List of matching memory strings
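
The effect of `top_k` can be sketched in plain Python: given similarity-scored candidates, keep at most `top_k` of the best matches. The function below is illustrative only, not part of the tool's implementation.

```python
def take_top_k(scored: list[tuple[float, str]], top_k: int = 5) -> list[str]:
    """Return the texts of the top_k highest-scored candidates."""
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    return [text for _, text in ranked[:top_k]]


candidates = [(0.2, "likes tea"), (0.9, "uses Python"), (0.5, "prefers brevity")]
print(take_top_k(candidates, top_k=2))  # ['uses Python', 'prefers brevity']
```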

UpdateVertexaiMemoryTool

Tool for storing new memories. Automatically used by agents to save important information.

UpdateVertexaiMemoryTool(config: Optional[VertexaiMemoryToolConfig] = None, **kwargs)

Tool Name: update_vertexai_memory_tool
Description: Store a new memory fact in the VertexAI memory bank for the user
Parameters:

  • content (str): The memory content to store as a fact in the memory bank

Returns: Success status and message

Advanced Examples

Custom Client Configuration

from vertexai import Client

# Create custom client with specific settings
client = Client(
    project="my-project",
    location="us-central1"
)

memory = VertexaiMemory(config=config, client=client)

Async Context Manager

async with VertexaiMemory(config=config) as memory:
    await memory.add(content)
    results = await memory.query("query")
# Automatic cleanup
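
For reference, the async-context-manager behavior above corresponds to the standard `__aenter__`/`__aexit__` protocol, where exit calls `close()`. The toy class below sketches that protocol in isolation; it is not the package's actual implementation.

```python
import asyncio


class ClosableMemory:
    """Toy illustration of the async context-manager protocol."""

    def __init__(self) -> None:
        self.closed = False

    async def close(self) -> None:
        self.closed = True

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # Cleanup runs whether or not the block raised.
        await self.close()


async def demo() -> "ClosableMemory":
    async with ClosableMemory() as m:
        assert not m.closed  # still open inside the block
    return m


m = asyncio.run(demo())
print(m.closed)  # True
```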

Multi-User Isolation

# User 1's memories
user1_config = VertexaiMemoryConfig(
    api_resource_name="projects/my-project/locations/us-central1/......................",
    project_id="my-project",
    location="us-central1",
    user_id="user1"
)
user1_memory = VertexaiMemory(config=user1_config)

# User 2's memories (isolated from User 1)
user2_config = VertexaiMemoryConfig(
    api_resource_name="projects/my-project/locations/us-central1/........................",
    project_id="my-project",
    location="us-central1",
    user_id="user2"
)
user2_memory = VertexaiMemory(config=user2_config)
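
The isolation guarantee boils down to scoping every read and write by `user_id`. The toy store below illustrates that idea with a plain dictionary of per-user buckets (illustrative only; the real service enforces this server-side):

```python
class ScopedStore:
    """Toy per-user memory store: one bucket per user_id."""

    def __init__(self) -> None:
        self._buckets: dict[str, list[str]] = {}

    def add(self, user_id: str, fact: str) -> None:
        self._buckets.setdefault(user_id, []).append(fact)

    def all_for(self, user_id: str) -> list[str]:
        # A user only ever sees their own bucket.
        return list(self._buckets.get(user_id, []))


store = ScopedStore()
store.add("user1", "prefers Python")
store.add("user2", "prefers Go")
print(store.all_for("user1"))  # ['prefers Python']
```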

Sharing Config Across Tools

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_vertexai_memory.tools import (
    SearchVertexaiMemoryTool,
    UpdateVertexaiMemoryTool,
    VertexaiMemoryToolConfig
)

# Create config once
config = VertexaiMemoryToolConfig(
    project_id="my-project",
    location="us-central1",
    user_id="user123",
    api_resource_name="projects/my-project/locations/us-c........................"
)

# Share across multiple tools
search_tool = SearchVertexaiMemoryTool(config=config)
update_tool = UpdateVertexaiMemoryTool(config=config)

# Use in multiple agents
agent1 = AssistantAgent(
    name="agent1",
    model_client=OpenAIChatCompletionClient(model="gpt-4"),
    tools=[search_tool, update_tool]
)

agent2 = AssistantAgent(
    name="agent2",
    model_client=OpenAIChatCompletionClient(model="gpt-4"),
    tools=[search_tool]  # This agent can only search, not update
)

# Both agents use the same VertexAI client and configuration

Development

Setup

# Clone repository
git clone https://github.com/thelaycon/autogen-vertexai-memory.git
cd autogen-vertexai-memory

# Install dependencies with Poetry
poetry install

# Run tests
poetry run pytest

# Run tests with coverage
poetry run pytest --cov=autogen_vertexai_memory --cov-report=html

# Type checking
poetry run mypy src/autogen_vertexai_memory

# Linting
poetry run ruff check src/

Project Structure

autogen-vertexai-memory/
├── src/
│   └── autogen_vertexai_memory/
│       ├── __init__.py
│       ├── memory/
│       │   ├── __init__.py
│       │   └── _vertexai_memory.py    # Main memory implementation
│       └── tools/
│           ├── __init__.py
│           └── _vertexai_memory_tools.py  # Tool implementations
├── tests/
│   ├── conftest.py
│   └── test_vertexai_memory.py
├── pyproject.toml
└── README.md

Running Tests

The test suite uses mocking to avoid real VertexAI API calls:

# Run all tests
poetry run pytest

# Run with verbose output
poetry run pytest -v

# Run specific test class
poetry run pytest tests/test_vertexai_memory.py::TestVertexaiMemoryConfig

# Run with coverage report
poetry run pytest --cov=autogen_vertexai_memory --cov-report=term-missing

Troubleshooting

Authentication Issues

# Verify authentication
gcloud auth application-default print-access-token

# Set explicit credentials
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"

Empty Query Results

# Check if memories exist
all_memories = await memory.query(query="")
print(f"Total memories: {len(all_memories.results)}")

# Verify user_id matches
print(f"Using user_id: {memory.user_id}")

Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes with tests
  4. Run tests (poetry run pytest)
  5. Commit your changes (git commit -m 'Add amazing feature')
  6. Push to the branch (git push origin feature/amazing-feature)
  7. Open a Pull Request

Development Guidelines

  • Write tests for new features
  • Follow existing code style
  • Update documentation for API changes
  • Ensure all tests pass before submitting PR

License

MIT License - see LICENSE file for details.


Made with โค๏ธ for the Autogen community
