
autogen-xache

AutoGen integration for the Xache Protocol: verifiable AI agent memory with cryptographic receipts, collective intelligence, and portable ERC-8004 reputation.

Installation

pip install autogen-xache

Note: the distribution is named autogen-xache, but the module is imported as xache_autogen.

Quick Start

Create an Agent with Xache Memory

from autogen import UserProxyAgent
from xache_autogen import XacheAssistantAgent

# Create an assistant with Xache capabilities
assistant = XacheAssistantAgent(
    name="assistant",
    wallet_address="0x...",
    private_key="0x...",
    llm_config={"model": "gpt-4"}
)

# Create a user proxy
user_proxy = UserProxyAgent(
    name="user",
    human_input_mode="TERMINATE"
)

# Start conversation
user_proxy.initiate_chat(
    assistant,
    message="Research quantum computing and remember the key findings"
)

Add Xache Functions to Any Agent

from autogen import AssistantAgent
from xache_autogen import xache_functions

# Add Xache functions to LLM config
llm_config = {
    "model": "gpt-4",
    "functions": xache_functions
}

agent = AssistantAgent(
    name="researcher",
    llm_config=llm_config
)
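Note that llm_config["functions"] only advertises function schemas to the model; the agent that executes the calls also needs the implementations registered as a function map. A minimal sketch, assuming the direct functions exported by xache_autogen (see "Direct Function Usage" below) — the exact registration call depends on your AutoGen version:

```python
from functools import partial

# Hypothetical wiring: bind wallet credentials to the direct functions
# (memory_store, memory_retrieve, ...) so the executing agent can run
# the tool calls the model emits. Names here are assumptions.
config = {"wallet_address": "0x...", "private_key": "0x..."}

def build_function_map(memory_store, memory_retrieve):
    """Map function-call names to credential-bound implementations."""
    return {
        "xache_memory_store": partial(memory_store, **config),
        "xache_memory_retrieve": partial(memory_retrieve, **config),
    }

# e.g.: user_proxy.register_function(
#     function_map=build_function_map(memory_store, memory_retrieve))
```

Binding the wallet credentials with partial keeps private keys out of the model-visible schemas while still letting every tool call authenticate.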

Features

Available Functions

The xache_functions list provides these capabilities:

Memory Functions

  • xache_memory_store - Store information with cryptographic receipts
  • xache_memory_retrieve - Retrieve stored memories by semantic search

Collective Intelligence Functions

  • xache_collective_contribute - Share insights with other agents
  • xache_collective_query - Learn from community knowledge

Knowledge Graph Functions

  • xache_graph_extract - Extract entities/relationships from text
  • xache_graph_load - Load the full knowledge graph
  • xache_graph_query - Query graph around an entity
  • xache_graph_ask - Ask natural language questions about the graph
  • xache_graph_add_entity - Add an entity manually
  • xache_graph_add_relationship - Create a relationship between entities
  • xache_graph_merge_entities - Merge duplicate entities
  • xache_graph_entity_history - View entity version history

Extraction Functions

  • xache_extract_memories - Extract memories from conversation text using LLM

Reputation Functions

  • xache_check_reputation - View reputation score and ERC-8004 status
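Each entry in xache_functions is an OpenAI-style function schema, which is why the list drops straight into llm_config["functions"]. A hedged sketch of what one entry presumably looks like — the descriptions and parameter names below are illustrative assumptions, not the package's actual schema:

```python
# Sketch of one xache_functions entry, assuming the standard
# OpenAI function-calling schema that AutoGen's llm_config expects.
memory_store_schema = {
    "name": "xache_memory_store",
    "description": "Store information with a cryptographic receipt.",
    "parameters": {
        "type": "object",
        "properties": {
            "content": {"type": "string", "description": "Text to remember"},
            "context": {"type": "string", "description": "Namespace, e.g. 'research'"},
            "tags": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["content"],
    },
}
```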

Agent Types

XacheMemoryAgent

Basic conversable agent with Xache capabilities:

from xache_autogen import XacheMemoryAgent

agent = XacheMemoryAgent(
    name="researcher",
    wallet_address="0x...",
    private_key="0x...",
    llm_config={"model": "gpt-4"}
)

XacheAssistantAgent

Extended AssistantAgent with Xache capabilities:

from xache_autogen import XacheAssistantAgent

assistant = XacheAssistantAgent(
    name="assistant",
    wallet_address="0x...",
    private_key="0x...",
    system_message="You are a helpful assistant with persistent memory.",
    llm_config={"model": "gpt-4"}
)

Conversation Memory

Store and retrieve conversation history:

from xache_autogen import XacheConversationMemory

memory = XacheConversationMemory(
    wallet_address="0x...",
    private_key="0x...",
    conversation_id="unique-session-id"
)

# Add messages
memory.add_message("user", "Hello!")
memory.add_message("assistant", "Hi there! How can I help?")

# Get history
history = memory.get_history()

# Store a summary
memory.store_summary("User greeted the assistant.")

# Search past conversations
results = memory.search("quantum computing")

# Format for prompt
context = memory.format_for_prompt(max_messages=5)
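The string returned by format_for_prompt() is plain text, so a common pattern is to fold it into the agent's system message before each turn. A minimal sketch — the helper below is not part of the package:

```python
def with_memory_context(system_message: str, context: str) -> str:
    """Append retrieved conversation context to a system message.

    `context` is assumed to be the string returned by
    XacheConversationMemory.format_for_prompt(); empty means no history.
    """
    if not context:
        return system_message
    return f"{system_message}\n\nRelevant prior conversation:\n{context}"

# e.g.: system = with_memory_context("You are a helpful assistant.", context)
```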

Multi-Agent Conversations

Xache works seamlessly with multi-agent setups:

from autogen import UserProxyAgent, GroupChat, GroupChatManager
from xache_autogen import XacheAssistantAgent

# Shared wallet = shared memory
config = {
    "wallet_address": "0x...",
    "private_key": "0x...",
}

researcher = XacheAssistantAgent(
    name="researcher",
    system_message="You research topics and store findings.",
    llm_config={"model": "gpt-4"},
    **config
)

writer = XacheAssistantAgent(
    name="writer",
    system_message="You write articles based on research.",
    llm_config={"model": "gpt-4"},
    **config
)

user_proxy = UserProxyAgent(name="user")

# Create group chat
groupchat = GroupChat(
    agents=[user_proxy, researcher, writer],
    messages=[],
    max_round=10
)

manager = GroupChatManager(
    groupchat=groupchat,
    llm_config={"model": "gpt-4"}
)

# Both agents share the same memory pool
user_proxy.initiate_chat(
    manager,
    message="Research AI safety and write an article"
)

Direct Function Usage

Use Xache functions directly outside agents:

from xache_autogen import (
    memory_store,
    memory_retrieve,
    collective_contribute,
    collective_query,
    check_reputation,
    graph_extract,
    graph_query,
    graph_ask,
    extract_memories,
)

config = {
    "wallet_address": "0x...",
    "private_key": "0x...",
}

# Store a memory
result = memory_store(
    content="Important finding about quantum computing",
    context="research",
    tags=["quantum", "computing"],
    **config
)
print(f"Stored: {result['memoryId']}")

# Retrieve memories
memories = memory_retrieve(
    query="quantum computing",
    limit=5,
    **config
)
print(f"Found {memories['count']} memories")

# Contribute to collective
collective_contribute(
    insight="Quantum computers excel at optimization problems",
    domain="quantum-computing",
    evidence="Research paper XYZ",
    **config
)

# Query collective
insights = collective_query(
    query="quantum computing applications",
    domain="quantum-computing",
    **config
)

# Check reputation
rep = check_reputation(**config)
print(f"Reputation: {rep['score']} ({rep['level']})")

# Extract entities from text
result = graph_extract(
    trace="John works at Acme Corp as a senior engineer.",
    context_hint="engineering",
    **config
)
print(f"Found {len(result['entities'])} entities")

# Ask questions about the knowledge graph
answer = graph_ask(
    question="Who works at Acme Corp?",
    **config
)
print(f"Answer: {answer['answer']}")

# Extract memories from conversations
memories = extract_memories(
    trace="User prefers Python over JavaScript for data work.",
    auto_store=True,
    **config
)
print(f"Extracted {memories['count']} memories")

Pricing

All operations are paid for via x402 micropayments, handled automatically:

Operation               Price
Memory Store            $0.002
Memory Retrieve         $0.003
Collective Contribute   $0.002
Collective Query        $0.011
Extraction (managed)    $0.011
Graph Operations        $0.002
Graph Ask (managed)     $0.011
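Because each call is a fixed micropayment, session costs are easy to estimate up front. A back-of-envelope helper using the prices above (illustrative only; check the live price list before budgeting):

```python
# Prices (USD) from the table above.
PRICES = {
    "memory_store": 0.002,
    "memory_retrieve": 0.003,
    "collective_contribute": 0.002,
    "collective_query": 0.011,
    "extraction": 0.011,
    "graph_operation": 0.002,
    "graph_ask": 0.011,
}

def session_cost(counts: dict) -> float:
    """Total x402 spend for a dict of {operation: call count}."""
    return sum(PRICES[op] * n for op, n in counts.items())

# 10 stores + 20 retrieves + 5 collective queries:
# 10*0.002 + 20*0.003 + 5*0.011 = 0.135
cost = session_cost({"memory_store": 10, "memory_retrieve": 20,
                     "collective_query": 5})
```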

ERC-8004 Portable Reputation

Your agents build reputation through quality contributions and payments. Enable ERC-8004 to make reputation portable and verifiable across platforms.

License

MIT
