
LangGraph BaseStore for Cloudflare Vectorize, for use with LangMem

Project description

langmem-cloudflare-vectorize

Installation

pip install -U langmem-cloudflare-vectorize langgraph langchain-cloudflare

Usage

This package provides both synchronous and asynchronous interfaces for semantic vector search backed by Cloudflare Vectorize. Use it when you want langmem to run against a custom LangGraph store.

from langmem_cloudflare_vectorize import CloudflareVectorizeLangmemStore
from langchain_cloudflare.chat_models import ChatCloudflareWorkersAI
from langmem import create_manage_memory_tool, create_search_memory_tool
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool
# Cloudflare credentials
account_id = "your-cloudflare-account-id"
vectorize_api_token = "your-vectorize-api-token"
workers_ai_token = "your-workers-ai-api-token"

# Create the langmem vectorize store
agent_store = CloudflareVectorizeLangmemStore.with_cloudflare_embeddings(
    account_id=account_id,
    index_name="cool-vectorize-index-name",
    vectorize_api_token=vectorize_api_token,
    workers_ai_token=workers_ai_token,
    embedding_model="@cf/baai/bge-base-en-v1.5",
    dimensions=768
)
# Create the llm
cloudflare_llm = ChatCloudflareWorkersAI(
    cloudflare_account_id=account_id,
    cloudflare_api_token=workers_ai_token,
    model="@cf/meta/llama-3.3-70b-instruct-fp8-fast"
)
# Create memory tools
manage_memory = create_manage_memory_tool(
    namespace=("memories",)
)
search_memory = create_search_memory_tool(
    namespace=("memories",)
)

@tool
def get_weather(location: str):
    """Get the current weather for a location."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy in San Francisco."
    elif location.lower() in ["ny", "new york"]:
        return "It's 45 degrees and sunny in New York."
    else:
        return f"It's 75 degrees and partly cloudy in {location}."


# Create the agent; include get_weather so the weather test below can use it
agent = create_react_agent(
    cloudflare_llm,
    tools=[
        manage_memory,
        search_memory,
        get_weather,
    ],
    store=agent_store,  # This is how LangMem gets access to your store
)

config = {"configurable": {"thread_id": "test_session_1"}}

# Test 1: Store information about the user
user_msg_1 = ("Please remember this important information about me: My name is Sarah, "
              "I'm allergic to peanuts, and I love Italian food, especially pasta carbonara. "
              "Please use your manage_memory tool to store this.")
response1 = agent.invoke({"messages": [{"role": "user", "content": user_msg_1}]}, config)
print(f"User: {user_msg_1}")
print(f"Agent: {response1['messages'][-1].content}")

# Test 2: Try to recall the stored information
print("\nCONVERSATION 2: THE MEMORY TEST")
print("-" * 40)
print("Testing if the agent remembers the stored information...")

user_msg_2 = ("What do you remember about my dietary restrictions and food preferences? "
              "Please search your memory using the search_memory tool.")
response2 = agent.invoke({"messages": [{"role": "user", "content": user_msg_2}]}, config)
print(f"User: {user_msg_2}")
print(f"Agent: {response2['messages'][-1].content}")

# Test 3: Different topic, then back to memory
print("\nCONVERSATION 3: DIFFERENT TOPIC")
print("-" * 40)

response3 = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather like in San Francisco?"}]},
    config
)
print("User: What's the weather like in San Francisco?")
print(f"Agent: {response3['messages'][-1].content}")

response4 = agent.invoke(
    {"messages": [{"role": "user", "content": "What is my name and what am I allergic to?"}]},
    config
)
print("User: What is my name and what am I allergic to?")
print(f"Agent: {response4['messages'][-1].content}")

Release Notes

v0.1.2 (2025-09-25)

  • Added support for environment variables
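The release notes don't state which environment variable names the package reads, so the sketch below uses hypothetical, conventionally named variables as an assumption; check the package source for the exact names before relying on them.

```python
import os

# Hypothetical variable names -- verify against the package source.
os.environ.setdefault("CLOUDFLARE_ACCOUNT_ID", "your-cloudflare-account-id")
os.environ.setdefault("CLOUDFLARE_VECTORIZE_API_TOKEN", "your-vectorize-api-token")
os.environ.setdefault("CLOUDFLARE_WORKERS_AI_TOKEN", "your-workers-ai-api-token")

# Values set this way would replace the hard-coded credentials shown in Usage.
print(os.environ["CLOUDFLARE_ACCOUNT_ID"])
```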

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

langmem_cloudflare_vectorize-0.1.0.tar.gz (8.6 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

langmem_cloudflare_vectorize-0.1.0-py3-none-any.whl (7.9 kB)

Uploaded Python 3

File details

Details for the file langmem_cloudflare_vectorize-0.1.0.tar.gz.

File metadata

File hashes

Hashes for langmem_cloudflare_vectorize-0.1.0.tar.gz
Algorithm Hash digest
SHA256 757b8f82b4972b4a9f8dbf3c85567237711d406003eab62877621400ab8698f4
MD5 abeaab80f85237f79082df6dd7d048c0
BLAKE2b-256 e065c51d011874e9812df2b2f2285f3bf13be21c1ead57e007184f847ab6232b


File details

Details for the file langmem_cloudflare_vectorize-0.1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for langmem_cloudflare_vectorize-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 f8719e894d5a0e494805f3e3137d7c99a75f47b8a2669421c1db88374f916777
MD5 bc14a4e1c8bef0b0ead7f74ccf023fcd
BLAKE2b-256 12ca6e78fec8a9816d1cf9e59dd22921f0355cbe324721a5a72ffd3e8a81998b

