
MemWire

Enterprise-grade, self-hosted AI memory infrastructure layer. Deploy persistent AI memory on-premise or in any cloud with your own LLM and database.

What is MemWire?

MemWire is an open-source, enterprise-ready AI memory infrastructure layer. It gives your AI applications persistent, auditable memory: structured, updatable facts and fast semantic retrieval across conversations and knowledge, built on a graph-based memory model.

  • Fully customizable — adapt schemas, memory types, and pipelines to your use case
  • Self-hosted — run entirely on your local machine, on-premise or in your own cloud
  • Multi-tenant — isolate applications, users, and workspaces securely
  • Bring your own database — PostgreSQL (pgvector), Qdrant, Pinecone, ChromaDB, Weaviate, or your preferred stack
  • Bring your own LLM — OpenAI, Anthropic, Gemini, Ollama, or any provider
  • Deploy anywhere — edge, private cloud, public cloud, air-gapped environments
  • Knowledge ingestion — ingest documents (PDF, Excel, CSV, etc.) alongside conversation memory; recalled together at query time
  • Auditable — every memory is traceable, categorized (fact, preference, instruction, event, entity), and inspectable
  • Feedback loop — reinforce memory paths that led to good responses; unused edges decay over time
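
The feedback-loop bullet above can be pictured with a small, self-contained sketch. This is a conceptual illustration only, not MemWire's actual algorithm: the decay factor, reinforcement step, and edge naming are made-up assumptions.

```python
# Conceptual sketch of reinforcement + decay on memory-graph edges.
# NOT MemWire's real implementation -- the constants and update rule
# here are illustrative assumptions.

DECAY = 0.9        # per-period multiplicative decay for unused edges
REINFORCE = 0.5    # additive boost for edges used in a good response

def step(strengths: dict[str, float], used: set[str]) -> dict[str, float]:
    """Reinforce edges that contributed to a good answer; decay the rest."""
    return {
        edge: (s + REINFORCE) if edge in used else s * DECAY
        for edge, s in strengths.items()
    }

edges = {"alice->dark_mode": 1.0, "alice->likes_tea": 1.0}
edges = step(edges, used={"alice->dark_mode"})
print(edges)  # the used edge grows stronger, the unused edge fades
```

Over repeated steps, edges that keep paying off dominate recall while stale ones fade toward zero, which is the intuition behind the feedback loop.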

Quickstart

Python SDK

Install

pip install memwire

Embedded mode

Data is stored on disk in ./memwire_data/.

from memwire import MemWire, MemWireConfig

config = MemWireConfig(
    qdrant_path="./memwire_data",  # local vector store
    qdrant_collection_prefix="app_",
)
memory = MemWire(config=config)

USER_ID = "alice"

# Add messages to memory
records = memory.add(
    user_id=USER_ID,
    messages=[{"role": "user", "content": "I prefer dark mode and short answers."}],
)
for r in records:
    print(f"[stored] ({r.category}) {r.content}")

# Recall relevant context for a query
result = memory.recall("How should I format my answers?", user_id=USER_ID)
if result.formatted:
    print(result.formatted)
    # → "alice prefers dark mode and short answers."

# Inject recalled context into your LLM prompt
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
]
if result.formatted:
    messages.append(
        {"role": "system", "content": f"Memory context:\n{result.formatted}"}
    )
messages.append({"role": "user", "content": "How should I format my answers?"})

# After you get the LLM response, reinforce the memory paths that were used
memory.feedback(assistant_response="<assistant response here>", user_id=USER_ID)

# Search memories by keyword / semantic similarity
hits = memory.search("dark mode", user_id=USER_ID, top_k=5)
for record, score in hits:
    print(f"[{score:.2f}] ({record.category}) {record.content}")

# Inspect stats
stats = memory.get_stats(user_id=USER_ID)
print(stats)  # {"memories": 1, "nodes": ..., "edges": ..., "knowledge_bases": 0}

# Always close to flush background writes
memory.close()

With a local Qdrant server

docker run -p 6333:6333 qdrant/qdrant

config = MemWireConfig(
    qdrant_url="http://localhost:6333",
    qdrant_collection_prefix="app_",
)
memory = MemWire(config=config)

REST API

The api/ folder provides a self-hosted REST API backed by FastAPI and Qdrant.

Start the server

cd api
docker compose up --build   # Qdrant + MemWire API on :8000

Store memory

curl -X POST http://localhost:8000/v1/memories \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "alice",
    "app_id": "app_a",
    "workspace_id": "team_1",
    "messages": [
      { "role": "user", "content": "I prefer dark mode and short answers." }
    ]
  }'
[
  {
    "memory_id": "mem_3f7a1c2d9e4b",
    "user_id": "alice",
    "content": "I prefer dark mode and short answers.",
    "role": "user",
    "category": "preference",
    "strength": 1.0
  }
]

Recall context

curl -X POST http://localhost:8000/v1/memories/recall \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "alice",
    "app_id": "app_a",
    "workspace_id": "team_1",
    "query": "How should I format my answers?"
  }'
{
  "query": "How should I format my answers?",
  "supporting": [{ "tokens": ["dark", "mode"], "score": 0.87, "memories": [...] }],
  "conflicting": [],
  "knowledge": [],
  "formatted": "alice prefers dark mode and short answers.",
  "has_conflicts": false
}

Search memories

curl -X POST http://localhost:8000/v1/memories/search \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "alice",
    "app_id": "app_a",
    "workspace_id": "team_1",
    "query": "dark mode",
    "limit": 10
  }'
[
  {
    "memory": {
      "memory_id": "mem_3f7a1c2d9e4b",
      "content": "I prefer dark mode and short answers.",
      "category": "preference"
    },
    "score": 0.94
  }
]

See API Reference for configuration options and local development setup.

Customization

All MemWire behaviour is controlled through MemWireConfig. Choose your vector store, embedding model, and LLM provider, then tune recall and graph settings to fit your use case. Learn more.
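
As a sketch, a fuller configuration might look like the following. Only `qdrant_path` and `qdrant_collection_prefix` appear in the Quickstart above; the commented-out parameter names are hypothetical placeholders, so check the configuration reference for the actual options.

```python
from memwire import MemWire, MemWireConfig

# Hypothetical sketch -- parameters other than qdrant_path and
# qdrant_collection_prefix are placeholders, not confirmed option names.
config = MemWireConfig(
    qdrant_path="./memwire_data",
    qdrant_collection_prefix="app_",
    # embedding_model="...",   # swap in your embedding model
    # llm_provider="...",      # swap in your LLM provider
    # recall_top_k=5,          # tune recall breadth
)
memory = MemWire(config=config)
```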

Supported databases

Storage   Type          Status        Notes
Qdrant    Vector store  ✅ Supported  Embedded, local server, or Qdrant Cloud

Supported LLMs

MemWire is model-agnostic. Memory operations like storage, recall, and search work with any language model or provider.

Provider                                  Example
OpenAI                                    examples/openai/
Azure OpenAI                              examples/azure-openai/
Anthropic, Gemini, Ollama, or any other   Pass the recalled context into any LLM

Roadmap

See ROADMAP.md for the full plan.

Contributing

PRs and issues are welcome. See CONTRIBUTING.md and GOVERNANCE.md.

License

Apache License 2.0
