MemWire

Enterprise-grade memory infrastructure for AI agents

Enterprise-grade, self-hosted AI memory infrastructure layer. Deploy persistent AI memory on-premise or in any cloud with your own LLM and database.


What is MemWire?

MemWire is an open-source, enterprise-ready AI memory infrastructure layer. It gives your AI applications persistent, auditable memory: structured, updatable facts and fast semantic retrieval across conversations and knowledge, backed by a graph-based memory model.

  • Fully customizable — adapt schemas, memory types, and pipelines to your use case
  • Self-hosted — run entirely on your local machine, on-premise or in your own cloud
  • Multi-tenant — isolate applications, users, and workspaces securely
  • Bring your own database — PostgreSQL (pgvector), Qdrant, Pinecone, ChromaDB, Weaviate, or your preferred stack
  • Bring your own LLM — OpenAI, Anthropic, Gemini, Ollama, or any provider
  • Deploy anywhere — edge, private cloud, public cloud, air-gapped environments
  • Knowledge ingestion — ingest documents (PDF, Excel, CSV, etc.) alongside conversation memory; recalled together at query time
  • Auditable — every memory is traceable, categorized (fact, preference, instruction, event, entity), and inspectable
  • Feedback loop — reinforce memory paths that led to good responses; unused edges decay over time
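The reinforce/decay behaviour in the last bullet can be pictured with a toy model. This is only an illustrative sketch with hypothetical names and rates (`Edge`, `boost`, `half_life_days`); MemWire's actual internals may differ:

```python
class Edge:
    """Toy memory-graph edge with a reinforceable strength."""

    def __init__(self, strength=1.0):
        self.strength = strength

    def reinforce(self, boost=0.5):
        # Positive feedback strengthens the path, capped at 1.0.
        self.strength = min(1.0, self.strength + boost)

    def decay(self, half_life_days=30, elapsed_days=1):
        # Unused edges lose strength exponentially over time.
        self.strength *= 0.5 ** (elapsed_days / half_life_days)

edge = Edge(strength=0.8)
edge.decay(half_life_days=30, elapsed_days=30)  # one half-life: 0.8 -> 0.4
edge.reinforce(boost=0.5)                       # good feedback: 0.4 -> 0.9
print(round(edge.strength, 2))                  # → 0.9
```

The net effect is that memory paths that keep producing good responses stay strong, while stale ones fade out of recall.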

Quickstart

Python SDK

Install

pip install memwire

Embedded mode

Data is stored on disk in ./memwire_data/.

from memwire import MemWire, MemWireConfig

config = MemWireConfig(
    qdrant_path="./memwire_data",  # local vector store
    qdrant_collection_prefix="app_",
)
memory = MemWire(config=config)

USER_ID = "alice"

# Add messages to memory
records = memory.add(
    user_id=USER_ID,
    messages=[{"role": "user", "content": "I prefer dark mode and short answers."}],
)
for r in records:
    print(f"[stored] ({r.category}) {r.content}")

# Recall relevant context for a query
result = memory.recall("How should I format my answers?", user_id=USER_ID)
if result.formatted:
    print(result.formatted)
    # → "alice prefers dark mode and short answers."

# Inject recalled context into your LLM prompt
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
]
if result.formatted:
    messages.append(
        {"role": "system", "content": f"Memory context:\n{result.formatted}"}
    )
messages.append({"role": "user", "content": "How should I format my answers?"})

# After you get the LLM response, reinforce the memory paths that were used
memory.feedback(assistant_response="<assistant response here>", user_id=USER_ID)

# Search memories by keyword / semantic similarity
hits = memory.search("dark mode", user_id=USER_ID, top_k=5)
for record, score in hits:
    print(f"[{score:.2f}] ({record.category}) {record.content}")

# Inspect stats
stats = memory.get_stats(user_id=USER_ID)
print(stats)  # {"memories": 1, "nodes": ..., "edges": ..., "knowledge_bases": 0}

# Always close to flush background writes
memory.close()

With a local Qdrant server

docker run -p 6333:6333 qdrant/qdrant

config = MemWireConfig(
    qdrant_url="http://localhost:6333",
    qdrant_collection_prefix="app_",
)
memory = MemWire(config=config)

REST API

The api/ folder provides a self-hosted REST API backed by FastAPI and Qdrant.

Start the server

cd api
docker compose up --build   # Qdrant + MemWire API on :8000

Store memory

curl -X POST http://localhost:8000/v1/memories \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "alice",
    "app_id": "app_a",
    "workspace_id": "team_1",
    "messages": [
      { "role": "user", "content": "I prefer dark mode and short answers." }
    ]
  }'

Example response:

[
  {
    "memory_id": "mem_3f7a1c2d9e4b",
    "user_id": "alice",
    "content": "I prefer dark mode and short answers.",
    "role": "user",
    "category": "preference",
    "strength": 1.0
  }
]

Recall context

curl -X POST http://localhost:8000/v1/memories/recall \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "alice",
    "app_id": "app_a",
    "workspace_id": "team_1",
    "query": "How should I format my answers?"
  }'

Example response:

{
  "query": "How should I format my answers?",
  "supporting": [{ "tokens": ["dark", "mode"], "score": 0.87, "memories": [...] }],
  "conflicting": [],
  "knowledge": [],
  "formatted": "alice prefers dark mode and short answers.",
  "has_conflicts": false
}
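As a sketch, the recall call above can be wrapped in a small stdlib-only Python client. The helper names (`recall`, `inject_context`) are illustrative, not part of MemWire; the payload and response shapes follow the JSON shown above:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # MemWire API from `docker compose up`

def recall(user_id, app_id, workspace_id, query, base_url=BASE_URL):
    """POST /v1/memories/recall and return the parsed JSON response."""
    payload = {
        "user_id": user_id,
        "app_id": app_id,
        "workspace_id": workspace_id,
        "query": query,
    }
    req = urllib.request.Request(
        f"{base_url}/v1/memories/recall",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def inject_context(messages, recall_response):
    """Prepend the recalled context as a system message, if any was found."""
    formatted = recall_response.get("formatted")
    if formatted:
        context = {"role": "system", "content": f"Memory context:\n{formatted}"}
        return [context] + messages
    return messages
```

`inject_context` mirrors the SDK quickstart: the `formatted` string is all your LLM needs, so the same pattern works regardless of which model sits downstream.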

Search memories

curl -X POST http://localhost:8000/v1/memories/search \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "alice",
    "app_id": "app_a",
    "workspace_id": "team_1",
    "query": "dark mode",
    "limit": 10
  }'

Example response:

[
  {
    "memory": {
      "memory_id": "mem_3f7a1c2d9e4b",
      "content": "I prefer dark mode and short answers.",
      "category": "preference"
    },
    "score": 0.94
  }
]

See API Reference for configuration options and local development setup.

Customization

All MemWire behaviour is controlled through MemWireConfig. Choose your vector store, embedding model, and LLM provider, then tune recall and graph settings to fit your use case. Learn more.

Supported databases

Storage | Type         | Status      | Notes
--------|--------------|-------------|----------------------------------------
Qdrant  | Vector store | ✅ Supported | Embedded, local server, or Qdrant Cloud

Supported LLMs

MemWire is model-agnostic. Memory operations like storage, recall, and search work with any language model or provider.

Provider                             | Example
-------------------------------------|---------------------------------------
OpenAI                               | examples/openai/
Azure OpenAI                         | examples/azure-openai/
Anthropic, Gemini, Ollama, or others | Pass the recalled context into any LLM
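Because only the recalled text is handed to the model, adapting to a provider's message format is a few lines. A hedged sketch (SDK calls omitted; the split reflects that OpenAI-style APIs accept system messages inline, while Anthropic's Messages API takes a separate `system` parameter):

```python
def to_openai_messages(context, user_msg):
    """OpenAI-style: memory context goes in as an extra system message."""
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    if context:
        messages.append({"role": "system", "content": f"Memory context:\n{context}"})
    messages.append({"role": "user", "content": user_msg})
    return messages

def to_anthropic_kwargs(context, user_msg):
    """Anthropic-style: the system prompt is a separate top-level parameter."""
    system = "You are a helpful assistant."
    if context:
        system += f"\n\nMemory context:\n{context}"
    return {
        "system": system,
        "messages": [{"role": "user", "content": user_msg}],
    }
```

Either shape can then be passed to the provider's client of your choice; MemWire itself never sees the model.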

Roadmap

See ROADMAP.md for the full plan.

Contributing

PRs and issues are welcome. See CONTRIBUTING.md and GOVERNANCE.md.

License

Apache License 2.0
