
The centralized knowledge hub for AI agents. Hybrid Vector + Graph memory with autonomous distillation.


Ripen: The "Trust Layer" for Multi-Agent AI Teams 🧠

The Centralized Knowledge Hub for AI-Driven Development Teams


🇯🇵 A centralized, local-first memory infrastructure that resolves the "tacit knowledge" gap between AI agents and guarantees the reliability of knowledge in team development.


The Problem

AI-driven development made your team 10x faster. But knowledge sharing didn't keep up.

  • Cursor knows your coding conventions — but Claude Code doesn't.
  • Gemini CLI resolved a critical bug yesterday — but Cursor forgot by today.
  • Your team decided on an architecture — but every AI tool proposes a different one.

The faster you ship, the faster your AI tools diverge. Design decisions scatter across isolated sessions. Architectural drift becomes invisible until it's too late.

This is "AI Multi-Personality Disorder" — and it's the hidden cost of high-velocity AI development.

The Solution

Ripen is a centralized, local-first MCP server that gives all your AI tools a single shared memory.

One server. Every tool reads from it. Every tool writes to it. Design decisions persist. Context survives. Your team's AI agents finally speak the same language.

graph LR
    subgraph "Your AI Tools"
        A1["🖥️ Cursor"]
        A2["⌨️ Claude Code"]
        A3["🔧 Gemini CLI"]
    end
    subgraph "Ripen"
        M["📋 Shared Blackboard"]
        M --> G["Logic Graph"]
        M --> B["Memory Bank"]
        M --> T["Thought Log"]
    end
    A1 <-->|MCP| M
    A2 <-->|MCP| M
    A3 <-->|MCP| M

Why This Works

1. Hybrid Intelligence Store

| Layer | What it stores | Why it matters |
| --- | --- | --- |
| Logic Graph | Entities & relations ("Module X depends on Service Y") | Preserves logical structure that RAG loses |
| Memory Bank | Deep context as Markdown files | Stores architectural blueprints, post-mortems, specs |
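
To make the split concrete, here is a minimal sketch of how the two layers could be populated. The `Relation` shape, the entity names, and the memory-bank file layout are illustrative assumptions, not Ripen's actual schema.

```python
# Illustrative sketch only -- Ripen's real storage schema may differ.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Relation:
    """One edge in the Logic Graph: subject --predicate--> obj."""
    subject: str
    predicate: str
    obj: str

# Logic Graph: structured facts that embedding-only RAG tends to flatten.
graph = [Relation("PaymentModule", "depends_on", "AuthService")]

# Memory Bank: long-form context kept as plain Markdown files on disk.
bank = Path("memory-bank")
bank.mkdir(exist_ok=True)
(bank / "architecture.md").write_text(
    "# Architecture Decision\n\nPaymentModule calls AuthService over gRPC.\n"
)
```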

2. Knowledge Lifecycle Management

  • Ripening: Frequently accessed knowledge is promoted into a long-term asset.
  • Decay & GC: Stale noise is automatically archived, so your context stays high-signal (one possible scoring rule is sketched below).
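
As a rough sketch, ripening and decay can be pictured as a single relevance score: exponential decay by age, boosted by access frequency. The half-life, boost, and threshold constants below are invented for illustration; Ripen's actual policy is internal to the server.

```python
import time

HALF_LIFE_DAYS = 30     # assumption: relevance halves every 30 idle days
ACCESS_BOOST = 0.1      # assumption: each access nudges the score upward
STALE_THRESHOLD = 0.05  # assumption: entries below this get archived by GC

def relevance(created_at: float, access_count: int, now: float | None = None) -> float:
    """Score a memory entry: older entries decay, frequently used ones ripen."""
    now = now if now is not None else time.time()
    age_days = (now - created_at) / 86400
    decay = 0.5 ** (age_days / HALF_LIFE_DAYS)
    return decay * (1.0 + ACCESS_BOOST * access_count)

# Example: a 60-day-old entry accessed 20 times scores 0.25 * 3.0 = 0.75.
print(relevance(time.time() - 60 * 86400, access_count=20))
```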

3. Thought Distillation

Integrated with Sequential Thinking, the server captures reasoning processes, not just conclusions.

  • Salvage: Past decisions resurface exactly when an agent needs them.
  • Accretion: Each session's insights are distilled back into shared memory (a sketch follows below).
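
A rough sketch of the accretion step, under the assumption that distillation is an LLM pass over the session's thought log; `summarize` below is a placeholder for whichever provider is configured (Gemini or Ollama), and the prompt wording is illustrative only.

```python
from typing import Callable

def distill(thought_log: list[str], summarize: Callable[[str], str]) -> str:
    """Compress a session's reasoning steps into a reusable insight.

    `summarize` stands in for the configured LLM provider; the real
    server's prompt and output format are not documented here.
    """
    prompt = (
        "Extract the durable decisions and their rationale from these steps:\n"
        + "\n".join(f"- {step}" for step in thought_log)
    )
    return summarize(prompt)  # the insight is then written back to shared memory
```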

4. Built for Speed & Privacy

  • Compute-then-Write: AI processing runs outside DB transactions, keeping write-lock time under 50 ms (see the sketch after this list).
  • Local-First: SQLite + FAISS. Your proprietary context never leaves your machine.
  • Multi-Agent Auth: Secure your hub with API keys. Track exactly which agent (Cursor, Claude, or Gemini) contributed which piece of knowledge.
  • Zero Cloud Dependency: Ships with local fastembed — no external API required for core logic.
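
A minimal sketch of Compute-then-Write with SQLite and fastembed: the slow embedding step runs before any lock is taken, so the transaction itself only inserts precomputed bytes. The `notes` table is an assumed layout, not Ripen's actual schema.

```python
import sqlite3

from fastembed import TextEmbedding  # local ONNX embeddings, no external API

model = TextEmbedding()  # downloads a small default model on first use
conn = sqlite3.connect("memory.db")
conn.execute("CREATE TABLE IF NOT EXISTS notes (text TEXT, embedding BLOB)")

def save(text: str) -> None:
    # Compute phase: outside any transaction, may take seconds.
    vector = next(iter(model.embed([text])))
    # Write phase: the database lock is held only for this short insert.
    with conn:
        conn.execute(
            "INSERT INTO notes (text, embedding) VALUES (?, ?)",
            (text, vector.tobytes()),
        )
```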

5. Transparency & Governance (Trust Layer)

  • Transparency Dashboard: Real-time audit logs and system health monitoring at the server's /history endpoint.
  • Human-in-the-Loop: Detected contradictions are staged for review, so you approve AI-suggested knowledge before it lands.
  • Audit Trails: Every memory save is logged with its author, enabling clear traceability for team-scale AI development.
  • Usage Guide: English / Japanese

Benchmarks: LongMemEval

We evaluate system performance using the LongMemEval suite, comparing Local-first vs. Cloud-based configurations.

| Metric | Local (FastEmbed + Ollama) | Cloud (Gemini 2.0 Flash) |
| --- | --- | --- |
| Search Latency | 12 ms | 420 ms |
| Context Recall (RAGAS) | 0.95 | 0.96 |
| Faithfulness (RAGAS) | 0.92 | 0.98 |
| Independence | 100% local | Cloud-dependent |

> [!NOTE]
> For high-frequency coding sessions, the local-first setup is recommended due to its near-zero latency, allowing AI agents to "think" without waiting for API responses.

Quick Start

# Install
uv pip install -e .

# Run as a shared hub (SSE mode — recommended for multi-tool setups)
uv run shared-memory --sse --port 8377

Then point your MCP-compatible tools (Cursor, Claude Code, Gemini CLI) at http://localhost:8377.
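
To sanity-check the hub from a script, you can connect with the official MCP Python SDK (the `mcp` package) and list whatever tools Ripen registers. This assumes the SSE endpoint lives at /sse, matching the configuration example further below.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Endpoint exposed by `uv run shared-memory --sse --port 8377`
    async with sse_client("http://localhost:8377/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```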

LLM Setup (Required for Distillation)

To enable knowledge distillation and automated insights, you need at least one LLM provider configured:

  • Option A: Google Gemini (Recommended)
    • Set the GOOGLE_API_KEY environment variable.
    • The server will automatically detect and use Gemini if the key is present.
  • Option B: Local Ollama
    • Install Ollama.
    • Run the following command to download the required model: ollama pull llama3.1
    • Ensure Ollama is running (ollama serve) before starting the server.

If neither is configured, core graph operations will still work, but automated knowledge extraction from thoughts will be disabled.
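
The selection behavior described above amounts to a simple fallback chain. A sketch of that logic (illustrative only, not Ripen's actual code):

```python
import os
import urllib.request

def ollama_is_running(url: str = "http://localhost:11434") -> bool:
    """Probe Ollama's default local endpoint."""
    try:
        urllib.request.urlopen(url, timeout=1)
        return True
    except OSError:
        return False

def pick_llm_provider() -> str | None:
    """Mirror the documented fallback: Gemini if keyed, else Ollama, else none."""
    if os.environ.get("GOOGLE_API_KEY"):
        return "gemini"
    if ollama_is_running():
        return "ollama"
    return None  # graph ops still work; thought distillation is disabled
```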

Authentication Setup

To enable secure multi-agent access:

  1. Create data/auth.json with your credentials:
{
  "cursor_user": "your-secret-key-1",
  "gemini_agent": "your-secret-key-2"
}
  2. In your tool configuration (e.g., mcp_config.json), inject the key via environment variables:
"Ripen": {
  "command": "npx",
  "args": ["-y", "mcp-remote", "http://localhost:8377/sse"],
  "env": {
    "SHARED_MEMORY_API_KEY": "your-secret-key-1",
    "SHARED_MEMORY_ACCOUNT": "cursor_user"
  }
}
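
On the server side, the check presumably reduces to matching the account/key pair against data/auth.json. A sketch of that validation (illustrative; Ripen's actual implementation may differ):

```python
import hmac
import json
from pathlib import Path

def is_authorized(account: str, api_key: str) -> bool:
    """Check an agent's credentials against data/auth.json."""
    credentials = json.loads(Path("data/auth.json").read_text())
    expected = credentials.get(account)
    # compare_digest avoids leaking key contents through timing differences
    return expected is not None and hmac.compare_digest(expected, api_key)

# e.g. is_authorized("cursor_user", "your-secret-key-1") -> True
```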

Governance & Licensing

  • Open Source: AGPL-3.0 — free for personal and open-source use.
  • Commercial: For proprietary integrations or SaaS, a Commercial License is available.

🇯🇵 Japanese

Is AI-driven development moving so fast that "information sharing" is breaking down?

AI-driven development has dramatically increased development speed. But it has created a new problem: information sharing within the team can no longer keep up.

  • The design rules you taught Cursor are unknown to Gemini CLI.
  • The direction you settled on with Claude yesterday is forgotten by Cursor today.
  • The bigger the project grows, the wider the "common-sense gap" between AI tools becomes.

This is "AI Multi-Personality Disorder."

The Solution: A Centralized Knowledge-Sharing MCP Server

Ripen is a local-first shared memory server that lets every AI tool read from and write to the same "blackboard."

Design principles you teach once can be referenced from any AI tool. Putting your team's tacit knowledge where AI never forgets it: that is the goal of this project.

It can also be used by individuals to share knowledge across multiple AI agents.

For the full design philosophy, see the Conceptual Requirements Document (概念的要件定義書).


Ripen: Making AI agents remember what your team already decided.

