
🧠 Rebrain

Transform chat history into structured, personalized AI memory.

Rebrain processes your ChatGPT conversations through a 5-step pipeline, extracting observations, synthesizing learnings and cognitions, then building a user persona for hyper-personalized AI interactions.


🚀 Quick Start (Recommended)

Using uv - Zero Setup Required

# 1. Install uv (one-time)
curl -LsSf https://astral.sh/uv/install.sh | sh

# 2. Set your API key
export GEMINI_API_KEY=your_key_here

# 3. Process your conversations (place conversations.json in current directory)
uvx rebrain pipeline run

# 4. Start MCP server (auto-loads processed data)
uvx rebrain mcp --port 9999

# Advanced: Custom paths and config
uvx rebrain pipeline run --input /path/to/conversations.json --data-path ./my-data
uvx rebrain pipeline run --config custom.yaml  # Custom clustering parameters
uvx rebrain mcp --data-path ./my-data --port 9999 --user-id myproject

That's it! No Python installation, no virtual environments, no dependencies to manage.

See INSTALL.md for detailed installation options.


🎯 For Developers

# Clone and setup
git clone https://github.com/yasinsb/rebrain.git
cd rebrain
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
cp env.template .env  # Add your GEMINI_API_KEY

# Run pipeline (bash CLI)
scripts/pipeline/cli.sh all

# Or step by step
scripts/pipeline/cli.sh step1  # Transform & filter
scripts/pipeline/cli.sh step2  # Extract & cluster observations
scripts/pipeline/cli.sh step3  # Synthesize learnings
scripts/pipeline/cli.sh step4  # Synthesize cognitions
scripts/pipeline/cli.sh step5  # Build persona

# Load into memg-core
python scripts/load_memg.py

Output: data/persona/persona.md - ready for system prompts!


🤖 MCP Integration (Claude Desktop / Cursor)

HTTP Mode (Recommended for Stability)

Start the server:

# Start with default user_id="rebrain"
uvx --from rebrain rebrain-mcp --data-path ./data --port 9999

# Or use custom user_id for multi-user setups
uvx --from rebrain rebrain-mcp --data-path ./data --port 9999 --user-id myproject

# Using rebrain CLI (equivalent)
uvx rebrain mcp --data-path ./data --port 9999

Add to ~/.cursor/mcp.json or Claude Desktop config:

{
  "mcpServers": {
    "rebrain": {
      "url": "http://localhost:9999/mcp"
    }
  }
}
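Once the server is running, you can sanity-check the endpoint outside your editor. The sketch below is a minimal smoke test, assuming the server started above is listening on port 9999 and accepts MCP's JSON-RPC-over-HTTP transport; the client name, version, and protocol version string are illustrative, not values Rebrain requires:

```python
import json
import urllib.request

def mcp_initialize_request():
    """Build a JSON-RPC 2.0 'initialize' request, the first message an
    MCP client sends. Field values here are illustrative."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "smoke-test", "version": "0.0.1"},
        },
    }

def ping_server(url="http://localhost:9999/mcp", timeout=5):
    """POST the initialize request and return the raw response body.
    Requires a running server; raises URLError if nothing is listening."""
    body = json.dumps(mcp_initialize_request()).encode()
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode()
```

If `ping_server()` returns a JSON-RPC response rather than raising, the HTTP endpoint is up and your editor config should connect.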

Direct Mode (stdio)

⚠️ Known Issue: stdio mode has stability issues with Cursor/Claude Desktop. HTTP mode is recommended.

If you still want to try stdio mode:

{
  "mcpServers": {
    "rebrain": {
      "command": "uvx",
      "args": ["--from", "rebrain", "rebrain-mcp", "--data-path", "/absolute/path/to/your/data"],
      "env": {
        "GEMINI_API_KEY": "your_key_here"
      }
    }
  }
}

User ID Configuration

The MCP server supports a configurable user_id for memory isolation:

  • Default: user_id="rebrain" is used if none is specified
  • Custom: pass --user-id myproject when starting the server
  • Multi-user: each user_id maintains a separate memory space
  • Agent usage: agents don't need to provide a user_id (the server default is used)

Benefits:

  • 💰 Process once (~$0.10-0.20 in API costs), then query for free via local memg-core
  • ⚡ Instant restarts: the database persists, so nothing is reprocessed
  • 🔒 100% local: no ongoing API costs, no cloud lock-in

Quick Start (Legacy)

1. Setup

git clone <repo-url>
cd rebrain

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Configure
cp env.template .env
# Edit .env: add GEMINI_API_KEY

2. Prepare Data

Export your ChatGPT conversations and place the JSON file at:

data/raw/conversations.json

3. Run Pipeline

# Full pipeline (5 steps)
scripts/pipeline/cli.sh all

# Or run individual steps
scripts/pipeline/cli.sh step1
scripts/pipeline/cli.sh step2
# ... etc

4. Check Results

# View pipeline status
scripts/pipeline/cli.sh status

# Read your persona
cat data/persona/persona.md

Pipeline Overview

Rebrain uses a 5-stage synthesis pipeline:

Raw Conversations (JSON export)
    ↓ Step 1: Transform & Filter
Clean Conversations (date filtered, code removed)
    ↓ Step 2: Extract & Cluster Observations (AI + K-Means)
Clustered Observations (~100 clusters)
    ↓ Step 3: Synthesize Learnings (AI + K-Means)
Clustered Learnings (~20 clusters)
    ↓ Step 4: Synthesize Cognitions (AI)
High-Level Cognitions (~20 patterns)
    ↓ Step 5: Build Persona (AI)
User Persona (3 plain text sections)
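Both clustering stages aim for a cluster count within a tolerance band around a configured target rather than a fixed k. A rough sketch of that idea, not Rebrain's actual implementation (which clusters embedding vectors): plain Lloyd's K-Means on 2-D points, plus the band computed from `target_clusters` and `tolerance`:

```python
import math
import random

def tolerance_band(target, tolerance):
    """Acceptable cluster counts around the configured target,
    e.g. target_clusters: 20 with tolerance: 0.2 gives 16..24."""
    lo = max(2, math.ceil(target * (1 - tolerance)))
    hi = math.floor(target * (1 + tolerance))
    return lo, hi

def kmeans(points, k, iters=25, seed=0):
    """Plain Lloyd's K-Means on 2-D points (a stand-in for the
    higher-dimensional embedding vectors the pipeline clusters)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda j: (x - centers[j][0]) ** 2 + (y - centers[j][1]) ** 2)
            buckets[i].append((x, y))
        # Recompute each center as its bucket's mean; keep old center if empty.
        centers = [
            (sum(x for x, _ in b) / len(b), sum(y for _, y in b) / len(b)) if b else centers[j]
            for j, b in enumerate(buckets)
        ]
    return centers, buckets

lo, hi = tolerance_band(20, 0.2)  # any k from 16 to 24 is acceptable
```

The band is what makes the clustering "adaptive": any k inside it that scores well is acceptable, instead of forcing exactly the target count.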

Key Features:

  • Privacy-First: Category-specific filtering at observation extraction
  • Adaptive Clustering: tolerance-based K-Means searches for a good cluster count within a band around the target
  • Flexible Models: Override per-task via prompt template metadata
  • Provenance Tracking: Full lineage from conversation → observation → learning → cognition
  • Dual Output: JSON (structured) + Markdown (human-readable)

Configuration

All pipeline parameters live in config/pipeline.yaml:

ingestion:
  date_cutoff_days: 180
  remove_code_blocks: true

insight_extraction:
  max_concurrent: 20
  batch_size: 40

learning_clustering:
  target_clusters: 20
  tolerance: 0.2

Model selection via prompt templates:

# rebrain/prompts/templates/persona_synthesis.yaml
metadata:
  model_recommendation: "gemini-2.5-flash"

See config/README.md for details.
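With `--config custom.yaml` (shown in the quick start), a custom file would typically only list the keys you want to change. One plausible override semantics is a deep merge onto the defaults, sketched below with plain dicts standing in for the parsed YAML; whether Rebrain merges or replaces sections wholesale is defined by its config loader, so treat this as an assumption:

```python
def deep_merge(defaults, overrides):
    """Recursively overlay override values onto a default config dict."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {
    "ingestion": {"date_cutoff_days": 180, "remove_code_blocks": True},
    "learning_clustering": {"target_clusters": 20, "tolerance": 0.2},
}
custom = {"learning_clustering": {"target_clusters": 30}}

config = deep_merge(defaults, custom)
# target_clusters becomes 30; tolerance keeps its default of 0.2
```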


CLI Usage

# Check what's been generated (run from scripts/pipeline/, or prefix commands with that path)
./cli.sh status

# Run individual steps
./cli.sh step1 -i data/raw/my_convos.json
./cli.sh step2 --cluster-only  # Re-cluster existing insights
./cli.sh step3

# Clean outputs
./cli.sh clean --all

# Full help
./cli.sh help

See scripts/pipeline/README.md for details.


Project Structure

rebrain/
├── rebrain/             # Core library
│   ├── core/            # GenAI client
│   ├── ingestion/       # Data loading & chunking
│   ├── operations/      # Embedder, clusterer, synthesizer
│   ├── prompts/         # Prompt templates (YAML)
│   ├── retrieval/       # Query interface (future)
│   └── schemas/         # Pydantic models
├── config/              # Pipeline configuration
├── scripts/pipeline/    # 5-step pipeline + CLI
├── data/                # Raw → processed → persona
└── notebooks/           # Exploration & testing

Output

Persona (Step 5)

JSON (data/persona/persona.json):

{
  "model": "gemini-2.5-flash",
  "persona": {
    "personal_profile": "...",
    "communication_preferences": "...",
    "professional_profile": "..."
  }
}

Markdown (data/persona/persona.md):

# User Persona Information for AI

## Personal Profile
...

## Communication Preferences
...

## Professional Profile
...

Copy-paste ready for system prompts!
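If you want to paste only part of the persona, the three `## ` sections split cleanly. A small sketch (helper names are mine, not part of Rebrain) that parses persona.md and assembles a system prompt from selected sections:

```python
def parse_persona_sections(md_text):
    """Split persona.md into a dict keyed by its '## ' section titles."""
    sections, current = {}, None
    for line in md_text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {name: "\n".join(body).strip() for name, body in sections.items()}

def build_system_prompt(md_text, wanted=("Communication Preferences",)):
    """Assemble a system prompt from the persona sections you care about."""
    sections = parse_persona_sections(md_text)
    parts = [f"## {name}\n{sections[name]}" for name in wanted if name in sections]
    return "# User Persona Information for AI\n\n" + "\n\n".join(parts)
```

For example, `build_system_prompt(open("data/persona/persona.md").read())` would yield a prompt containing only the communication-preferences section.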


Development

# Install dev dependencies
pip install -r requirements_dev.txt

# Run with custom config
python scripts/pipeline/01_transform_filter.py --input data/raw/test.json

# Check specific step
python scripts/pipeline/02_extract_cluster_insights.py --skip-cluster

Documentation

  • Pipeline Details: scripts/pipeline/README.md
  • Configuration: config/README.md
  • Data Structure: data/README.md
  • Model Override Pattern: MODEL_OVERRIDE_PATTERN.md
  • Persona Builder: PERSONA_BUILDER_REFACTOR.md

License

MIT License - see LICENSE file


Built by Yasin Salimibeni
