
MCP server for iGenius Memory — gives AI agents persistent memory tools via the hosted API

Project description

iGenius Memory — Persistent AI Memory for Any Agent


A structured, AI-powered memory backend that gives any MCP-compatible agent persistent memory via the iGenius Memory service. All AI processing happens server-side — you just need an API key.


3 Ways to Use iGenius

| Client | Install | Best For |
|---|---|---|
| 🧩 VS Code Extension | Marketplace | Full sidebar UI, memory browser, AI provider settings |
| ⚡ MCP Server | `pip install igenius-mcp` | Any MCP client — VS Code, Claude Desktop, Cursor, Windsurf |
| 🖥️ Desktop App | Windows Installer | Standalone system-tray app, works with any editor |

Get a free API key at igenius-memory.online — all three clients use the same key.


1. VS Code Extension (Marketplace)

Install directly from the VS Code Marketplace — no pip, no config files:

ext install igenius-memory.igenius-memory

Or search "iGenius Memory" in the Extensions panel. Includes sidebar UI, memory browser, status bar indicator, AI provider settings, and auto-warms briefings on a configurable interval.

2. MCP Server (pip)

For any MCP-compatible client (VS Code Copilot, Claude Desktop, Cursor, Windsurf, etc.):

pip install igenius-mcp

Then add to your MCP config:

For VS Code, add to ~/.vscode/mcp.json:

{
  "servers": {
    "igenius-memory": {
      "command": "igenius-mcp",
      "env": { "IGENIUS_API_KEY": "ig_your_key_here" },
      "type": "stdio"
    }
  }
}

Claude Desktop / Cursor / Windsurf — add to your MCP config file:

{
  "mcpServers": {
    "igenius-memory": {
      "command": "python",
      "args": ["-m", "igenius_mcp.server"],
      "env": { "IGENIUS_API_KEY": "ig_your_key_here" }
    }
  }
}

⚠️ Windows users: If VS Code can't find igenius-mcp, use python -m igenius_mcp.server instead.

3. Desktop App (Windows)

Standalone system-tray application — works alongside any editor or IDE:

  • Download Installer (NSIS setup or MSI)
  • Built with Tauri + Rust — lightweight, native, ~5 MB
  • System tray with quick access to briefings, search, and memory stats
  • Configure LLM provider (LM Studio, OpenAI, Anthropic, Google) from the UI

Restart VS Code after installing the extension or adding MCP config — all 17 memory tools become available to Copilot and any MCP-compatible agent.

Available Tools

| Tool | Description |
|---|---|
| memory_briefing | Session briefing from all memory layers (call FIRST) |
| memory_ingest | Ingest user/agent messages for AI extraction |
| memory_consolidate | Merge accumulated extracts into master briefing |
| memory_process | Detect trigger words and auto-classify text |
| memory_store | Direct store to a specific memory layer |
| memory_search | Natural language search across memories |
| memory_recall | Retrieve all persistent session memories |
| memory_summarize | LLM-powered summary of a memory layer |
| memory_delete | Delete a memory by ID |
| memory_update | Update fields on an existing memory |
| memory_review | List short-term memories for triage |
| memory_promote | Promote short-term → long-term |
| memory_pin | Pin a fact permanently (user-confirmed, never expires) |
| memory_triggers_list | List trigger words and their layers |
| memory_triggers_add | Add a new trigger word |
| visual_report | Render URL → screenshot → vision analysis → full UI/UX report (requires [visual]) |
| visual_screenshot | Render URL → return base64 PNG (requires [visual]) |

LLM Requirements

iGenius uses an LLM backend for AI extraction, consolidation, and (optionally) visual analysis. You can use a local or remote LLM provider.

Local Setup (LM Studio, Ollama, etc.)

| Requirement | Minimum |
|---|---|
| GPU VRAM | 6 GB+ |
| Recommended model | Qwen 3.5 4B (non-thinking) or equivalent |
| Context window | 3,000+ tokens |

⚠️ IMPORTANT: Do NOT use thinking/reasoning models (e.g. QwQ, DeepSeek R1, o1, o3). Thinking models emit <think> chains before the actual response, which breaks iGenius's structured JSON extraction pipeline. Only use standard non-thinking (instruct/chat) models.

Why these specs? iGenius sends structured extraction prompts that expect clean JSON output. A 4B-parameter non-thinking model at 3k context is the sweet spot for fast, accurate extraction without hallucination or timeouts. Larger models (8B+) work too — just ensure you have the VRAM headroom and that the model is a non-thinking variant.
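To make the failure mode concrete, here is an illustrative helper (not part of igenius-mcp) showing why thinking models break structured JSON extraction: they prepend a `<think>…</think>` chain, so the raw output is not valid JSON. Stripping the chain recovers the payload, but a non-thinking instruct model avoids the problem entirely.

```python
import json
import re


def parse_extraction(raw: str) -> dict:
    """Parse the JSON object an extraction prompt expects.

    Illustrative only: strips any <think>...</think> chain a
    thinking model may prepend, then parses the remainder.
    """
    cleaned = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
    return json.loads(cleaned)


# a non-thinking model returns clean JSON that parses directly
print(parse_extraction('{"facts": ["user prefers dark mode"]}'))

# a thinking model's output would crash plain json.loads without the strip
thinking_output = '<think>Let me extract the facts first...</think>{"facts": []}'
print(parse_extraction(thinking_output))
```

Even with the workaround, the chain-of-thought burns context and latency, which is why a plain instruct model is the recommended choice.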

Remote Setup (OpenAI, Anthropic, Google, etc.)

No local hardware requirements. Any API-accessible model works — configure the provider, model name, and API key in the VS Code extension settings or environment variables.

Environment Variables

| Variable | Required | Default |
|---|---|---|
| IGENIUS_API_KEY | Yes | (none) |
| IGENIUS_API_URL | No | https://igenius-memory.online/v1 |
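If you prefer setting these in your shell profile rather than in the MCP config's `env` block, a minimal sketch (the key value is the placeholder from the config examples above):

```shell
# export before launching your MCP client or editor
export IGENIUS_API_KEY="ig_your_key_here"
# optional -- this is already the default endpoint
export IGENIUS_API_URL="https://igenius-memory.online/v1"
```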

Visual Tools (Optional)

Give your AI agent eyes — render any URL, take a pixel-perfect screenshot, and get instant UI/UX analysis from a local vision model.

Install

pip install "igenius-mcp[visual]"
python -m playwright install chromium

Then load a vision-capable model in LM Studio (e.g. Qwen 3.5 9B Vision, non-thinking).

⚠️ Do NOT use thinking/reasoning vision models — same restriction as above.

Visual MCP Tools

| Tool | Description |
|---|---|
| visual_report | Render URL → screenshot → vision analysis → full UI/UX report |
| visual_screenshot | Render URL → return base64-encoded PNG (no analysis) |

Visual Environment Variables

| Variable | Default | Description |
|---|---|---|
| IGENIUS_VISION_URL | http://localhost:1234/v1 | Vision model API endpoint |
| IGENIUS_VISION_MODEL | auto-detect | Override the vision model name |
| IGENIUS_VISION_KEY | (none) | API key for vision endpoint (e.g. LM Studio auth token) |
| IGENIUS_VIEWPORT_W | 1280 | Screenshot viewport width |
| IGENIUS_VIEWPORT_H | 800 | Screenshot viewport height |

100% local — screenshots and analysis never leave your machine.

Agent Instructions

For best results, add the iGenius agent instructions to your workspace:

  • VS Code: Place igenius.instructions.md in ~/.vscode/prompts/
  • Claude Code: Add to CLAUDE.md
  • Workspace: Add to .github/copilot-instructions.md

Get the template at igenius-memory.info

How It Works

Agent ←→ MCP (stdio) ←→ igenius-mcp ←→ REST API ←→ iGenius Backend

The memory tools are a thin proxy — they translate MCP tool calls into REST API requests. All AI extraction, LLM summarization, and encryption happens server-side.
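The proxy step can be sketched in a few lines. Note that the per-tool endpoint path and payload shape below are assumptions for illustration, not the documented iGenius REST API:

```python
import json
import os
import urllib.request

API_URL = os.environ.get("IGENIUS_API_URL", "https://igenius-memory.online/v1")


def build_request(tool: str, arguments: dict) -> urllib.request.Request:
    """Translate one MCP tool call into a REST request (hypothetical
    mapping: one POST route per tool name, JSON body, bearer auth)."""
    return urllib.request.Request(
        f"{API_URL}/{tool}",
        data=json.dumps(arguments).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('IGENIUS_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_request("memory_search", {"query": "deployment checklist"})
print(req.get_method(), req.full_url)
```

The point of the sketch is the architecture: the MCP process holds no model, no embeddings, and no storage, so it stays small and stateless.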

The visual tools run locally — Playwright renders URLs on your machine and a local vision model (e.g. LM Studio + Qwen2.5-VL) analyzes the screenshots. Screenshots and analysis never leave your machine.

Plans

| Plan | Price | Requests | API Keys | IPs/Key |
|---|---|---|---|---|
| Starter | Free | 1,000/week | 1 | 3 |
| Pro | $19/mo | 50,000/day | 5 | 10 |
| Enterprise | Contact | 500,000/day | 20 | 50 |

Details at igenius-memory.store

Coming Soon

iGenius Context Engine — unlimited effective context for local LLMs through intelligent recursive summarization. Run a 3B model with a 4K context window and handle conversations of any length.
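The core idea behind recursive summarization can be shown in a toy sketch (not the actual Context Engine): split text that exceeds the context budget into chunks, summarize each, and repeat on the concatenated summaries until the result fits. It assumes the summarizer always shrinks its input, otherwise the loop would not terminate.

```python
from typing import Callable


def recursive_summarize(text: str, summarize: Callable[[str], str],
                        budget: int = 4000) -> str:
    """Compress arbitrarily long text into `budget` characters by
    summarizing fixed-size chunks, then summarizing the summaries."""
    while len(text) > budget:
        chunks = [text[i:i + budget] for i in range(0, len(text), budget)]
        text = "\n".join(summarize(chunk) for chunk in chunks)
    return text


# stand-in "summarizer" that keeps the first 100 characters of each chunk
compressed = recursive_summarize("x" * 20_000, lambda c: c[:100], budget=4000)
print(len(compressed))  # fits within the 4000-character budget
```

A real implementation would summarize with the local LLM and count tokens rather than characters, but the fixed-point loop is the same shape.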

Support the Project

iGenius Memory is built and maintained by NovaMind Labs. If you find it useful, here's how you can help:

  • Star the repo — it helps more developers discover iGenius
  • Upgrade to Pro — $19/mo directly funds development → igenius-memory.store
  • Report bugs & ideas — open an issue
  • Spread the word — tell your friends, tweet about it, write a blog post

Every user, star, and subscription helps keep iGenius alive and improving. Thank you!

License

MIT


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

igenius_mcp-0.5.3.tar.gz (15.6 kB)


Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

igenius_mcp-0.5.3-py3-none-any.whl (16.8 kB)


File details

Details for the file igenius_mcp-0.5.3.tar.gz.

File metadata

  • Download URL: igenius_mcp-0.5.3.tar.gz
  • Upload date:
  • Size: 15.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for igenius_mcp-0.5.3.tar.gz:

| Algorithm | Hash digest |
|---|---|
| SHA256 | 724d95ab3b65ff2f61c8769aac122e62ae75b73475b5d74c2b1bf089d5346759 |
| MD5 | 42636bb39b65cddbca4626c26108365a |
| BLAKE2b-256 | 2ea2c4e7d26cc41c045b8d6d212eae4cb0b26be7dc1b81b3704e6ecd93bffa7d |

See more details on using hashes here.
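Verifying a downloaded artifact against the published digest is a one-liner with the standard library; this streaming variant avoids loading the whole file into memory:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading in 64 KiB blocks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            digest.update(block)
    return digest.hexdigest()


# compare against the SHA256 published in the table above, e.g.:
# sha256_of("igenius_mcp-0.5.3.tar.gz") == "724d95ab3b65ff2f61c8769aac122e62..."
```

pip performs an equivalent check automatically when a `--hash` option is supplied in a requirements file.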

Provenance

The following attestation bundles were made for igenius_mcp-0.5.3.tar.gz:

Publisher: publish.yml on vehoelite/igenius-mcp

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file igenius_mcp-0.5.3-py3-none-any.whl.

File metadata

  • Download URL: igenius_mcp-0.5.3-py3-none-any.whl
  • Upload date:
  • Size: 16.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for igenius_mcp-0.5.3-py3-none-any.whl:

| Algorithm | Hash digest |
|---|---|
| SHA256 | 235966faa53f8be89a5d1dbb86d3c26285b4d0c871b9994893848f20167720ea |
| MD5 | 7117b2529ae9ffe550e35ba4df38cf5d |
| BLAKE2b-256 | 53302faa48f46911a2c7fd7a7dbc99cbb84b90c775deecde86df75b6877bb2e9 |

See more details on using hashes here.

Provenance

The following attestation bundles were made for igenius_mcp-0.5.3-py3-none-any.whl:

Publisher: publish.yml on vehoelite/igenius-mcp

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
