
MCP server for iGenius Memory — gives AI agents persistent memory tools via the hosted API


iGenius MCP — Thin Memory Server for VS Code


A lightweight MCP (Model Context Protocol) server that gives AI agents persistent memory via the iGenius Memory service.

Core memory tools require only an API key — all AI processing happens server-side. Optional visual tools add local Playwright rendering and vision-model analysis.


Quick Start

pip install igenius-mcp

Set your API key and run:

export IGENIUS_API_KEY=ig_your_key_here
igenius-mcp

Get a free key at igenius-memory.online

VS Code Setup

Add to ~/.vscode/mcp.json:

{
  "servers": {
    "igenius-memory": {
      "command": "igenius-mcp",
      "env": { "IGENIUS_API_KEY": "ig_your_key_here" },
      "type": "stdio"
    }
  }
}

Restart VS Code and all 15 memory tools become available to Copilot and any MCP-compatible agent.

Available Tools

Tool Description
memory_briefing Session briefing from all memory layers (call FIRST)
memory_ingest Ingest user/agent messages for AI extraction
memory_consolidate Merge accumulated extracts into master briefing
memory_process Detect trigger words and auto-classify text
memory_store Direct store to a specific memory layer
memory_search Natural language search across memories
memory_recall Retrieve all persistent session memories
memory_summarize LLM-powered summary of a memory layer
memory_delete Delete a memory by ID
memory_update Update fields on an existing memory
memory_review List short-term memories for triage
memory_promote Promote short-term → long-term
memory_pin Pin a fact permanently (user-confirmed, never expires)
memory_triggers_list List trigger words and their layers
memory_triggers_add Add a new trigger word
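
Under the hood, an MCP client invokes these tools with a JSON-RPC 2.0 `tools/call` request over stdio. The sketch below builds such a message for `memory_briefing`; the empty argument object is illustrative, since the tool's argument schema is not documented here.

```python
import json

def mcp_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 message for an MCP tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# An agent starting a session would call memory_briefing first:
msg = mcp_tool_call("memory_briefing", {})
print(msg)
```

In practice your agent framework or MCP client library builds this envelope for you; it is shown only to make the wire format concrete.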

LLM Requirements

iGenius uses an LLM backend for AI extraction, consolidation, and (optionally) visual analysis. You can use a local or remote LLM provider.

Local Setup (LM Studio, Ollama, etc.)

Requirement Minimum
GPU VRAM 6 GB+
Recommended model Qwen3 4B (non-thinking) or equivalent
Context window 3,000+ tokens

⚠️ IMPORTANT: Do NOT use thinking/reasoning models (e.g. QwQ, DeepSeek R1, o1, o3). Thinking models emit <think> chains before the actual response, which breaks iGenius's structured JSON extraction pipeline. Only use standard non-thinking (instruct/chat) models.

Why these specs? iGenius sends structured extraction prompts that expect clean JSON output. A 4B-parameter non-thinking model at 3k context is the sweet spot for fast, accurate extraction without hallucination or timeouts. Larger models (8B+) work too — just ensure you have the VRAM headroom and that the model is a non-thinking variant.
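
To see why thinking models break the pipeline: `json.loads` fails on output that is prefixed with a `<think>...</think>` chain. The snippet below is a minimal illustration of the failure mode and a defensive strip, not iGenius's actual extraction code.

```python
import json
import re

def parse_extraction(raw: str) -> dict:
    """Parse a model's JSON extraction output.

    A non-thinking model returns clean JSON; a thinking model prepends
    a <think>...</think> chain that makes json.loads fail unless stripped.
    """
    cleaned = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
    return json.loads(cleaned)

clean = '{"facts": ["user prefers dark mode"]}'
noisy = "<think>Let me extract the facts...</think>" + clean

# Both parse to the same dict once the reasoning chain is removed.
assert parse_extraction(clean) == parse_extraction(noisy)
```

Even with a strip like this, thinking models waste tokens and context on the reasoning chain, which is why non-thinking models remain the recommendation.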

Remote Setup (OpenAI, Anthropic, Google, etc.)

No local hardware requirements. Any API-accessible model works — configure the provider, model name, and API key in the VS Code extension settings or environment variables.

Environment Variables

Variable Required Default
IGENIUS_API_KEY Yes (none)
IGENIUS_API_URL No https://igenius-memory.online/v1
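
The variable names and default above come from this table; the lookup logic below is an illustrative sketch of how a client might resolve them, not the package's source.

```python
import os

# Default mirrors the table above.
API_URL_DEFAULT = "https://igenius-memory.online/v1"

def resolve_config(env: dict) -> dict:
    """Resolve iGenius settings from an environment mapping."""
    key = env.get("IGENIUS_API_KEY")
    if not key:
        raise RuntimeError("IGENIUS_API_KEY is required")
    return {
        "api_key": key,
        "api_url": env.get("IGENIUS_API_URL", API_URL_DEFAULT),
    }

# With only the key set, the hosted endpoint is used:
cfg = resolve_config({"IGENIUS_API_KEY": "ig_demo"})
```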

Visual Tools (Optional)

Give your AI agent eyes — render any URL, take a pixel-perfect screenshot, and get instant UI/UX analysis from a local vision model.

Install

pip install "igenius-mcp[visual]"
python -m playwright install chromium

Then load a vision-capable model in LM Studio (e.g. Qwen2.5-VL 7B, non-thinking).

⚠️ Do NOT use thinking/reasoning vision models — same restriction as above.

Visual MCP Tools

Tool Description
visual_report Render URL → screenshot → vision analysis → full UI/UX report
visual_screenshot Render URL → return base64-encoded PNG (no analysis)
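
Since `visual_screenshot` returns a base64-encoded PNG, saving it is a one-step decode. The exact field name the tool result uses for the payload is not documented here, so treat the function's input as "the base64 string you extracted from the result":

```python
import base64

def save_screenshot(b64_png: str, path: str) -> int:
    """Decode a base64 PNG payload to disk; returns the byte count."""
    data = base64.b64decode(b64_png)
    # Sanity-check the PNG magic bytes before writing.
    if not data.startswith(b"\x89PNG\r\n\x1a\n"):
        raise ValueError("payload is not a PNG")
    with open(path, "wb") as f:
        f.write(data)
    return len(data)
```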

Visual Environment Variables

Variable Default Description
IGENIUS_VISION_URL http://localhost:1234/v1 Vision model API endpoint
IGENIUS_VISION_MODEL auto-detect Override the vision model name
IGENIUS_VIEWPORT_W 1280 Screenshot viewport width
IGENIUS_VIEWPORT_H 800 Screenshot viewport height

100% local — screenshots and analysis never leave your machine.

Agent Instructions

For best results, add the iGenius agent instructions to your workspace:

  • VS Code: Place igenius.instructions.md in ~/.vscode/prompts/
  • Claude Code: Add to CLAUDE.md
  • Workspace: Add to .github/copilot-instructions.md

Get the template at igenius-memory.info

How It Works

Agent ←→ MCP (stdio) ←→ igenius-mcp ←→ REST API ←→ iGenius Backend

The memory tools are a thin proxy — they translate MCP tool calls into REST API requests. All AI extraction, LLM summarization, and encryption happens server-side.
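
The thin-proxy idea can be sketched as: one MCP tool call becomes one authenticated REST request. Only the `IGENIUS_API_URL` default and bearer-key auth pattern are taken from the docs above; the route and payload shape here are hypothetical.

```python
API_URL = "https://igenius-memory.online/v1"

def to_rest_request(tool: str, arguments: dict, api_key: str) -> dict:
    """Translate an MCP memory tool call into a REST request description."""
    assert tool.startswith("memory_"), "visual_* tools run locally instead"
    action = tool.removeprefix("memory_")      # e.g. "search"
    return {
        "method": "POST",
        "url": f"{API_URL}/memory/{action}",   # hypothetical route
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": arguments,
    }

req = to_rest_request("memory_search", {"query": "deploy steps"}, "ig_demo")
```

Because the proxy holds no AI logic, the server stays small (the wheel is under 20 kB) and updates to extraction or summarization ship server-side without a package upgrade.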

The visual tools run locally — Playwright renders URLs on your machine and a local vision model (e.g. LM Studio + Qwen2.5-VL) analyzes the screenshots. Screenshots and analysis never leave your machine.

Plans

Plan Price Requests API Keys IPs/Key
Starter Free 1,000/week 1 3
Pro $19/mo 50,000/day 5 10
Enterprise Contact 500,000/day 20 50

Details at igenius-memory.store

Support the Project

iGenius Memory is built and maintained by NovaMind Labs. If you find it useful, here's how you can help:

  • Star the repo — it helps more developers discover iGenius
  • Upgrade to Pro — $19/mo directly funds development → igenius-memory.store
  • Report bugs & ideas — open an issue
  • Spread the word — tell your friends, tweet about it, write a blog post

Every user, star, and subscription helps keep iGenius alive and improving. Thank you!

License

MIT
