MCP server for iGenius Memory — gives AI agents persistent memory tools via the hosted API
Project description
iGenius Memory — Persistent AI Memory for Any Agent
A structured, AI-powered memory backend that gives any MCP-compatible agent persistent memory via the iGenius Memory service. All AI processing happens server-side — you just need an API key.
3 Ways to Use iGenius
| Client | Install | Best For |
|---|---|---|
| 🧩 VS Code Extension | Marketplace | Full sidebar UI, memory browser, AI provider settings |
| ⚡ MCP Server | `pip install igenius-mcp` | Any MCP client — VS Code, Claude Desktop, Cursor, Windsurf |
| 🖥️ Desktop App | Windows Installer | Standalone system-tray app, works with any editor |
Get a free API key at igenius-memory.online — all three clients use the same key.
1. VS Code Extension (Marketplace)
Install directly from the VS Code Marketplace — no pip, no config files:
ext install igenius-memory.igenius-memory
Or search "iGenius Memory" in the Extensions panel. Includes sidebar UI, memory browser, status bar indicator, AI provider settings, and auto-warms briefings on a configurable interval.
2. MCP Server (pip)
For any MCP-compatible client (VS Code Copilot, Claude Desktop, Cursor, Windsurf, etc.):
pip install igenius-mcp
Then add to your MCP config:
VS Code — ~/.vscode/mcp.json:
```json
{
  "servers": {
    "igenius-memory": {
      "command": "igenius-mcp",
      "env": { "IGENIUS_API_KEY": "ig_your_key_here" },
      "type": "stdio"
    }
  }
}
```
Claude Desktop / Cursor / Windsurf — add to your MCP config file:
```json
{
  "mcpServers": {
    "igenius-memory": {
      "command": "python",
      "args": ["-m", "igenius_mcp.server"],
      "env": { "IGENIUS_API_KEY": "ig_your_key_here" }
    }
  }
}
```
⚠️ Windows users: If VS Code can't find `igenius-mcp`, use `python -m igenius_mcp.server` instead.
3. Desktop App (Windows)
Standalone system-tray application — works alongside any editor or IDE:
- Download Installer (NSIS setup or MSI)
- Built with Tauri + Rust — lightweight, native, ~5 MB
- System tray with quick access to briefings, search, and memory stats
- Configure LLM provider (LM Studio, OpenAI, Anthropic, Google) from the UI
Restart VS Code after installing the extension or adding MCP config — all 17 tools become available to Copilot and any MCP-compatible agent.
Available Tools
| Tool | Description |
|---|---|
| `memory_briefing` | Session briefing from all memory layers (call FIRST) |
| `memory_ingest` | Ingest user/agent messages for AI extraction |
| `memory_consolidate` | Merge accumulated extracts into master briefing |
| `memory_process` | Detect trigger words and auto-classify text |
| `memory_store` | Direct store to a specific memory layer |
| `memory_search` | Natural language search across memories |
| `memory_recall` | Retrieve all persistent session memories |
| `memory_summarize` | LLM-powered summary of a memory layer |
| `memory_delete` | Delete a memory by ID |
| `memory_update` | Update fields on an existing memory |
| `memory_review` | List short-term memories for triage |
| `memory_promote` | Promote short-term → long-term |
| `memory_pin` | Pin a fact permanently (user-confirmed, never expires) |
| `memory_triggers_list` | List trigger words and their layers |
| `memory_triggers_add` | Add a new trigger word |
| `visual_report` | Render URL → screenshot → vision analysis → full UI/UX report (requires `[visual]`) |
| `visual_screenshot` | Render URL → return base64 PNG (requires `[visual]`) |
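These are ordinary MCP tools, so any MCP client can drive them programmatically. The snippet below is a minimal sketch using the official `mcp` Python SDK to launch the server over stdio and call `memory_briefing`; the tool names come from the table above, but the empty-argument call is an assumption for illustration, not documented behavior.

```python
# Minimal sketch: drive the igenius-mcp server from the official `mcp` Python SDK.
# Assumes `pip install mcp igenius-mcp` and a valid IGENIUS_API_KEY.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="igenius-mcp",  # or: python -m igenius_mcp.server
    env={"IGENIUS_API_KEY": "ig_your_key_here"},
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # expect the tools listed above

            # Call memory_briefing first, as the table recommends.
            briefing = await session.call_tool("memory_briefing", {})
            print(briefing.content)

asyncio.run(main())
```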
LLM Requirements
iGenius uses an LLM backend for AI extraction, consolidation, and (optionally) visual analysis. You can use a local or remote LLM provider.
Local Setup (LM Studio, Ollama, etc.)
| Requirement | Minimum |
|---|---|
| GPU VRAM | 6 GB+ |
| Recommended model | Qwen 3.5 4B (non-thinking) or equivalent |
| Context window | 3,000+ tokens |
⚠️ IMPORTANT: Do NOT use thinking/reasoning models (e.g. QwQ, DeepSeek R1, o1, o3). Thinking models emit `<think>` chains before the actual response, which breaks iGenius's structured JSON extraction pipeline. Only use standard non-thinking (instruct/chat) models.
Why these specs? iGenius sends structured extraction prompts that expect clean JSON output. A 4B-parameter non-thinking model at 3k context is the sweet spot for fast, accurate extraction without hallucination or timeouts. Larger models (8B+) work too — just ensure you have the VRAM headroom and that the model is a non-thinking variant.
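If you're unsure whether your loaded model qualifies, a quick check is to hit the OpenAI-compatible endpoint your local runtime exposes and confirm the reply is plain JSON with no `<think>` preamble. A rough sketch, assuming LM Studio's default endpoint on `localhost:1234`; the model name is a placeholder for whatever you have loaded.

```python
# Rough sanity check: does the local model return clean JSON without a <think> preamble?
# Assumes an OpenAI-compatible server (e.g. LM Studio) at localhost:1234.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "qwen-4b-instruct",  # placeholder: use the model you loaded
        "messages": [
            {"role": "user",
             "content": 'Reply with only this JSON object: {"ok": true}'}
        ],
        "temperature": 0,
    },
    timeout=60,
)
text = resp.json()["choices"][0]["message"]["content"]

if text.lstrip().startswith("<think>"):
    print("Thinking model detected - not suitable for iGenius extraction.")
else:
    print("Looks fine:", text)
```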
Remote Setup (OpenAI, Anthropic, Google, etc.)
No local hardware requirements. Any API-accessible model works — configure the provider, model name, and API key in the VS Code extension settings or environment variables.
Environment Variables
| Variable | Required | Default |
|---|---|---|
| `IGENIUS_API_KEY` | Yes | — |
| `IGENIUS_API_URL` | No | `https://igenius-memory.online/v1` |
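Both variables are read from the environment of whatever process launches the server, so they can be set in the `env` block of your MCP config or exported in your shell. As a rough illustration of the resolution the table implies (a hypothetical helper, not the package's actual code):

```python
import os

# Hypothetical sketch of how the table above resolves:
# IGENIUS_API_KEY is mandatory; IGENIUS_API_URL falls back to the hosted default.
api_key = os.environ["IGENIUS_API_KEY"]          # raises KeyError if unset
api_url = os.environ.get("IGENIUS_API_URL", "https://igenius-memory.online/v1")
```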
Visual Tools (Optional)
Give your AI agent eyes — render any URL, take a pixel-perfect screenshot, and get instant UI/UX analysis from a local vision model.
Install
pip install "igenius-mcp[visual]"
python -m playwright install chromium
Then load a vision-capable model in LM Studio (e.g. Qwen 3.5 9B Vision, non-thinking).
⚠️ Do NOT use thinking/reasoning vision models — same restriction as above.
Visual MCP Tools
| Tool | Description |
|---|---|
| `visual_report` | Render URL → screenshot → vision analysis → full UI/UX report |
| `visual_screenshot` | Render URL → return base64-encoded PNG (no analysis) |
Visual Environment Variables
| Variable | Default | Description |
|---|---|---|
| `IGENIUS_VISION_URL` | `http://localhost:1234/v1` | Vision model API endpoint |
| `IGENIUS_VISION_MODEL` | auto-detect | Override the vision model name |
| `IGENIUS_VISION_KEY` | — | API key for vision endpoint (e.g. LM Studio auth token) |
| `IGENIUS_VIEWPORT_W` | 1280 | Screenshot viewport width |
| `IGENIUS_VIEWPORT_H` | 800 | Screenshot viewport height |
100% local — screenshots and analysis never leave your machine.
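The screenshot step is a headless-Chromium render via Playwright. The sketch below is not the package's implementation, just the same idea in miniature: read the viewport variables above, render the page, and return a base64 PNG the way `visual_screenshot` is described.

```python
# Sketch of what visual_screenshot is described as doing: headless render -> base64 PNG.
# Not the package's actual code; assumes `python -m playwright install chromium` has run.
import base64
import os

from playwright.sync_api import sync_playwright

def screenshot_b64(url: str) -> str:
    width = int(os.environ.get("IGENIUS_VIEWPORT_W", "1280"))
    height = int(os.environ.get("IGENIUS_VIEWPORT_H", "800"))

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page(viewport={"width": width, "height": height})
        page.goto(url, wait_until="networkidle")
        png = page.screenshot(full_page=True)
        browser.close()

    return base64.b64encode(png).decode("ascii")

print(screenshot_b64("https://example.com")[:80], "...")
```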
Agent Instructions
For best results, add the iGenius agent instructions to your workspace:
- VS Code: Place `igenius.instructions.md` in `~/.vscode/prompts/`
- Claude Code: Add to `CLAUDE.md`
- Workspace: Add to `.github/copilot-instructions.md`
Get the template at igenius-memory.info
How It Works
Agent ←→ MCP (stdio) ←→ igenius-mcp ←→ REST API ←→ iGenius Backend
The memory tools are a thin proxy — they translate MCP tool calls into REST API requests. All AI extraction, LLM summarization, and encryption happens server-side.
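In other words, each tool call becomes a single authenticated HTTPS request to the hosted API. A rough sketch of that translation is below; the `/memory/search` route and `Authorization` header are hypothetical and shown only to illustrate the proxy idea.

```python
# Rough sketch of the proxy idea: one MCP tool call -> one authenticated REST request.
# The /memory/search path and payload shape are hypothetical, for illustration only.
import os
import requests

API_URL = os.environ.get("IGENIUS_API_URL", "https://igenius-memory.online/v1")
API_KEY = os.environ["IGENIUS_API_KEY"]

def call_memory_search(query: str) -> dict:
    resp = requests.post(
        f"{API_URL}/memory/search",                      # hypothetical route
        headers={"Authorization": f"Bearer {API_KEY}"},  # hypothetical auth header
        json={"query": query},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```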
The visual tools run locally — Playwright renders URLs on your machine and a local vision model (e.g. LM Studio + Qwen2.5-VL) analyzes the screenshots. Screenshots and analysis never leave your machine.
Plans
| Plan | Price | Requests | API Keys | IPs/Key |
|---|---|---|---|---|
| Starter | Free | 1,000/week | 1 | 3 |
| Pro | $19/mo | 50,000/day | 5 | 10 |
| Enterprise | Contact | 500,000/day | 20 | 50 |
Details at igenius-memory.store
Coming Soon
iGenius Context Engine — unlimited effective context for local LLMs through intelligent recursive summarization. Run a 3B model with a 4K context window and handle conversations of any length.
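Nothing has shipped for this yet; the sketch below is simply one common way recursive summarization is done (chunk the transcript, summarize each chunk, then summarize the summaries until the text fits the window), not a preview of the Context Engine. `summarize()` is a placeholder for whatever LLM call you use.

```python
# One generic way to do recursive summarization; not the iGenius Context Engine itself.
def summarize(text: str) -> str:
    """Placeholder for an LLM call that returns a shorter summary of `text`."""
    raise NotImplementedError

def fit_into_context(transcript: str, max_chars: int = 12_000, chunk: int = 4_000) -> str:
    """Repeatedly chunk-and-summarize until the text fits the target budget."""
    text = transcript
    while len(text) > max_chars:
        chunks = [text[i:i + chunk] for i in range(0, len(text), chunk)]
        text = "\n".join(summarize(c) for c in chunks)
    return text
```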
Support the Project
iGenius Memory is built and maintained by NovaMind Labs. If you find it useful, here's how you can help:
- Star the repo — it helps more developers discover iGenius
- Upgrade to Pro — $19/mo directly funds development → igenius-memory.store
- Report bugs & ideas — open an issue
- Spread the word — tell your friends, tweet about it, write a blog post
Every user, star, and subscription helps keep iGenius alive and improving. Thank you!
License
MIT
File details
Details for the file igenius_mcp-0.5.2.tar.gz.
File metadata
- Download URL: igenius_mcp-0.5.2.tar.gz
- Upload date:
- Size: 15.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `a6684a15353fc88a5a9a41812e72c2748a659aa5eb46ecaa982229bf08a8f4b7` |
| MD5 | `c4884e3af78c46c5f9e0fe3c39df65d1` |
| BLAKE2b-256 | `9f56200ea46efa9f59d6095e70b014778da2173f1c4b2d3d12d1cd7785dfd63c` |
Provenance
The following attestation bundles were made for igenius_mcp-0.5.2.tar.gz:
Publisher: publish.yml on vehoelite/igenius-mcp
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: igenius_mcp-0.5.2.tar.gz
- Subject digest: a6684a15353fc88a5a9a41812e72c2748a659aa5eb46ecaa982229bf08a8f4b7
- Sigstore transparency entry: 1137292287
- Sigstore integration time:
- Permalink: vehoelite/igenius-mcp@d179d0dad2b8252b3e998d4c4b344deb9fee311e
- Branch / Tag: refs/tags/v0.5.2
- Owner: https://github.com/vehoelite
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@d179d0dad2b8252b3e998d4c4b344deb9fee311e
- Trigger Event: release
File details
Details for the file igenius_mcp-0.5.2-py3-none-any.whl.
File metadata
- Download URL: igenius_mcp-0.5.2-py3-none-any.whl
- Upload date:
- Size: 16.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `2737d76f198fd725c0b302fd19013745e8f1100ee0e6553f9901ad0883f92245` |
| MD5 | `4d36e1b8654e2e460c587d295af31bc1` |
| BLAKE2b-256 | `6ee1eeb8b5a0746eef6e9aa19197e2e4a704d4298a5b6418021c3184188ae6ae` |
Provenance
The following attestation bundles were made for igenius_mcp-0.5.2-py3-none-any.whl:
Publisher: publish.yml on vehoelite/igenius-mcp
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: igenius_mcp-0.5.2-py3-none-any.whl
- Subject digest: 2737d76f198fd725c0b302fd19013745e8f1100ee0e6553f9901ad0883f92245
- Sigstore transparency entry: 1137292354
- Sigstore integration time:
- Permalink: vehoelite/igenius-mcp@d179d0dad2b8252b3e998d4c4b344deb9fee311e
- Branch / Tag: refs/tags/v0.5.2
- Owner: https://github.com/vehoelite
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@d179d0dad2b8252b3e998d4c4b344deb9fee311e
- Trigger Event: release