Vidurai — Local-First AI Memory & Context Engine (SF-V2)
"विस्मृति भी विद्या है" — Forgetting too is knowledge.
Vidurai is a local-first memory layer for AI, purpose-built for developers who use AI assistants daily.
It sits between you and your AI tools (VS Code, terminal, ChatGPT / Claude / Gemini in the browser, LLM APIs) and builds a compressed, structured project memory from real activity — not just chat logs.
Instead of storing everything forever (like typical vector DB systems), Vidurai uses Smart Forgetting (SF-V2) to:
- retain high-signal events
- remove noise over time
- maintain a "spine" of the project
- generate multi-audience gists (developer, AI, manager, stakeholder)
Vidurai is not a single-app plugin. It is a universal context engine for human–AI workflows.
Why Vidurai?
Most AI workflows today:
- ❌ Forget everything between sessions
- ❌ Require constant re-explaining
- ❌ Burn tokens on unnecessary context
- ❌ Don't understand your project's evolution
Vidurai changes this:
- ✅ Watches real developer behavior
- ✅ Uses a Shared Event Schema to normalize all inputs
- ✅ Stores context locally inside a project-specific brain
- ✅ Runs Smart Forgetting (SF-V2) to reduce entropy
- ✅ Serves audience-specific context on demand
Result: Your AI behaves like a project-aware collaborator, not a stateless intern.
High-Level Architecture (v2.0.0)
[VS Code] [Browser UI] [Terminal]
│ │ │
▼ ▼ ▼
┌──────────────────────────────────────────────┐
│ Vidurai Daemon (Local) │
│ [Normalizer] -> [SF-V2 Engine] -> [DB] │
└──────────────────────┬───────────────────────┘
│
┌─────────────┴─────────────┐
▼ ▼
[SDK / CLI] [LLM Proxy]
(Audience Gists) (Context Injection)
Vidurai v2.0.0 consists of five core components:
Python SDK (this PyPI package)
- Smart Forgetting (SF-V2) primitives
- Project memory storage
- Forgetting ledger
- Shared Event Schema (Pydantic models)
- vidurai CLI
- Local data directory: ~/.vidurai/
Vidurai Daemon (local background service)
- Receives events from VS Code, Browser, Proxy, CLI
- Normalizes them into the shared schema
- Applies SF-V2 policies
- Stores context in the local project brain
- Serves gists via a local HTTP API (default localhost:7777)
VS Code Extension
- Captures file edits
- Captures terminal commands
- Captures diagnostics (errors / warnings)
- Streams events to the Daemon
Browser Extension
- Captures prompts and replies from ChatGPT, Claude, Gemini, etc.
- Tracks AI conversations alongside code changes
- Supports context / gist injection into AI chats
LLM Proxy (optional)
- Wraps LLM API calls
- Injects Vidurai context automatically
- Useful for script / CLI / server-side workflows
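The injection step the proxy performs can be illustrated with a toy function. This is a sketch only: the function name and OpenAI-style message shape are assumptions for illustration, not the actual vidurai-proxy API.

```python
# Illustrative sketch: how a proxy-style wrapper might prepend project
# context to a chat request before forwarding it to an LLM API.
# The function name and message shape are assumptions, not vidurai-proxy's API.

def inject_context(messages: list, context: str) -> list:
    """Prepend a system message carrying Vidurai context, if any."""
    if not context:
        return messages
    system_msg = {"role": "system", "content": f"Project context:\n{context}"}
    return [system_msg] + messages

messages = [{"role": "user", "content": "Why is /api/auth returning 401?"}]
wrapped = inject_context(messages, "Recent change: auth.py line 42 bugfix")
# wrapped now starts with a system message carrying the context
```

The real proxy sits on the wire (default localhost:9999) and does this transparently; the sketch only shows where the context lands in the request.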
Shared Event Schema
Every Vidurai tool emits events using the same structure:
{
"schema_version": "vidurai-events-v1",
"event_id": "uuid",
"timestamp": "2025-11-26T18:30:00.123Z",
"source": "vscode | browser | daemon | proxy | cli",
"channel": "human | ai | system",
"kind": "file_edit | terminal_command | diagnostic | ai_message | ai_response | memory_event | metric_event | system_event",
"project_root": "/abs/path",
"project_id": "stable-project-hash",
"session_id": "session-uuid",
"payload": { "...kind-specific fields..." }
}
- Python models live inside the SDK: vidurai.shared.events
- TypeScript models live inside the VS Code and Browser extensions
- All are generated from the canonical spec shared across tools
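The event shape above can be sketched in Python. This uses stdlib dataclasses purely for illustration; the SDK's actual models in vidurai.shared.events are Pydantic, and only the field names follow the JSON example above.

```python
# Minimal sketch of the shared event shape using stdlib dataclasses.
# The real SDK models (vidurai.shared.events) are Pydantic; field names
# here mirror the JSON example above.

from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass
class ViduraiEvent:
    source: str                  # "vscode" | "browser" | "daemon" | "proxy" | "cli"
    channel: str                 # "human" | "ai" | "system"
    kind: str                    # e.g. "file_edit", "ai_message"
    project_root: str
    project_id: str
    session_id: str
    payload: dict = field(default_factory=dict)
    schema_version: str = "vidurai-events-v1"
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

evt = ViduraiEvent(
    source="vscode", channel="human", kind="file_edit",
    project_root="/abs/path", project_id="stable-project-hash",
    session_id="session-uuid", payload={"file": "auth.py"},
)
```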
Smart Forgetting (SF-V2)
Vidurai does not store everything.
SF-V2 applies a biologically inspired forgetting model:
- Dual-trace memory (verbatim + gist)
- Retention scoring (importance, recency, access patterns)
- Salience models (what actually matters)
- Access frequency tracking
- Selective forgetting with a transparent ledger
- Manual pinning for critical memories
Over time, the memory becomes sharper, not bigger.
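A toy retention score shows how importance, recency, and access frequency might combine in the spirit of SF-V2. The weights, decay constants, and function shape are invented for illustration; they are not the engine's actual parameters.

```python
# Toy retention score: importance x (recency decay + access reinforcement).
# Weights and half-life are invented for illustration, not SF-V2's actual values.

import math

def retention_score(importance: float, age_days: float, access_count: int,
                    half_life_days: float = 14.0) -> float:
    """Higher scores survive forgetting passes; low scorers are candidates
    for demotion from verbatim trace to gist, or for removal."""
    recency = math.exp(-math.log(2) * age_days / half_life_days)  # exponential decay
    reinforcement = min(math.log1p(access_count), 1.0)            # diminishing returns
    return importance * (0.6 * recency + 0.4 * reinforcement)

# A fresh, frequently accessed memory outscores a stale, untouched one
# of equal importance:
fresh = retention_score(importance=0.9, age_days=1, access_count=5)
stale = retention_score(importance=0.9, age_days=90, access_count=0)
assert fresh > stale
```

Pinned memories would simply bypass this scoring; the forgetting ledger records what was demoted or removed and why.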
Multi-Audience Context
Vidurai can produce different summaries for different consumers:
| Audience | Content |
|---|---|
| developer | files, diffs, errors, terminal commands |
| ai | token-efficient context chunks optimized for LLM input |
| manager | "what changed?" summaries and progress updates |
| stakeholder | impact, risk, high-level narrative |
You choose the audience per query. The Daemon and SDK cooperate to generate the appropriate gist for each audience.
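A toy dispatcher makes the audience idea concrete. The real gist generation happens in the Daemon and SDK; these templates and field names are invented purely to illustrate how one event can yield audience-specific output.

```python
# Toy per-audience gist templates. Invented for illustration; the actual
# gists are produced by the Daemon/SDK, not by string templates like these.

GIST_STYLES = {
    "developer": lambda e: f"{e['file']}: {e['summary']} ({e['errors']} errors)",
    "ai": lambda e: f"ctx: {e['summary']}",          # token-lean for LLM input
    "manager": lambda e: f"Progress: {e['summary']}.",
    "stakeholder": lambda e: f"Impact: {e['impact']}.",
}

def gist_for(audience: str, event: dict) -> str:
    return GIST_STYLES[audience](event)

event = {"file": "auth.py", "summary": "fixed 401 on /api/auth",
         "errors": 0, "impact": "login reliability restored"}
print(gist_for("manager", event))   # Progress: fixed 401 on /api/auth.
```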
Installation
1. Install the Python SDK
pip install vidurai
Requires Python 3.9+.
2. Run the Vidurai Daemon
A. Docker (recommended)
docker run --rm -p 7777:7777 \
-v ~/.vidurai:/root/.vidurai \
chandantochandan/vidurai-daemon:2.0.0
B. From source
cd vidurai-daemon
pip install .
python daemon.py
3. (Optional) Run the LLM Proxy
docker run --rm -p 9999:9999 \
chandantochandan/vidurai-proxy:2.0.0
4. Connect Tools
VS Code
Install the extension from the Marketplace:
ID: vidurai.vidurai
Browser
Load the "Vidurai – Universal AI Context" extension in developer mode, or install it from the Chrome Web Store once published.
Once connected, the extensions start streaming events to the Daemon.
Using the SDK
The SDK exposes helpers to talk to the local project brain and retrieve SF-V2 gists.
Note: The exact class and method names may evolve. Always refer to the latest API reference or the vidurai package source.
Example shape of a typical integration:
from vidurai.vismriti_memory import VismritiMemory
# 1. Initialize memory for a project
memory = VismritiMemory(project_path=".")
# 2. Store a memory with automatic salience detection
memory.remember(
gist="Fixed authentication bug in /api/auth endpoint",
tags=["bugfix", "auth"],
metadata={"file": "auth.py", "line": 42}
)
# 3. Recall relevant context
results = memory.recall(
query="authentication issues",
limit=5
)
for mem in results:
print(f"[{mem.salience}] {mem.gist}")
# 4. Get project context for AI injection
context = memory.get_context(
query="current auth implementation",
max_tokens=2000
)
print(context)
This allows you to:
- inject compressed contexts into LLM prompts
- generate high-level updates for humans
- query recent project and conversation changes
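The first of those uses can be sketched as plain prompt assembly. The template below is an assumption for illustration, not a documented pattern; it just shows one way to splice the get_context() output into an LLM prompt.

```python
# Illustrative only: splicing a Vidurai context block into an LLM prompt.
# The template wording is invented; use whatever framing your LLM setup expects.

def build_prompt(context: str, question: str) -> str:
    """Prefix the user question with compressed project context."""
    return (
        "You are assisting on this project. Relevant memory:\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

context = "Fixed authentication bug in /api/auth endpoint (auth.py:42)"
prompt = build_prompt(context, "Is the 401 issue on /api/auth resolved?")
```

Because get_context() already respects a max_tokens budget, the assembled prompt stays within a predictable size.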
Local-First & Privacy
Vidurai is designed around local-first privacy:
- All data lives under ~/.vidurai/
- The Daemon runs on localhost
- There is no cloud dependency by default
- Any future remote / team sync will always be opt-in, explicit, and documented
You control:
- what is stored
- which projects are tracked
- when to clear memory
Migration from v1.x
Vidurai v2.0.0 is a major architecture update.
Key changes:
- RL-centric v1.x design is now considered legacy
- Kosha-based classes are deprecated in favor of SF-V2 and the shared schema
- Daemon, SDK, Proxy, and extensions are aligned around a unified event model
- Storage layout and configuration have been simplified
To upgrade:
pip uninstall vidurai
pip install vidurai
Then reinstall / upgrade:
- Daemon
- VS Code extension
- Browser extension
Legacy design documents are preserved under docs/archive/implementation/ for historical reference.
Repository Layout
vidurai/ # Python SDK (this PyPI package)
vidurai-daemon/ # Daemon service
vidurai-vscode-extension/ # VS Code extension
vidurai-browser-extension/ # Browser extension
vidurai-proxy/ # Optional LLM proxy
tests/ # SDK tests
docs/ # Documentation
docs/archive/implementation/ # Historical PHASE_* docs
ARCHITECTURE_OVERVIEW.md # Canonical system architecture
CHANGELOG.md # Version history
AGENTS.md # AI agent protocol
BRAND.md # Brand & visual identity rules
active_context.md # Current project state
Links
- Website: https://vidurai.ai
- Docs: https://docs.vidurai.ai
- Source: https://github.com/chandantochandan/vidurai
- Bug Tracker: https://github.com/chandantochandan/vidurai/issues
- Changelog: https://github.com/chandantochandan/vidurai/blob/main/CHANGELOG.md
- Docker Hub: https://hub.docker.com/u/chandantochandan
License
Vidurai is released under the MIT License. See the LICENSE file for details.
File details
Details for the file vidurai-2.0.0.tar.gz.
File metadata
- Download URL: vidurai-2.0.0.tar.gz
- Upload date:
- Size: 189.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4a09dcad11f902fa11cde2b30690e060269aa9f29e5295965b8dc2e5562cf907 |
| MD5 | 905667578cba4ac437c77276cf2c5097 |
| BLAKE2b-256 | 34212bc4d6181e362904c5c8e3b013178fdda7d9b0f34be1aa0f5cd46c81609a |
File details
Details for the file vidurai-2.0.0-py3-none-any.whl.
File metadata
- Download URL: vidurai-2.0.0-py3-none-any.whl
- Upload date:
- Size: 150.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | bdadf44e3cadec8a926004b6862fac04acc5a1017c11ae8b5937b4b2b976e6c3 |
| MD5 | 533a732cd3fb4f97a047592675ed82bf |
| BLAKE2b-256 | 0548b1a5bb12795c6de583a03df445c69755e967fdce6ae040bfbd1166d64dcf |