
Time Machine for AI Agents — Cognitive Version Control for LLM context


⏳ Time Machine for AI Agents

Cognitive Version Control (CVC)

Save. Branch. Rewind. Merge. — Your AI agent just got an undo button.



pip install tm-ai

✨ Features · 🚀 Quick Start · 📖 Documentation · 🤝 Contributing · 💬 Community



Your AI coding agent is brilliant — for about 20 minutes.

Then it forgets what it already fixed, contradicts its own plan,
and loops on the same error for eternity.

Sound familiar?



🧠 What Is This?

Time Machine for AI Agents gives AI coding agents something they've never had: memory management that actually works.

Git, but for the AI's brain.
Instead of versioning source code, CVC versions the agent's entire context — every thought, every decision, every conversation turn — as an immutable, cryptographic Merkle DAG.

The agent can checkpoint its reasoning, branch into risky experiments, rewind when stuck, and merge only the insights that matter.


| | |
|---|---|
| 💾 **Save** | Checkpoint the agent's brain at any stable moment. |
| 🌿 **Branch** | Explore risky ideas in isolation. Main context stays clean. |
| 🔀 **Merge** | Merge learnings back — not raw logs. Semantic, not syntactic. |
| ⏪ **Rewind** | Stuck in a loop? Time-travel back instantly. |
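To make the four operations concrete, here is a toy, in-memory sketch of save/branch/rewind as a content-addressed commit graph. The class and field names are hypothetical illustrations of the idea, not CVC's actual internals:

```python
import hashlib
import json

class CognitiveStore:
    """Toy content-addressed commit graph illustrating save/branch/rewind.
    Hypothetical names; not CVC's real implementation."""

    def __init__(self):
        self.objects = {}                  # hash -> commit record
        self.branches = {"main": None}     # branch name -> head hash
        self.head = "main"

    def commit(self, context, message):
        """Save: snapshot the context. The hash covers content + parent,
        so history forms an immutable DAG."""
        record = {
            "parent": self.branches[self.head],
            "message": message,
            "context": context,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.objects[digest] = record
        self.branches[self.head] = digest
        return digest

    def branch(self, name):
        """Branch: a new pointer at the current head; nothing is copied."""
        self.branches[name] = self.branches[self.head]
        self.head = name

    def restore(self, digest):
        """Rewind: move the current branch pointer back to an old commit."""
        self.branches[self.head] = digest

    def context_at_head(self):
        return self.objects[self.branches[self.head]]["context"]

store = CognitiveStore()
good = store.commit(["plan: fix the auth bug"], "stable checkpoint")
store.branch("experiment")
store.commit(["plan: fix the auth bug", "risky refactor attempt"], "wip")
store.restore(good)               # stuck? rewind the experiment branch
print(store.context_at_head())    # back to the stable context
```

The key property is the same one Git relies on: branch and rewind are pointer moves, so exploration is cheap and the "good" state is never destroyed.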


🔥 The Problem We're Solving

The industry keeps making context windows bigger: 4K → 32K → 128K → 1M+ tokens.

It's not progress.

Research shows that after ~60% context utilisation, LLM reasoning quality falls off a cliff. One hallucination poisons everything that follows. Error cascades compound. The agent starts fighting itself.

A bigger window doesn't fix context rot.
It just gives it more room to spread.

The Real Issue

AI agents have zero ability to manage their own cognitive state. They can't save their work. They can't explore safely. They can't undo mistakes. They're solving a 500-piece puzzle while someone keeps removing pieces from the table.


📊 What the Research Shows

| Result | Metric | Source |
|---|---|---|
| 58.1% | Context reduction via branching | ContextBranch paper |
| 3.5× | Success rate improvement with rollback | GCC paper |
| ~90% | Cost reduction through caching | Prompt caching |
| ~85% | Latency reduction | Cached tokens skip processing |


⚙️ How It Works

CVC runs as a local proxy between your agent and the LLM provider.
The agent talks to CVC like any normal API. Behind the scenes, CVC manages everything.


%%{init: {'theme': 'dark', 'themeVariables': { 'fontSize': '14px', 'primaryColor': '#1a1a2e', 'primaryTextColor': '#e0e0e0', 'primaryBorderColor': '#7c3aed', 'lineColor': '#7c3aed', 'secondaryColor': '#16213e', 'tertiaryColor': '#0f3460', 'edgeLabelBackground': '#1a1a2e'}}}%%

flowchart LR
    subgraph LOCAL["🖥️  YOUR MACHINE"]
        direction TB

        subgraph AGENT["  Agent  "]
            A["🤖 Cursor / VS Code / Custom"]
        end

        subgraph PROXY["  CVC Proxy · localhost:8000  "]
            direction TB
            R["⚡ LangGraph Router"]
            R -->|CVC ops| E["🧠 Cognitive Engine"]
            R -->|passthrough| FWD["📡 Forward to LLM"]
        end

        subgraph STORAGE["  .cvc/ directory  "]
            direction LR
            S1["🗄️ SQLite\nCommit Graph\n& Metadata"]
            S2["📦 CAS Blobs\nZstandard\nCompressed"]
            S3["🔍 Chroma\nSemantic\nVectors"]
        end

        A -- "HTTP request" --> R
        E --> S1 & S2 & S3
    end

    subgraph CLOUD["  ☁️ LLM Provider  "]
        direction TB
        C1["Claude"]
        C2["GPT-5.2"]
        C3["Gemini"]
        C4["Ollama 🏠"]
    end

    FWD -- "HTTPS" --> CLOUD
    CLOUD -- "response" --> R
    R -- "response" --> A

    style LOCAL fill:#0d1117,stroke:#7c3aed,stroke-width:2px,color:#e0e0e0
    style AGENT fill:#1a1a2e,stroke:#58a6ff,stroke-width:1px,color:#e0e0e0
    style PROXY fill:#161b22,stroke:#7c3aed,stroke-width:2px,color:#e0e0e0
    style STORAGE fill:#161b22,stroke:#f0883e,stroke-width:1px,color:#e0e0e0
    style CLOUD fill:#0d1117,stroke:#58a6ff,stroke-width:2px,color:#e0e0e0
    style A fill:#1f6feb,stroke:#58a6ff,color:#ffffff
    style R fill:#7c3aed,stroke:#a78bfa,color:#ffffff
    style E fill:#238636,stroke:#3fb950,color:#ffffff
    style FWD fill:#1f6feb,stroke:#58a6ff,color:#ffffff
    style S1 fill:#21262d,stroke:#f0883e,color:#e0e0e0
    style S2 fill:#21262d,stroke:#f0883e,color:#e0e0e0
    style S3 fill:#21262d,stroke:#f0883e,color:#e0e0e0
    style C1 fill:#da7756,stroke:#f0883e,color:#ffffff
    style C2 fill:#1f6feb,stroke:#58a6ff,color:#ffffff
    style C3 fill:#238636,stroke:#3fb950,color:#ffffff
    style C4 fill:#6e7681,stroke:#8b949e,color:#ffffff

🎯 Three-Tiered Storage (All Local)

| Tier | What | Why |
|---|---|---|
| 🗄️ SQLite | Commit graph, branch pointers, metadata | Fast traversal, zero-config, works everywhere |
| 📦 CAS Blobs | Compressed context snapshots (Zstandard) | Content-addressable, deduplicated, efficient |
| 🔍 Chroma | Semantic embeddings (optional) | "Have I solved this before?" — search by meaning |

✨ Everything stays in .cvc/ inside your project
🔒 No cloud • No telemetry • Your agent's thoughts are yours
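The CAS tier's "content-addressable, deduplicated" properties can be sketched in a few lines: blobs are keyed by the hash of their uncompressed content, so identical snapshots are stored exactly once. This is an illustration, not CVC's storage code, and stdlib `zlib` stands in here for the Zstandard compression CVC describes:

```python
import hashlib
import zlib

class BlobStore:
    """Toy content-addressable blob store (sketch, not CVC's actual code).
    zlib stands in for Zstandard."""

    def __init__(self):
        self.blobs = {}    # sha256 hex digest -> compressed bytes

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self.blobs:          # dedup: same content, same key
            self.blobs[key] = zlib.compress(data)
        return key

    def get(self, key: str) -> bytes:
        return zlib.decompress(self.blobs[key])

store = BlobStore()
snapshot = b'{"turns": ["..."], "plan": "..."}' * 100
k1 = store.put(snapshot)
k2 = store.put(snapshot)       # storing a duplicate costs nothing
```

Because the key is derived from the content, two branches that share most of their history naturally share most of their stored blobs.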



🚀 Get Started


Prerequisites

Python 3.11+ • Git (for VCS bridge features)


📦 Install

Available on PyPI — install in one command, no cloning required.

pip install tm-ai

That's it. The cvc command is now available globally.

🔧 More install options
# With uv (faster)
uv pip install tm-ai

# As an isolated uv tool (always on PATH, no venv needed)
uv tool install tm-ai

# With provider extras
pip install "tm-ai[anthropic]"     # Anthropic (Claude)
pip install "tm-ai[openai]"        # OpenAI (GPT)
pip install "tm-ai[google]"        # Google (Gemini)
pip install "tm-ai[all]"           # Everything
🛠️ For contributors / local development
git clone https://github.com/mannuking/AI-Cognitive-Version-Control.git
cd AI-Cognitive-Version-Control
uv sync --extra dev           # or: pip install -e ".[dev]"

Cross-platform: Works on Windows, macOS, and Linux.
Global config is stored in the platform-appropriate location:

  • Windows: %LOCALAPPDATA%\cvc\config.json
  • macOS: ~/Library/Application Support/cvc/config.json
  • Linux: ~/.config/cvc/config.json

▶️ Run

# Navigate to any project where you want CVC
cd ~/my-project

# Interactive guided setup (picks provider, saves preferences, initialises .cvc/)
cvc setup

# — OR — manual setup:
cvc init

🔑 Set Your API Key

| Provider | Bash / Linux / macOS | PowerShell |
|---|---|---|
| Anthropic | `export ANTHROPIC_API_KEY="sk-ant-..."` | `$env:ANTHROPIC_API_KEY = "sk-ant-..."` |
| OpenAI | `export OPENAI_API_KEY="sk-..."` | `$env:OPENAI_API_KEY = "sk-..."` |
| Google | `export GOOGLE_API_KEY="AIza..."` | `$env:GOOGLE_API_KEY = "AIza..."` |
| Ollama | No key needed — just run `ollama serve` and `ollama pull qwen2.5-coder:7b` | |
# Start the proxy with your chosen provider
CVC_PROVIDER=anthropic cvc serve    # or: openai, google, ollama

# (Optional) Install Git hooks for automatic sync
cvc install-hooks

🔌 Connect Your Agent

Point your AI agent's API base URL to http://127.0.0.1:8000

CVC exposes an OpenAI-compatible /v1/chat/completions endpoint — any tool that speaks OpenAI format works out of the box.


| Tool | How to Connect |
|---|---|
| 🎯 Cursor | Set CVC as the API base URL in settings |
| 💻 VS Code + Copilot | Route through the CVC proxy |
| 🔧 Custom Agents | Standard OpenAI SDK, pointed at `localhost:8000` |
| 🦜 LangChain / CrewAI / AutoGen | Use CVC's 4 function-calling tools (`GET /cvc/tools`) |
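Because the proxy speaks the OpenAI chat-completions format, pointing any client at it is just a base-URL change. A minimal stdlib sketch of what such a request looks like (the model name below is one of the alternatives listed later in this README; substitute whatever your provider uses):

```python
import json
from urllib.request import Request

CVC_BASE_URL = "http://127.0.0.1:8000"   # the local CVC proxy

def build_chat_request(messages, model="claude-sonnet-4-5"):
    """Build an OpenAI-format chat completion request aimed at the
    CVC proxy. Any client that speaks this format can do the same."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return Request(
        f"{CVC_BASE_URL}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request([{"role": "user", "content": "hello"}])
# urllib.request.urlopen(req) would send it once `cvc serve` is running
print(req.full_url)
```

With the official OpenAI SDK the equivalent is setting `base_url` to `http://127.0.0.1:8000/v1` when constructing the client.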


📟 CLI Reference


| Command | Description |
|---|---|
| `cvc setup` | Interactive first-time setup (choose provider & model) |
| `cvc init` | Initialize `.cvc/` in your project |
| `cvc serve` | Start the Cognitive Proxy |
| `cvc status` | Show branch, HEAD, context size |
| `cvc log` | View commit history |
| `cvc commit -m "message"` | Create a cognitive checkpoint |
| `cvc branch <name>` | Create an exploration branch |
| `cvc merge <branch>` | Semantic merge into the active branch |
| `cvc restore <hash>` | Time-travel to a previous state |
| `cvc install-hooks` | Install Git ↔ CVC sync hooks |
| `cvc capture-snapshot` | Link the current Git commit to CVC state |


🔗 Git Integration

CVC doesn't replace Git — it bridges with it.


| Feature | What It Does |
|---|---|
| 🌲 Shadow Branches | CVC state lives on `cvc/main`, keeping your `main` branch clean |
| 📝 Git Notes | Every git commit is annotated with the CVC hash — "What was the AI thinking when it wrote this?" |
| 🔄 `post-commit` hook | Auto-captures cognitive state after every `git commit` |
| ⏰ `post-checkout` hook | Auto-restores the agent's brain when you `git checkout` an old commit |

📜 When you check out an old version of your code, CVC automatically restores
the agent's context to what it was when that code was written.

True cognitive time-travel.
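`cvc install-hooks` sets the hooks up for you; for readers curious what that amounts to, a hand-rolled `post-commit` hook would look roughly like the sketch below. The hook body and helper are illustrative only — the installed hooks may differ in detail:

```python
from pathlib import Path
import stat
import tempfile

# Illustrative hook body: after every git commit, link the commit
# to the agent's cognitive state via the documented CLI command.
HOOK = """#!/bin/sh
cvc capture-snapshot
"""

def install_post_commit(repo: Path) -> Path:
    """Write an executable post-commit hook into repo/.git/hooks
    (sketch of what `cvc install-hooks` automates)."""
    hooks = repo / ".git" / "hooks"
    hooks.mkdir(parents=True, exist_ok=True)
    hook = hooks / "post-commit"
    hook.write_text(HOOK)
    hook.chmod(hook.stat().st_mode | stat.S_IXUSR)   # make executable
    return hook

# Demo against a throwaway directory standing in for a real repo
demo_repo = Path(tempfile.mkdtemp())
hook_path = install_post_commit(demo_repo)
```

The `post-checkout` hook works the same way in reverse, calling into CVC to restore the matching cognitive state.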



⚡ Why It's Cheap

CVC structures prompts so committed history becomes a cacheable prefix.
When you rewind to a checkpoint, the model doesn't reprocess anything it's already seen.


| Metric | ❌ Without CVC | ✅ With CVC |
|---|---|---|
| 💰 Cost per restore | Full price | ~90% cheaper |
| ⚡ Latency per restore | Full processing | ~85% faster |
| 🔄 Checkpoint frequency | Impractical | Economically viable |

🔥 Works today with Anthropic, OpenAI, Google Gemini, and Ollama
💡 Prompt caching optimised per provider
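For the Anthropic case, "committed history as a cacheable prefix" means marking the end of the stable prefix with a `cache_control` breakpoint so the provider can reuse its processed form. The helper below is a sketch of that shaping, not CVC's actual request pipeline:

```python
def with_cached_prefix(committed_history, new_turns):
    """Mark the committed history as a cacheable prefix, Anthropic-style.
    Illustrative sketch; CVC's real request shaping may differ."""
    messages = [dict(m) for m in committed_history]   # don't mutate input
    if messages:
        last = messages[-1]
        # Anthropic's API takes content blocks; the cache_control marker
        # on the final block of the stable prefix sets the cache breakpoint.
        last["content"] = [{
            "type": "text",
            "text": last["content"],
            "cache_control": {"type": "ephemeral"},
        }]
    return messages + list(new_turns)

history = [{"role": "user", "content": "committed checkpoint context"}]
msgs = with_cached_prefix(
    history,
    [{"role": "user", "content": "next step"}],
)
```

On a rewind, the prefix up to the breakpoint is byte-identical to what the provider already cached, so only the new turns are processed at full price.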



🤖 Supported Providers

Pick your provider. CVC handles the rest.


| Provider | Default Model | Alternatives | Notes |
|---|---|---|---|
| Anthropic | `claude-opus-4-6` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` | Prompt caching with `cache_control` |
| OpenAI | `gpt-5.2` | `gpt-5.2-codex`, `gpt-5-mini`, `gpt-4.1` | Automatic prefix caching |
| Google | `gemini-3-pro` | `gemini-3-flash`, `gemini-2.5-flash`, `gemini-2.5-pro` | OpenAI-compatible endpoint |
| Ollama | `qwen2.5-coder:7b` | `qwen3-coder:30b`, `devstral:24b`, `deepseek-r1:8b` | 100% local, no API key needed |


⚙️ Configuration

All via environment variables — no config files to manage


| Variable | Default | What It Does |
|---|---|---|
| `CVC_AGENT_ID` | `sofia` | Agent identifier |
| `CVC_DEFAULT_BRANCH` | `main` | Default branch |
| `CVC_ANCHOR_INTERVAL` | `10` | Full snapshot every N commits (others are delta-compressed) |
| `CVC_PROVIDER` | `anthropic` | LLM provider |
| `CVC_MODEL` | auto | Model name (auto-detected per provider) |
| `ANTHROPIC_API_KEY` | — | Required for the `anthropic` provider |
| `OPENAI_API_KEY` | — | Required for the `openai` provider |
| `GOOGLE_API_KEY` | — | Required for the `google` provider |
| `CVC_HOST` | `127.0.0.1` | Proxy host |
| `CVC_PORT` | `8000` | Proxy port |
| `CVC_VECTOR_ENABLED` | `false` | Enable semantic search (Chroma) |
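"All via environment variables" means configuration is plain environment lookup with the defaults above. A sketch (not CVC's actual config module, and only a subset of the variables) of what that amounts to:

```python
import os

# Defaults mirror the table above (subset); sketch only.
DEFAULTS = {
    "CVC_PROVIDER": "anthropic",
    "CVC_DEFAULT_BRANCH": "main",
    "CVC_ANCHOR_INTERVAL": "10",
    "CVC_HOST": "127.0.0.1",
    "CVC_PORT": "8000",
    "CVC_VECTOR_ENABLED": "false",
}

def setting(name: str) -> str:
    """Environment value if set, otherwise the documented default."""
    return os.environ.get(name, DEFAULTS.get(name, ""))

os.environ["CVC_PROVIDER"] = "ollama"     # override a single value
print(setting("CVC_PROVIDER"), setting("CVC_PORT"))
```

Overrides can live in your shell profile or the command line (e.g. `CVC_PROVIDER=anthropic cvc serve`, as shown in the Run section).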


🎯 Who Is This For?


**👤 Solo Developers:** Your AI stops losing context mid-session. Explore multiple approaches. Undo mistakes. Never re-explain the same thing twice.

**🏢 Teams & Organizations:** Review the AI's reasoning, not just its output. Cryptographic audit trails. Shared cognitive state across team members. Compliance-ready.

**🌐 Open Source:** See how an AI-generated PR was produced. Inspect for hallucination patterns. Build project knowledge bases from commit embeddings.



🗺️ Roadmap


| Feature | Description |
|---|---|
| ✅ OpenAI Adapter | GPT-5.2 / GPT-5.2-Codex / GPT-5-mini |
| ✅ Google Gemini Adapter | Gemini 3 Pro / Flash / 2.5 Flash |
| ✅ Ollama (Local) | Qwen 2.5 Coder / Qwen 3 Coder / DeepSeek-R1 / Devstral |
| 🎨 VS Code Extension | Visual commit graph and time-travel slider |
| 🌐 MCP Server | Native Model Context Protocol integration |
| 👥 Multi-agent support | Shared CVC database with conflict resolution |
| ☁️ Cloud sync | S3/MinIO for team collaboration |
| 📊 Metrics dashboard | Cache hit rates, context utilisation, branch success rates |


🤝 Contributing

This repo is public and open to collaboration.

Whether you're fixing a typo or building an entirely new provider adapter — contributions are welcome.


Fork → Branch → Commit → Push → PR


🎯 Areas Where Help Is Needed

| Area | Difficulty |
|---|---|
| 🔌 Additional provider adapters (Mistral, Cohere, etc.) | 🟡 Medium |
| 🧪 Tests & edge cases | 🟢 Easy–Medium |
| 🖥️ VS Code Extension | 🔴 Hard |
| 🌐 MCP Server | 🟡 Medium |
| 🔒 Security audit | 🟠 Medium–Hard |

🛠️ Dev Setup

git clone https://github.com/YOUR_USERNAME/AI-Cognitive-Version-Control.git
cd AI-Cognitive-Version-Control
uv sync --extra dev


📚 Research

CVC is grounded in published research


| Paper | Key Finding |
|---|---|
| ContextBranch | 58.1% context reduction via branching |
| GCC | 11.7% → 40.7% success rate with rollback |
| Merkle-CRDTs | Structural deduplication for DAGs |
| Prompt Caching | Anthropic/OpenAI/Google token reuse |


📜 License

MIT — see LICENSE





✨ Because AI agents deserve an undo button. ✨


⭐ Star this repo if you believe in giving AI agents memory that actually works.




Made with ❤️ by developers who got tired of AI agents forgetting what they just did.


⭐ Star · 🐛 Bug · 💡 Feature · 🔀 PR


Download files

| Distribution | File | Size |
|---|---|---|
| Source | `tm_ai-0.2.2.tar.gz` | 506.5 kB |
| Built (wheel) | `tm_ai-0.2.2-py3-none-any.whl` | 59.8 kB |

File details: `tm_ai-0.2.2.tar.gz`

  • Size: 506.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.8.18

| Algorithm | Hash digest |
|---|---|
| SHA256 | `0648882632345c8f90f28cd03c707fb13f8bcbfe2cb4ecde7572c0c863f77066` |
| MD5 | `b2b056d1c37b62b8ba819588e3550456` |
| BLAKE2b-256 | `c445139beff00abf81b4f0acc226d8e665d51fd6001b272113614704aaf3c605` |

File details: `tm_ai-0.2.2-py3-none-any.whl`

  • Size: 59.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.8.18

| Algorithm | Hash digest |
|---|---|
| SHA256 | `a01f232fbf95af99fbb57e59fb698c37516e94d4c8db9edd5dfcae2b406ba66e` |
| MD5 | `1ed80da1e1d10e69f724bd76a8db224b` |
| BLAKE2b-256 | `82c9c35dffee4645abe943ffe0e5d0cda7a87a338ea1605b33a95ff5e35e0b17` |
