
Code Data Ark


Code Data Ark (cda) is a local observability and intelligence platform for VS Code + GitHub Copilot Chat sessions. It ingests everything VS Code writes to disk — transcripts, tool calls, VFS blobs, workspace state — and runs a multi-stage pipeline to turn that raw activity into structured data you can actually reason about.

The core insight is that your chat history is not just logs. It carries behavioral signals: moments you corrected the agent, redirected it, expressed frustration, or confirmed that something finally worked. Ark extracts those signals, scores session quality with a heat model, and surfaces the patterns — so you can understand how you work with AI, not just what was said.

On top of that signal layer, Ark builds a semantic intelligence layer: embeddings over all your sessions, full-text and code-symbol search, anomaly alerts, session summaries, and related-session discovery. All of this lives in a local SQLite database, queryable via a 40+ command CLI or a background web dashboard.

The runtime is managed by an embedded process kernel (PMF) that supervises the watcher daemon, web UI, and pipeline tasks as background services — giving the whole system a lifecycle you can control without touching a process manager.

In short: point it at your VS Code data directory, run cda sync, and you have a searchable, annotated, semantically indexed record of every Copilot session you've ever had — with behavioral scores and anomaly detection included.

✨ Key Capabilities

  • Multi-stage pipeline: ingest → reconstruct → extract → embed — each stage enriches the data further
  • Behavioral signal detection: 200+ keyword patterns across 6 signal types; frustration, correction, recovery
  • Heat scoring: weighted session quality score (0–100) that tracks arc from friction to resolution
  • Semantic search: miniLM embeddings over all sessions for similarity, related-session discovery, and topic clustering
  • Full-text search: FTS5 index over all exchanges, tool calls, and code symbols
  • Live watcher daemon: monitors VS Code directories, queues changes, replays on crash
  • Background web UI: session drilldown, signal summaries, alert views, tool-call detail, VFS inspection
  • PMF Embedded Kernel: local service lifecycle management — start, stop, restart, status for all Ark daemons
  • Export workflows: JSON, JSONL, and plain-text session export

🚀 Installation

Prerequisites

  • Python 3.8+
  • VS Code with the Copilot Chat extension installed

Install from PyPI

pip install code-data-ark

Install with pipx

pipx install code-data-ark

Install from source

git clone https://github.com/goCosmix/cda.git
cd cda/source
pip install -e .

Install development dependencies

pip install -e ".[dev]"
# or
make install-dev

The cda console command is installed into your active Python environment's bin directory. Activate your virtual environment before running cda.

⚡ Quick Start

  1. Install
pip install code-data-ark
  2. Initialize — create ~/.cda/ and validate your VS Code data path
cda init
  3. Ingest all VS Code session data
cda sync
  4. Start the live watcher daemon
cda watch start
  5. Open the web dashboard
cda serve   # → http://127.0.0.1:10001
  6. Build semantic intelligence (optional, requires sentence-transformers)
cda embed build

🌐 Web UI

  • Background service: cda ui start
  • Stop service: cda ui stop
  • Service status: cda ui status
  • Foreground mode: cda serve

The web UI includes:

  • Session drilldown panels and charts
  • Behavioral signal summaries
  • Alert and recommendation views
  • Searchable transcript and tool-call detail
  • File/VFS browsing and raw session inspection

🧠 Core Features

  • Behavioral signals with 200+ keyword patterns across six categories
  • Frustration heat scoring and recovery analytics
  • Full-text search and semantic search with embeddings
  • Code symbol indexing for Python/JS/TS
  • Incremental ingestion with crash-resilient queue replay
  • Export workflows for JSON, JSONL, and text

📦 Package and Release

  • Published on PyPI as code-data-ark
  • Current release version: 2.0.3
  • CLI entry point: cda
  • License: MIT

🛣 Roadmap

See docs/roadmap.md for product direction, milestone planning, and release priorities.

🤝 Contributing

See contributing.md for development setup, test guidance, and PR workflow.

📜 License

This project is licensed under the MIT License.

🧠 SQLite limits and mitigation

  • Single writer in WAL mode: the system uses one writer process for ingest/reconstruct/extract/embed and allows many concurrent readers via SQLite WAL.
  • Large VFS blob handling: for very large raw artifacts, the clean approach is chunked storage or external file references instead of a single enormous BLOB.
  • Page cache and I/O tuning: Ark sets PRAGMA cache_size=-2000 (a ~2 MB page cache), PRAGMA mmap_size=268435456 (256 MB of memory-mapped I/O), and PRAGMA temp_store=MEMORY to improve read and cache performance on larger databases.
  • Further tuning: rebuild the DB with a larger page size (e.g. PRAGMA page_size=32768) if you need more efficient storage for very large session history.
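As a sketch, the tuning above maps to a few lines of Python's sqlite3 module (the helper name and database path here are illustrative, not Ark's actual code):

```python
import sqlite3

def open_tuned(db_path: str) -> sqlite3.Connection:
    """Open a database with the WAL + cache PRAGMAs described above."""
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA journal_mode=WAL")     # many readers, one writer
    conn.execute("PRAGMA cache_size=-2000")     # negative value = size in KiB (~2 MB)
    conn.execute("PRAGMA mmap_size=268435456")  # 256 MB memory-mapped I/O
    conn.execute("PRAGMA temp_store=MEMORY")    # keep temp tables/indices in RAM
    return conn

conn = open_tuned(":memory:")
```

Note that journal_mode and mmap_size are per-connection settings, while page_size is baked into the database file and only changes via a rebuild (VACUUM after PRAGMA page_size).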

🔧 Configuration

  • VS Code Data Directory: By default, Ark assumes the macOS path (~/Library/Application Support/Code/User). Override with export VSCODE_DATA_DIR=/path/to/vscode/data (e.g., on Linux: ~/.config/Code/User).
  • Minimal other configuration: everything else is CLI-driven against a local SQLite database.

🏗️ Architecture

VS Code Storage → ingest.py → vfs + sessions + transcripts
                      ↓
               reconstruct.py → exchanges (structured conversations)
                      ↓
               extract.py → signals + tokens + heat scores + analysis
                      ↓
               embed.py → semantic embeddings + summaries + alerts
                      ↓
               watcher.py → live sync + FTS indexing + queue resilience
                      ↓
               cda → query interface + policy enforcement
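The stage hand-off above amounts to function composition over a shared database state: each stage reads what the previous one wrote and adds a new layer. A minimal sketch with illustrative stubs (these are not Ark's actual module signatures):

```python
# Each stage enriches the state produced by the previous one,
# mirroring ingest -> reconstruct -> extract -> embed.
def ingest(raw_files):
    return {"vfs": raw_files, "sessions": ["s1"], "transcripts": ["t1"]}

def reconstruct(state):
    state["exchanges"] = [("s1", "user turn", "assistant turn")]
    return state

def extract(state):
    state["signals"] = [("s1", "correction", 3)]
    state["heat"] = {"s1": 3}
    return state

def embed(state):
    state["embeddings"] = {"s1": [0.1, 0.2, 0.3]}
    return state

state = embed(extract(reconstruct(ingest(["raw_blob"]))))
```

Because every stage persists to the same SQLite database, any stage can be re-run independently against the layers below it.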

Core Components

| Component | Purpose | Key Features |
|---|---|---|
| pipeline/ingest.py | Data ingestion | VFS storage, gzip compression, session metadata |
| pipeline/reconstruct.py | Conversation processing | Exchange threading, tool call linking, FTS indexing |
| pipeline/extract.py | Signal analysis | Behavioral pattern recognition, heat scoring, token accounting |
| pipeline/watcher.py | Live monitoring | File watching, incremental updates, crash recovery |
| pipeline/embed.py | Semantic intelligence | Embeddings, session summaries, anomaly alerts |
| kernel/pmf_kernel.py | Service management | Daemon lifecycle, PID/log tracking, runtime state |
| kernel/selfcheck.py | System diagnostics | Health checks, install validation, DB integrity |
| ui/cli.py | CLI entry point | 40+ commands, policy filtering, rich formatting |
| ui/web.py | Web dashboard | Browser UI for all CLI features, service control |

Database Schema

  • workspaces - VS Code workspace metadata
  • sessions - Chat session information and metadata
  • vfs - Gzip-compressed file storage with SHA256 hashes
  • exchanges - Structured conversation turns with tool calls
  • exchange_signals - Behavioral signal annotations
  • symbols - Code symbol index (functions, classes, etc.)
  • token_usage - Per-request token consumption tracking
  • compactions - Context window summarization events
  • session_analysis - Aggregated session metrics and heat scores
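Given this schema, "saved" sessions can be found with a simple join between sessions and their analysis rows. A sketch against a toy subset of the tables above (column names beyond the table names are illustrative assumptions, not the actual Ark schema):

```python
import sqlite3

# Toy subset of the schema listed above, with assumed column names.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (id TEXT PRIMARY KEY, workspace_id TEXT, started_at TEXT);
CREATE TABLE session_analysis (session_id TEXT, peak_heat INTEGER, final_heat INTEGER);
""")
conn.execute("INSERT INTO sessions VALUES ('abc123', 'ws1', '2024-01-01')")
conn.execute("INSERT INTO session_analysis VALUES ('abc123', 72, 10)")

# Sessions that peaked hot but ended calm.
rows = conn.execute("""
    SELECT s.id, a.peak_heat, a.final_heat
    FROM sessions s JOIN session_analysis a ON a.session_id = s.id
    WHERE a.peak_heat >= 50 AND a.final_heat < 20
""").fetchall()
```

The same query could be issued through cda query, since that command executes raw SQL against the local database.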

🖥️ CLI Reference

Core Commands

# System Management
cda status              # Show daemon status and queue information
cda stats               # System-wide statistics and coverage
cda sync                # Full data ingestion and rebuild
cda reconstruct         # Rebuild conversations and search index
cda pmf services        # List embedded PMF runtime services
cda pmf status [service] # Show runtime status for PMF services
cda pmf start <service>  # Start a PMF-managed Ark service
cda pmf stop <service>   # Stop a PMF-managed Ark service
cda pmf restart <service> # Restart a PMF-managed Ark service
cda pmf logs <service>   # Tail runtime logs for a PMF service

# Session Analysis
cda sessions            # List all sessions (newest first)
cda session <id>        # Show detailed session information
cda workspace <id>      # Show sessions for a workspace
cda workspaces          # List all workspaces

# Search & Query
cda search <query>      # Full-text search across conversations
cda code-search <pattern> [--symbol] [--regex]  # Search code symbols or code content
cda semantic-search <query> # Semantic search using embeddings
cda similar <session>     # Find sessions similar to a session
cda related <session>     # Alias for semantic related sessions
cda summarize <session>   # Show session summary, topics, and recommendations
cda topics                # List semantic topic tags
cda alerts <session>      # Show semantic anomaly alerts
cda recommend <session>   # Show session recommendations
cda tools <query>       # Search tool call arguments
cda memory              # Show memory files and global state

# Behavioral Analysis
cda signals [session]   # Show behavioral signals
cda heat [session]      # Frustration and heat analysis
cda behavior            # Aggregate behavioral intelligence
cda saved               # Sessions that recovered from high heat

# Data Export
cda export <session>    # Export session as JSON/JSONL/text
cda replay <session>    # Print conversation as readable text

# Advanced
cda query <sql>         # Execute raw SQL queries
cda tokens [session]    # Token usage analysis
cda compactions [session] # Context compaction events
cda edits               # Edit session analytics

# Policy Management
cda policy allow <pattern>   # Add allow pattern
cda policy deny <pattern>    # Add deny pattern
cda policy list              # Show current policies

# Live Monitoring
cda watch start             # Start watcher daemon
cda watch stop              # Stop watcher daemon
cda watch restart           # Restart watcher daemon
cda ui start                # Start web UI background service
cda ui stop                 # Stop web UI background service
cda ui status               # Show web UI background service status

Command Examples

# Search for error handling discussions
cda search "error handling" --limit 20

# Find sessions with high frustration
cda heat --limit 10

# Search for specific functions in code
cda code-search "def process_data" --symbol

# Search code content with regex or plain text
cda code-search "timeout" --regex

# Find semantically related sessions
cda related abc123

# Summarize a session with semantic topics and recommendations
cda summarize abc123

# Export a session for external analysis
cda export abc123 --format jsonl --output session.jsonl

# Monitor live sessions
cda watch start
cda status  # Check queue status

📊 Data Analysis

Behavioral Signals

The system recognizes 6 signal types with 200+ keyword patterns:

| Signal Type | Weight | Description | Example Keywords |
|---|---|---|---|
| correction | 3 | User correcting agent behavior | "stop", "wrong", "nope", "wait" |
| pre_correction | 2 | Early frustration signs | "actually", "hold on", "slow down" |
| redirect | 1 | User changing direction | "pivot", "change direction", "instead" |
| affirmation | 0 | Positive feedback | "good", "right", "perfect", "thanks" |
| approval | 0 | Task completion approval | "that works", "looks good", "approved" |
| frustration | 5 | Strong negative signals | "this is broken", "not working", "terrible" |
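In essence, detection is keyword matching per signal type with the weights from the table. A minimal sketch (the real pattern set has 200+ entries and likely more careful matching than plain substring search):

```python
# Weights and example keywords taken from the table above;
# the matching logic here is a simplified illustration.
SIGNALS = {
    "frustration":    (5, ["this is broken", "not working", "terrible"]),
    "correction":     (3, ["stop", "wrong", "nope", "wait"]),
    "pre_correction": (2, ["actually", "hold on", "slow down"]),
    "redirect":       (1, ["pivot", "change direction", "instead"]),
    "affirmation":    (0, ["good", "right", "perfect", "thanks"]),
    "approval":       (0, ["that works", "looks good", "approved"]),
}

def detect_signals(message: str) -> list[tuple[str, int]]:
    """Return (signal_type, weight) for every signal the message triggers."""
    text = message.lower()
    return [
        (name, weight)
        for name, (weight, keywords) in SIGNALS.items()
        if any(kw in text for kw in keywords)
    ]

hits = detect_signals("Wait, that's wrong - stop.")
```

Here the message trips the correction keywords, so hits contains a single ("correction", 3) entry.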

Heat Score Algorithm

Heat Score = min(100, Σ(signal_weights))

  • Peak Heat: Maximum heat reached in session
  • Final Heat: Heat at session end
  • Recovery: Sessions that return to low heat after high peaks
  • Saved Sessions: High-heat sessions that recover with affirmations
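Tracking the running score per exchange is what yields the peak-versus-final arc. A sketch of that computation; note the per-exchange decay factor is an assumption added so an arc can recover, as the Recovery bullet describes, and the real model may differ:

```python
def heat_arc(exchange_weights: list[int], decay: float = 0.8) -> dict:
    """Accumulate weighted signals per exchange, capped at 100.

    The decay factor (assumed, not from the docs) lets heat cool off
    between exchanges so sessions can recover from a spike.
    """
    heat = peak = 0.0
    for w in exchange_weights:
        heat = min(100.0, heat * decay + w)
        peak = max(peak, heat)
    return {"peak": round(peak, 1), "final": round(heat, 1)}

# A frustration spike followed by calm exchanges: peak stays high,
# final heat decays back down.
arc = heat_arc([5, 5, 3, 0, 0, 0, 0])
```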

Token Usage Tracking

  • Per-request token consumption (prompt + completion)
  • Model identification and version tracking
  • Context compaction event logging
  • Cost estimation capabilities
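With prompt and completion tokens recorded per request, cost estimation reduces to multiplying by a price table. A sketch, with a made-up example price, since actual model pricing is not part of this project:

```python
# Hypothetical per-1K-token prices; substitute your provider's rates.
PRICE_PER_1K = {"example-model": {"prompt": 0.01, "completion": 0.03}}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate request cost from recorded token counts."""
    p = PRICE_PER_1K[model]
    return (prompt_tokens / 1000) * p["prompt"] + (completion_tokens / 1000) * p["completion"]

cost = estimate_cost("example-model", 1200, 400)  # ≈ 0.024
```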

⚙️ Configuration

Automatic Detection

Code Data Ark automatically detects paths using standard locations:

  • macOS: ~/Library/Application Support/Code/User/
  • Windows: %APPDATA%\Code\User\
  • Linux: ~/.config/Code/User/
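The resolution order is: explicit VSCODE_DATA_DIR override first, then the platform default. A sketch of that logic (the exact resolution code in Ark may differ):

```python
import os
import sys
from pathlib import Path

def default_vscode_dir() -> Path:
    """Resolve the VS Code user-data directory per the rules above."""
    override = os.environ.get("VSCODE_DATA_DIR")
    if override:
        return Path(override)
    if sys.platform == "darwin":
        return Path.home() / "Library/Application Support/Code/User"
    if sys.platform.startswith("win"):
        return Path(os.environ["APPDATA"]) / "Code/User"
    return Path.home() / ".config/Code/User"
```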

Environment Variables

export CDA_DB=/path/to/custom.db          # Custom database location
export CDA_CONFIG=/path/to/config         # Custom config directory

Policy Configuration

Data access policies are stored in policy.txt:

ALLOW important-project
DENY sensitive-data
ALLOW *.py
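Evaluating these rules amounts to glob matching each name against the patterns. A sketch using Python's fnmatch; the precedence rule here (deny wins over allow) is an assumption, not documented behavior:

```python
from fnmatch import fnmatch

def load_policy(lines: list[str]) -> list[tuple[str, str]]:
    """Parse ALLOW/DENY lines from a policy.txt-style file."""
    rules = []
    for line in lines:
        action, _, pattern = line.partition(" ")
        if action in ("ALLOW", "DENY") and pattern.strip():
            rules.append((action, pattern.strip()))
    return rules

def allowed(name: str, rules: list[tuple[str, str]]) -> bool:
    verdict = False
    for action, pattern in rules:
        if fnmatch(name, pattern):
            if action == "DENY":
                return False  # assumed precedence: any DENY match wins
            verdict = True
    return verdict

rules = load_policy(["ALLOW important-project", "DENY sensitive-data", "ALLOW *.py"])
```

Under this reading, important-project and any *.py name pass, while sensitive-data is filtered out of query results.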

🔧 Development

Setup Development Environment

pip install -e ".[dev]"

Running Tests

pytest tests/ -q

Code Quality

flake8 cda tests
mypy cda

Building

python -m build

Project Structure

cda/
├── .gitignore
├── source/                  # all tracked code (pushed to git)
│   ├── cda/
│   │   ├── pipeline/        # ingest, reconstruct, extract, embed, watcher, parse_edits
│   │   ├── ui/              # cli, web
│   │   └── kernel/          # pmf_kernel, selfcheck
│   ├── bin/release.py
│   ├── tests/
│   ├── docs/
│   └── pyproject.toml
├── local/               # runtime state (gitignored, host-only)
│   ├── data/            # cda.db
│   ├── logs/
│   ├── queue/
│   ├── run/
│   ├── config/
│   └── pmf/
└── control/             # management artifacts (gitignored, host-only)
    ├── data/            # control.db
    ├── scripts/
    ├── audit/
    └── scan/

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes and add tests
  4. Run the test suite: make test
  5. Format code: make format
  6. Commit your changes: git commit -m 'Add amazing feature'
  7. Push to the branch: git push origin feature/amazing-feature
  8. Open a Pull Request

Development Guidelines

  • Tests: Unit tests for all new functionality
  • Linting: Code must pass flake8 and mypy before pushing
  • Versioning: Keep version, pyproject.toml, and changelog.md in sync

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Built for analyzing VS Code/Copilot Chat interaction patterns
  • Inspired by the need for better human-AI interaction insights
  • Uses SQLite FTS5 for high-performance full-text search
  • Implements behavioral signal processing for conversation analysis

Code Data Ark (cda) - Understanding the human side of AI conversations.
