qc-trace

Multi-CLI session tracking and normalization for QuickCall

A pure Python library that normalizes AI CLI session data from multiple tools into a unified schema, stores it in PostgreSQL, and provides a live dashboard to visualize the data flow.

Supported sources: Claude Code, Codex CLI, Gemini CLI, Cursor IDE


Architecture

graph LR
    subgraph Sources
        A1["~/.claude/**/*.jsonl"]
        A2["~/.codex/**/*.jsonl"]
        A3["~/.gemini/**/session-*.json"]
        A4["~/.cursor/**/*.txt"]
    end

    A1 & A2 & A3 & A4 --> D["Daemon\n(file watcher)"]
    D -- "POST /ingest" --> S["Ingest Server\n:19777"]
    D -- "POST /api/file-progress" --> S
    S -- "COPY batch write" --> P[("PostgreSQL\n:5432")]
    P -- "read queries" --> S
    S -- "GET /api/*" --> UI["Dashboard\n:5173"]

Components

| Component | Description |
| --- | --- |
| Daemon (quickcall) | Watches local AI tool session files, transforms them into normalized messages, and pushes them to the ingest server. Zero third-party dependencies. |
| Ingest Server | HTTP server (:19777) that accepts normalized messages, batch-writes via PostgreSQL COPY, and serves the read API for the dashboard. Opt-in API key authentication. |
| PostgreSQL | Stores sessions, messages, tool calls, tool results, token usage, and file progress. Schema auto-applied on startup (current version: v4). |
| Dashboard | Vite + React + TypeScript + Tailwind. Overview stats, session list, message detail with expandable tool calls, thinking content, and token counts. |

Data flow

  1. Daemon polls source directories every 5s for new/changed session files
  2. Source-specific collectors parse files incrementally (JSONL: line-resume, JSON/text: content-hash)
  3. Transforms normalize data into NormalizedMessage schema
  4. Pusher batches messages (100/batch) and POSTs to /ingest with retry + exponential backoff
  5. After successful push, daemon reports its read position via POST /api/file-progress
  6. Server's batch accumulator flushes to PostgreSQL via COPY (100 messages or 5 seconds, whichever comes first)
  7. On daemon restart, reconciliation compares local state against server's /api/sync endpoint
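The client side of this loop can be sketched in plain Python. Everything here is illustrative (`batched` and `push_with_backoff` are not the daemon's real function names); only the 100-message batch size and the retry-with-exponential-backoff behavior come from the steps above.

```python
import json
import time
import urllib.request

BATCH_SIZE = 100  # matches the daemon's 100-messages-per-POST batching


def batched(messages: list, size: int = BATCH_SIZE) -> list[list]:
    """Split a list of normalized messages into POST-sized batches."""
    return [messages[i:i + size] for i in range(0, len(messages), size)]


def push_with_backoff(url: str, batch: list[dict], retries: int = 5) -> int:
    """POST one batch as a JSON array, retrying with exponential backoff."""
    body = json.dumps(batch).encode()
    for attempt in range(retries):
        try:
            req = urllib.request.Request(
                url, data=body, headers={"Content-Type": "application/json"}
            )
            with urllib.request.urlopen(req) as resp:
                return resp.status
        except OSError:
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    raise RuntimeError("ingest server unreachable after retries")
```

The real daemon additionally persists its read position (step 5) so a crash between batches never loses or duplicates work.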

Quick Start

# 1. Start PostgreSQL
scripts/dev-db.sh start

# 2. Start the ingest server
uv run python -m qc_trace.server.app

# 3. Start the daemon
uv run quickcall start

# 4. Start the dashboard
cd dashboard && npm run dev

User Setup (Daemon Only)

For developers who use AI CLI tools and want their session data tracked. The daemon watches local session files and pushes them to the ingest server. No database, no Docker, no dashboard needed on your machine.

Install

curl -fsSL https://quickcall.dev/trace/install.sh | sh

This installs uv (if needed), installs the quickcall CLI, sets up the daemon as a system service, and starts it. Idempotent — safe to re-run.

CLI Commands

quickcall start          # Start daemon (background)
quickcall stop           # Stop daemon
quickcall status         # Show daemon status, per-source stats, server health
quickcall status --json  # Machine-readable status output
quickcall logs           # View recent logs
quickcall logs -f        # Follow logs
quickcall setup          # Configure email and API key
quickcall db init        # Initialize DB schema via server

Start / Stop / Restart (local development)

# Start the daemon (runs in background)
uv run quickcall start

# Check what's happening
uv run quickcall status

# Stop it
uv run quickcall stop

# Restart (stop + start)
uv run quickcall stop && uv run quickcall start

When installed as a system service (via install.sh), the daemon starts on login and auto-restarts on crash. Use quickcall directly (no uv run).

Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| QC_TRACE_INGEST_URL | http://localhost:19777/ingest | Target ingest server URL |

# Point to the central ingest server (pilot)
QC_TRACE_INGEST_URL=https://trace.quickcall.dev/ingest quickcall start

# Or run against a local dev server (uses default)
quickcall start

Watched file patterns

| Source | Glob (relative to $HOME) |
| --- | --- |
| Claude Code | .claude/projects/**/*.jsonl |
| Codex CLI | .codex/sessions/*/*/*/rollout-*.jsonl |
| Gemini CLI | .gemini/tmp/*/chats/session-*.json |
| Cursor | .cursor/projects/*/agent-transcripts/*.txt |

Daemon files

| File | Path | Purpose |
| --- | --- | --- |
| State | ~/.quickcall-trace/state.json | Processing progress per file |
| PID | ~/.quickcall-trace/quickcall.pid | Running daemon PID |
| Log | ~/.quickcall-trace/quickcall.log | stdout |
| Errors | ~/.quickcall-trace/quickcall.err | stderr |
| Config | ~/.quickcall-trace/config.json | Optional API key config |

Developer Setup (Full Stack)

For contributors developing the daemon, ingest server, dashboard, or schema transforms.

Prerequisites

  • Python 3.11+
  • Docker (for PostgreSQL)
  • Node.js 18+ (for dashboard)
  • uv (recommended)

1. Clone and set up Python

git clone git@github.com:quickcall-dev/trace.git
cd trace
uv sync --all-extras

2. Start PostgreSQL

scripts/dev-db.sh start

Starts PostgreSQL 16 on port 5432. Schema auto-applied on first server connection. Data persists in Docker volume (qc_trace_pgdata).

Default connection: postgresql://qc_trace:qc_trace_dev@localhost:5432/qc_trace

3. Start the ingest server

uv run python -m qc_trace.server.app

Starts on localhost:19777.

4. Start the daemon

uv run quickcall start

5. Start the dashboard

cd dashboard
npm install
npm run dev

Opens at http://localhost:5173. Shows:

  • Overview — pipeline health, aggregate stats, live message feed, source distribution
  • Sessions — filterable table with drill-down
  • Session Detail — full message list with expandable tool calls, thinking content, and token counts

Quick test (without the daemon)

curl -X POST http://localhost:19777/ingest \
  -H 'Content-Type: application/json' \
  -d '[{"id":"test-1","session_id":"s1","source":"claude_code","msg_type":"user","timestamp":"2026-02-06T00:00:00Z","content":"hello world","source_schema_version":1}]'

Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| QC_TRACE_DSN | postgresql://qc_trace:qc_trace_dev@localhost:5432/qc_trace | PostgreSQL connection string |
| QC_TRACE_PORT | 19777 | Ingest server listen port |
| QC_TRACE_INGEST_URL | http://localhost:19777/ingest | Daemon target server URL |
| QC_TRACE_API_KEYS | (empty; auth disabled) | Comma-separated API keys. When set, all endpoints except /health require the X-API-Key header |
| QC_TRACE_CORS_ORIGIN | http://localhost:3000 | Allowed CORS origin for the dashboard |

Troubleshooting

Dashboard shows 0 sessions after a restart

If the Docker volume doesn't survive a reboot, the Postgres data is lost, but the daemon's state file (~/.quickcall-trace/state.json) still has every file marked as processed, so nothing gets re-ingested.

Fix: reset the state file and restart the daemon.

rm ~/.quickcall-trace/state.json
quickcall stop
quickcall start

This is always safe — the writer uses ON CONFLICT DO NOTHING so duplicate messages are silently skipped.

Daemon/server line mismatch

The daemon tracks its actual read position via file_progress (separate from message storage). On startup, reconciliation compares local state against the server and rewinds if needed. If you suspect mismatches:

# Check server's view of file progress
curl http://localhost:19777/api/sync

Development

Run tests

# All 296 tests
uv run pytest tests/ -v

# Single file
uv run pytest tests/test_transforms.py

# With coverage
uv run pytest tests/ --cov=qc_trace --cov-report=html

Project structure

qc_trace/
  schemas/           # Source schemas + transforms → NormalizedMessage
    unified.py       # The central normalized schema
    claude_code/     # Claude Code JSONL parser
    codex_cli/       # Codex CLI JSONL parser
    gemini_cli/      # Gemini CLI JSON parser
    cursor/          # Cursor IDE transcript parser
  db/
    schema.sql       # PostgreSQL schema (sessions, messages, tool_calls, file_progress)
    migrations.py    # Incremental schema migrations (v1 → v4)
    connection.py    # Async connection pool (psycopg3)
    writer.py        # Batch COPY writer with duplicate handling
    reader.py        # Read queries for the dashboard API
  server/
    app.py           # HTTP server (:19777) — ingest + read API
    handlers.py      # Request handlers (ingest, sessions, file-progress, stats, feed)
    batch.py         # Batch accumulator (flush on 100 msgs or 5s)
    auth.py          # API key authentication + CORS config
  daemon/
    watcher.py       # File discovery via glob patterns
    collector.py     # Source-specific collectors with incremental processing
    pusher.py        # HTTP POST with retry queue + exponential backoff
    state.py         # Atomic state persistence
    main.py          # Poll-collect-push loop + server reconciliation
    config.py        # Daemon configuration
  cli/
    traced.py        # CLI: start, stop, status, logs, db init
dashboard/           # Vite + React + TypeScript + Tailwind
tests/               # 296 tests
docs/                # Deployment guide, review docs
docker-compose.yml   # PostgreSQL 16

API Endpoints

| Method | Path | Auth | Description |
| --- | --- | --- | --- |
| GET | /health | No | Health check + DB connectivity |
| POST | /ingest | Yes | Accept NormalizedMessage JSON array |
| POST | /sessions | Yes | Upsert a session record |
| POST | /api/file-progress | Yes | Report daemon file read position |
| GET | /api/stats | Yes | Aggregate stats (sessions, messages, tokens, by source/type) |
| GET | /api/sessions | Yes | Session list. ?source=, ?id=, ?limit=, ?offset= |
| GET | /api/messages | Yes | Messages for a session. ?session_id= required |
| GET | /api/sync | Yes | File sync state for daemon reconciliation |
| GET | /api/feed | Yes | Latest messages across all sessions. ?since=, ?limit= |

Auth is opt-in — endpoints are open when QC_TRACE_API_KEYS is not set.

Adding a New CLI Source

  1. Create qc_trace/schemas/{tool_name}/v1.py with frozen TypedDict schemas
  2. Create qc_trace/schemas/{tool_name}/transform.py returning list[NormalizedMessage]
  3. Add glob pattern to qc_trace/daemon/config.py
  4. Add collector logic to qc_trace/daemon/collector.py
  5. Add test fixtures in tests/fixtures/ and tests in tests/
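Steps 1–2 might look like the following skeleton. `MyToolEvent`, its field names, and the `my_tool` identifier are all hypothetical; mirror the real NormalizedMessage fields from qc_trace/schemas/unified.py.

```python
from typing import TypedDict


class MyToolEvent(TypedDict):
    """v1.py: frozen shape of one raw event from the hypothetical tool."""
    ts: str
    role: str
    text: str


def transform(session_id: str, events: list[MyToolEvent]) -> list[dict]:
    """transform.py: map raw events onto NormalizedMessage-shaped dicts."""
    return [
        {
            "id": f"{session_id}-{i}",          # deterministic IDs keep re-ingest idempotent
            "session_id": session_id,
            "source": "my_tool",                # new source identifier
            "msg_type": event["role"],          # e.g. "user" / "assistant"
            "timestamp": event["ts"],
            "content": event["text"],
            "source_schema_version": 1,
        }
        for i, event in enumerate(events)
    ]
```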

Production Deployment

See docs/deployment.md for the full production deployment guide, including:

  • Environment variable reference
  • Authentication setup (API key)
  • Database configuration and connection pooling
  • Server limits and tuning
  • Daemon configuration reference
  • macOS (launchd) and Linux (systemd) service installation
  • Production checklist

Project details

Download files

Source distribution

qc_trace-0.2.0.tar.gz (18.2 MB, uploaded via twine/6.2.0 on CPython/3.12.0, not via Trusted Publishing)

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | b79b6ceb08fcb0563896a4b83830a91396e9c1e0d9e6723233b8a58a7b6b5e76 |
| MD5 | 9c072325a665c54424433b916ac6f0e8 |
| BLAKE2b-256 | c64e51f923c1465ec2e6acc7ba79605d42b9362a43aa9fea6a0b7782d549474a |

Built distribution

qc_trace-0.2.0-py3-none-any.whl (76.7 kB, Python 3, uploaded via twine/6.2.0 on CPython/3.12.0, not via Trusted Publishing)

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 51197711876852202315c93f19f2bad94afffe4bf919a3d0ee5f32f9aecc0ae2 |
| MD5 | 9041a61f5785751b5a65cc9215c2bfec |
| BLAKE2b-256 | 6cbd8d0c8ff1239dd4e16a8dcf18059b50d2d9f160eb97afb26d68cea1fa5d0b |
