qc-trace

Multi-CLI session tracking and normalization for QuickCall

A pure Python library that normalizes AI CLI session data from multiple tools into a unified schema, stores it in PostgreSQL, and provides a live dashboard to visualize the data flow.

Supported sources: Claude Code, Codex CLI, Gemini CLI, Cursor IDE

Architecture

graph LR
    subgraph "Dev Laptop A (org: pratilipi)"
        A1["~/.claude/**/*.jsonl"]
        A2["~/.codex/**/*.jsonl"]
        DA["Daemon"]
    end

    subgraph "Dev Laptop B (org: pratilipi)"
        B1["~/.gemini/**/session-*.json"]
        B2["~/.cursor/**/*.txt"]
        DB2["Daemon"]
    end

    A1 & A2 --> DA
    B1 & B2 --> DB2
    DA -- "POST /ingest" --> S["Ingest Server\n:19777"]
    DB2 -- "POST /ingest" --> S
    S -- "COPY batch write" --> P[("PostgreSQL\n:5432")]
    P -- "read queries" --> S
    S -- "GET /api/*\n(?org=pratilipi)" --> UI["Dashboard\n:5173"]

Components

Component Description
Daemon (quickcall) Watches local AI tool session files, transforms them into normalized messages, and pushes them to the ingest server. Zero third-party dependencies.
Ingest Server HTTP server (:19777) that accepts normalized messages, batch-writes via PostgreSQL COPY, and serves the read API for the dashboard. Opt-in API key authentication.
PostgreSQL Stores sessions, messages, tool calls, tool results, token usage, and file progress. Schema auto-applied on startup (current version: v5).
Dashboard Vite + React + TypeScript + Tailwind. Overview stats, session list, message detail with expandable tool calls, thinking content, and token counts.
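The server-side batching described above (accumulate, then COPY to PostgreSQL) can be sketched as a small accumulator that flushes on whichever limit is hit first. This is an illustrative sketch, not the server's actual code; the 100-message / 5-second thresholds come from the data-flow notes, while the class and method names are invented:

```python
import time

class BatchAccumulator:
    """Buffer messages; flush when the batch is full or the oldest
    buffered message is too old (sketch of the ingest server's behavior)."""

    def __init__(self, flush, max_size=100, max_age=5.0, clock=time.monotonic):
        self.flush = flush        # callable receiving the batch (e.g. a COPY writer)
        self.max_size = max_size
        self.max_age = max_age
        self.clock = clock
        self.buf = []
        self.first_at = None      # arrival time of the oldest buffered message

    def add(self, msg):
        if not self.buf:
            self.first_at = self.clock()
        self.buf.append(msg)
        self.maybe_flush()

    def maybe_flush(self):
        if not self.buf:
            return
        too_big = len(self.buf) >= self.max_size
        too_old = self.clock() - self.first_at >= self.max_age
        if too_big or too_old:
            batch, self.buf = self.buf, []
            self.flush(batch)
```

A real server would also call `maybe_flush()` from a periodic timer so an idle buffer still drains after `max_age`.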

Data flow

  1. Daemon polls source directories every 5s for new/changed session files
  2. Source-specific collectors parse files incrementally (JSONL: line-resume, JSON/text: content-hash)
  3. Transforms normalize data into NormalizedMessage schema
  4. Pusher batches messages (500/batch) and POSTs to /ingest with retry + exponential backoff
  5. After successful push, daemon reports its read position via POST /api/file-progress
  6. Server's batch accumulator flushes to PostgreSQL via COPY (100 msgs or 5s, whichever first)
  7. On daemon restart, reconciliation compares local state against server's /api/sync endpoint
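Steps 4's batching and retry behavior can be sketched as follows. This is a minimal illustration, assuming a pluggable `post` callable; the 500-message batch size is from the list above, while the retry count and base delay are invented defaults, not the daemon's actual settings:

```python
import time

def chunked(messages, size=500):
    """Split normalized messages into batches of `size` (step 4's batching)."""
    for i in range(0, len(messages), size):
        yield messages[i:i + size]

def push_with_retry(batch, post, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """POST one batch, retrying failures with exponential backoff.
    `post` is any callable returning True on success."""
    for attempt in range(max_attempts):
        if post(batch):
            return True
        sleep(base_delay * (2 ** attempt))   # 1s, 2s, 4s, 8s, ...
    return False
```

Injecting `post` and `sleep` keeps the retry logic testable without a network or real delays.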

Quick Start

# 1. Start PostgreSQL
scripts/dev-db.sh start

# 2. Start the ingest server
uv run python -m qc_trace.server.app

# 3. Start the daemon
uv run quickcall start

# 4. Start the dashboard
cd dashboard && npm run dev

User Setup (Daemon Only)

For developers who use AI CLI tools and want their session data tracked. The daemon watches local session files and pushes them to the ingest server. No database, no Docker, no dashboard needed on your machine.

Install

Cloud mode (pushes to trace.quickcall.dev):

curl -fsSL https://quickcall.dev/trace/install.sh | sh -s -- <org> <api-key>

Local mode (pushes to localhost:19777, no API key needed):

curl -fsSL https://quickcall.dev/trace/install.sh | sh

Named flags also work: --org <name> --key <key>.

When an API key is provided, the daemon pushes to the cloud server. Without a key, it defaults to localhost — useful for local development or self-hosted setups.

Idempotent — safe to re-run. Re-running updates org/key settings.

What happens when you run install.sh

Running the installer on a developer's laptop takes ~30 seconds and is fully hands-off after the initial command. Here's what happens step by step:

$ curl -fsSL https://quickcall.dev/trace/install.sh | sh -s -- pratilipi <api-key>

 ░█▀█░█░█░▀█▀░█▀▀░█░█░█▀▀░█▀█░█░░░█░░
 ░█░█░█░█░░█░░█░░░█▀▄░█░░░█▀█░█░░░█░░
 ░▀▀█░▀▀▀░▀▀▀░▀▀▀░▀░▀░▀▀▀░▀░▀░▀▀▀░▀▀▀
  trace  ·  ai session collector  ·  cloud
✓ Python 3.12
✓ uv already installed (uv 0.6.6)
✓ Shell config updated (~/.zshrc)
✓ quickcall CLI installed
✓ Org set to: pratilipi
==> Installing launchd agent...              # (or systemd on Linux)
✓ launchd agent installed and started
  Data:    ~/.quickcall-trace/
==> Verifying installation...
✓ Heartbeat sent to https://trace.quickcall.dev/ingest

QuickCall Trace installed successfully!

The daemon is now watching your AI CLI sessions and pushing to:
  https://trace.quickcall.dev/ingest

Commands:
  quickcall status    # Check daemon + stats
  quickcall logs -f   # Follow daemon logs

What it does:

  1. Pre-flight — checks Python 3.11+ and curl are available
  2. Installs uv — the fast Python package manager (skipped if already installed)
  3. Configures shell — adds ~/.local/bin to PATH in .zshrc / .bashrc so quickcall works in new terminals
  4. Installs the CLI — uv tool install qc-trace puts the quickcall binary in ~/.local/bin
  5. Writes org + key to config — stores {"org": "pratilipi", "api_key": "..."} in ~/.quickcall-trace/config.json
  6. Installs a background service — launchd on macOS, systemd on Linux (user-level, no root/sudo needed)
  7. Sends a heartbeat — POSTs a test message to the ingest server to verify connectivity

After install, the developer doesn't need to do anything. The daemon:

  • Starts automatically on login
  • Watches ~/.claude/, ~/.codex/, ~/.gemini/, ~/.cursor/ for AI session files
  • Pushes new messages to the central ingest server every 5 seconds
  • Auto-restarts on crash (via launchd/systemd)
  • Auto-updates itself every 5 minutes (checks PyPI, restarts to pick up new version)
  • Tags all data with the org name for filtering

No impact on the developer's workflow. The daemon is a lightweight background process (~10MB RSS) that reads session files and pushes JSON over HTTP. It does not modify any files, does not intercept any commands, and does not require any ongoing interaction.

How it works on the developer's laptop

graph TB
    subgraph "Developer's Laptop"
        subgraph "AI Tools (unchanged)"
            CC["Claude Code"]
            CX["Codex CLI"]
            GM["Gemini CLI"]
            CR["Cursor IDE"]
        end

        subgraph "Session Files (written by AI tools)"
            F1["~/.claude/projects/**/*.jsonl"]
            F2["~/.codex/sessions/**/*.jsonl"]
            F3["~/.gemini/tmp/**/session-*.json"]
            F4["~/.cursor/**/agent-transcripts/*.txt"]
        end

        CC --> F1
        CX --> F2
        GM --> F3
        CR --> F4

        subgraph "QuickCall Daemon (background service)"
            W["Watcher\n(polls every 5s)"]
            C["Collector\n(parses incrementally)"]
            P["Pusher\n(HTTP POST + retry)"]
        end

        F1 & F2 & F3 & F4 -.->|"reads"| W
        W --> C --> P

        subgraph "Local State"
            S["~/.quickcall-trace/\n  config.json (org)\n  state.json (progress)\n  push_status.json"]
        end

        C -.->|"tracks progress"| S
    end

    P -->|"POST /ingest\n(batched JSON)"| SRV["Central Ingest Server\ntrace.quickcall.dev"]
    SRV --> DB[("PostgreSQL")]
    DB --> DASH["Dashboard"]

The daemon only reads session files — it never writes to them or interferes with the AI tools. File processing is incremental: JSONL files resume from the last line read, JSON/text files re-process only when content changes (via SHA-256 hash).
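The two incremental strategies above (line-resume for JSONL, content-hash for JSON/text) can be sketched like this. Function names and the state shape are illustrative, not the collector's actual API:

```python
import hashlib
import json

def read_new_jsonl_lines(path, offset):
    """Line-resume: seek to the saved byte offset, parse only complete
    new lines, and return the records plus the new offset. A trailing
    line without a newline is assumed to still be mid-write and is left
    for the next poll."""
    records = []
    with open(path, "rb") as f:
        f.seek(offset)
        for line in f:
            if not line.endswith(b"\n"):
                break                      # partial line, still being written
            offset += len(line)
            if line.strip():
                records.append(json.loads(line))
    return records, offset

def content_changed(path, last_hash):
    """Content-hash: re-process a JSON/text file only when its SHA-256
    digest differs from the last one recorded."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest != last_hash, digest
```

Tracking a byte offset rather than a line count means appends never force a re-read of the whole file.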

CLI Commands

quickcall status         # Show daemon status, per-source stats, server health
quickcall status --json  # Machine-readable status output
quickcall logs           # View recent logs
quickcall logs -f        # Follow daemon logs
quickcall start          # Start daemon (background)
quickcall stop           # Stop daemon
quickcall setup          # Configure email and API key

Example status output

  QuickCall Trace v0.3.0
  Org: pratilipi
  Daemon: running (PID 12345) · uptime 3d 4h
  Server: https://trace.quickcall.dev/ingest ✓

  Source            Sessions    Messages   Last push
  ────────────────────────────────────────────────────
  Claude Code             12       3,847        2s ago
  Codex CLI                3         412        5s ago
  Gemini CLI               1          87        5s ago
  Cursor IDE               5       1,203        5s ago

  Total: 21 sessions · 5,549 messages

Start / Stop / Restart (local development)

# Start the daemon (runs in background)
uv run quickcall start

# Check what's happening
uv run quickcall status

# Stop it
uv run quickcall stop

# Restart (stop + start)
uv run quickcall stop && uv run quickcall start

When installed as a system service (via install.sh), the daemon starts on login and auto-restarts on crash. Use quickcall directly (no uv run).

Environment Variables

Variable Default Description
QC_TRACE_INGEST_URL https://trace.quickcall.dev/ingest Target ingest server URL
QC_TRACE_ORG (from config.json) Organization name (set by install.sh)
QC_TRACE_API_KEY (from config.json) API key sent with every request to the ingest server
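The "(from config.json)" defaults above imply a lookup order: environment variable first, then config.json, then a default. A minimal sketch of that precedence (the function name and exact fallback behavior are assumptions, not the daemon's actual config code):

```python
import json
import os
from pathlib import Path

def resolve_setting(name, env_var,
                    config_path=Path.home() / ".quickcall-trace" / "config.json",
                    default=None):
    """Resolve a daemon setting: env var overrides config.json,
    which overrides the default."""
    if os.environ.get(env_var):
        return os.environ[env_var]
    try:
        return json.loads(config_path.read_text()).get(name, default)
    except FileNotFoundError:
        return default
```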

Watched file patterns

Source Glob (relative to $HOME)
Claude Code .claude/projects/**/*.jsonl
Codex CLI .codex/sessions/*/*/*/rollout-*.jsonl
Gemini CLI .gemini/tmp/*/chats/session-*.json
Cursor .cursor/projects/*/agent-transcripts/*.txt

Daemon files

File Path Purpose
Config ~/.quickcall-trace/config.json Org, email, API key
State ~/.quickcall-trace/state.json Processing progress per file
Push status ~/.quickcall-trace/push_status.json Per-source push timestamps and counts
PID ~/.quickcall-trace/quickcall.pid Running daemon PID
Log ~/.quickcall-trace/quickcall.log Daemon stdout
Errors ~/.quickcall-trace/quickcall.err Daemon stderr

Developer Setup (Full Stack)

For contributors developing the daemon, ingest server, dashboard, or schema transforms.

Prerequisites

  • Python 3.11+
  • Docker (for PostgreSQL)
  • Node.js 18+ (for dashboard)
  • uv (recommended)

1. Clone and set up Python

git clone git@github.com:quickcall-dev/trace.git
cd trace
uv sync --all-extras

2. Start PostgreSQL

scripts/dev-db.sh start

Starts PostgreSQL 16 on port 5432. Schema auto-applied on first server connection. Data persists in a Docker volume (qc_trace_pgdata).

Default connection: postgresql://qc_trace:qc_trace_dev@localhost:5432/qc_trace

3. Start the ingest server

uv run python -m qc_trace.server.app

Starts on localhost:19777.

4. Start the daemon

uv run quickcall start

5. Start the dashboard

cd dashboard
npm install

# Local (default — connects to localhost:19777, no auth)
npm run dev

# Production (connects to trace.quickcall.dev, will prompt for admin API key)
VITE_API_URL=https://trace.quickcall.dev npm run dev

Opens at http://localhost:5173. Shows:

  • Overview — pipeline health, aggregate stats, live message feed, source distribution
  • Sessions — filterable table with drill-down
  • Session Detail — full message list with expandable tool calls, thinking content, and token counts

Quick test (without the daemon)

curl -X POST http://localhost:19777/ingest \
  -H 'Content-Type: application/json' \
  -d '[{"id":"test-1","session_id":"s1","source":"claude_code","msg_type":"user","timestamp":"2026-02-06T00:00:00Z","content":"hello world","source_schema_version":1}]'

Environment Variables

Variable Default Description
QC_TRACE_DSN postgresql://qc_trace:qc_trace_dev@localhost:5432/qc_trace PostgreSQL connection string
QC_TRACE_PORT 19777 Ingest server listen port
QC_TRACE_INGEST_URL https://trace.quickcall.dev/ingest Daemon target server URL
QC_TRACE_ADMIN_KEYS (empty) Comma-separated admin API keys (full read + write access)
QC_TRACE_PUSH_KEYS (empty) Comma-separated push API keys (write-only, for daemons)
QC_TRACE_API_KEYS (empty) Legacy — treated as push keys for backwards compat
QC_TRACE_CORS_ORIGIN http://localhost:3000 Allowed CORS origin for dashboard

When both QC_TRACE_ADMIN_KEYS and QC_TRACE_PUSH_KEYS are empty, auth is disabled (all endpoints open).
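The tiering implied by the key variables can be sketched as pure functions. This is an illustrative model of the rules stated in this README (public endpoints open, push keys write-only, admin keys read + write, auth disabled when no keys exist), not the server's actual auth.py; route-to-tier mapping follows the API endpoints table:

```python
def required_tier(method, path):
    """Map a request to the tier it needs, per the API endpoints table."""
    if path in ("/health", "/api/latest-version"):
        return "public"
    if method == "POST" or path == "/api/sync":
        return "push"      # daemon write/sync endpoints
    return "admin"         # read API for the dashboard

def authorize(key, tier, admin_keys, push_keys):
    """Two-tier check: admin keys satisfy everything, push keys only
    the push tier. With no keys configured, auth is disabled."""
    if not admin_keys and not push_keys:
        return True
    if tier == "public":
        return True
    if key in admin_keys:
        return True
    return tier == "push" and key in push_keys
```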


Troubleshooting

Dashboard shows 0 sessions after a restart

If the Docker volume doesn't survive the reboot, the Postgres data is gone, but the daemon's state file (~/.quickcall-trace/state.json) still marks files as processed, so nothing gets re-pushed.

Fix: reset the state file and restart the daemon.

rm ~/.quickcall-trace/state.json
quickcall stop
quickcall start

This is always safe — the writer uses ON CONFLICT DO NOTHING so duplicate messages are silently skipped.

Daemon/server line mismatch

The daemon tracks its actual read position via file_progress (separate from message storage). On startup, reconciliation compares local state against the server and rewinds if needed. If you suspect mismatches:

# Check server's view of file progress
curl http://localhost:19777/api/sync

Development

Run tests

# All 296 tests
uv run pytest tests/ -v

# Single file
uv run pytest tests/test_transforms.py

# With coverage
uv run pytest tests/ --cov=qc_trace --cov-report=html

Project structure

qc_trace/
  schemas/           # Source schemas + transforms → NormalizedMessage
    unified.py       # The central normalized schema
    claude_code/     # Claude Code JSONL parser
    codex_cli/       # Codex CLI JSONL parser
    gemini_cli/      # Gemini CLI JSON parser
    cursor/          # Cursor IDE transcript parser
  db/
    schema.sql       # PostgreSQL schema (sessions, messages, tool_calls, file_progress)
    migrations.py    # Incremental schema migrations (v1 → v5)
    connection.py    # Async connection pool (psycopg3)
    writer.py        # Batch COPY writer with duplicate handling
    reader.py        # Read queries for the dashboard API
  server/
    app.py           # HTTP server (:19777) — ingest + read API
    handlers.py      # Request handlers (ingest, sessions, file-progress, stats, feed)
    batch.py         # Batch accumulator (flush on 100 msgs or 5s)
    auth.py          # API key authentication + CORS config
  daemon/
    watcher.py       # File discovery via glob patterns
    collector.py     # Source-specific collectors with incremental processing
    pusher.py        # HTTP POST with retry queue + exponential backoff
    state.py         # Atomic state persistence
    main.py          # Poll-collect-push loop + server reconciliation + auto-update
    config.py        # Daemon configuration (org, globs, retry settings)
    push_status.py   # Per-source push timestamps for CLI status
  cli/
    traced.py        # CLI: start, stop, status, logs, db init
dashboard/           # Vite + React + TypeScript + Tailwind
tests/               # 296 tests
docs/                # Deployment guide, review docs
docker-compose.yml   # PostgreSQL 16

API Endpoints

Method Path Auth Description
GET /health Public Health check + DB connectivity
GET /api/latest-version Public Latest daemon version
POST /ingest Push / Admin Accept NormalizedMessage JSON array
POST /sessions Push / Admin Upsert a session record
POST /api/file-progress Push / Admin Report daemon file read position
GET /api/sync Push / Admin File sync state for daemon reconciliation
GET /api/stats Admin Aggregate stats (sessions, messages, tokens, by source/type). ?org=
GET /api/sessions Admin Session list. ?source=, ?id=, ?org=, ?limit=, ?offset=
GET /api/messages Admin Messages for a session. ?session_id= required
GET /api/feed Admin Latest messages across all sessions. ?since=, ?org=, ?limit=

Auth is two-tier: push keys can write data (for daemons), admin keys can read + write (for dashboard/API). Auth is disabled when no keys are configured.

Adding a New CLI Source

  1. Create qc_trace/schemas/{tool_name}/v1.py with frozen TypedDict schemas
  2. Create qc_trace/schemas/{tool_name}/transform.py returning list[NormalizedMessage]
  3. Add glob pattern to qc_trace/daemon/config.py
  4. Add collector logic to qc_trace/daemon/collector.py
  5. Add test fixtures in tests/fixtures/ and tests in tests/
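A transform from step 2 might look like the sketch below. The output field names follow the /ingest curl example earlier in this README; the raw-event shape, the `my_new_tool` source name, and the function name are all hypothetical:

```python
def transform_v1(raw_events, session_id):
    """Map raw tool events onto NormalizedMessage-shaped dicts.
    Each event is assumed to carry a role, timestamp, and text."""
    out = []
    for i, ev in enumerate(raw_events):
        out.append({
            "id": f"{session_id}-{i}",          # stable per-session message id
            "session_id": session_id,
            "source": "my_new_tool",            # hypothetical source name
            "msg_type": ev.get("role", "user"),
            "timestamp": ev["ts"],
            "content": ev.get("text", ""),
            "source_schema_version": 1,
        })
    return out
```

Keeping the id deterministic matters: the writer deduplicates on conflict, so re-transforming the same file must yield the same ids.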

Production Deployment

See docs/deployment.md for the full production deployment guide, including:

  • Environment variable reference
  • Authentication setup (API key)
  • Database configuration and connection pooling
  • Server limits and tuning
  • Daemon configuration reference
  • macOS (launchd) and Linux (systemd) service installation
  • Production checklist
