qc-trace

Multi-CLI session tracking and normalization for QuickCall

A pure Python library that normalizes AI CLI session data from multiple tools into a unified schema, stores it in PostgreSQL, and provides a live dashboard to visualize the data flow.

Supported sources: Claude Code, Codex CLI, Gemini CLI, Cursor IDE

Architecture

graph LR
    subgraph "Dev Laptop A (org: pratilipi)"
        A1["~/.claude/**/*.jsonl"]
        A2["~/.codex/**/*.jsonl"]
        DA["Daemon"]
    end

    subgraph "Dev Laptop B (org: pratilipi)"
        B1["~/.gemini/**/session-*.json"]
        B2["~/.cursor/**/*.txt"]
        DB2["Daemon"]
    end

    A1 & A2 --> DA
    B1 & B2 --> DB2
    DA -- "POST /ingest" --> S["Ingest Server\n:19777"]
    DB2 -- "POST /ingest" --> S
    S -- "COPY batch write" --> P[("PostgreSQL\n:5432")]
    P -- "read queries" --> S
    S -- "GET /api/*\n(?org=pratilipi)" --> UI["Dashboard\n:5173"]

Components

| Component | Description |
| --- | --- |
| Daemon (quickcall) | Watches local AI tool session files, transforms them into normalized messages, and pushes them to the ingest server. Zero third-party dependencies. |
| Ingest Server | HTTP server (:19777) that accepts normalized messages, batch-writes via PostgreSQL COPY, and serves the read API for the dashboard. Opt-in API key authentication. |
| PostgreSQL | Stores sessions, messages, tool calls, tool results, token usage, and file progress. Schema auto-applied on startup (current version: v5). |
| Dashboard | Vite + React + TypeScript + Tailwind. Overview stats, session list, message detail with expandable tool calls, thinking content, and token counts. |

Data flow

  1. Daemon polls source directories every 5s for new/changed session files
  2. Source-specific collectors parse files incrementally (JSONL: line-resume, JSON/text: content-hash)
  3. Transforms normalize data into NormalizedMessage schema
  4. Pusher batches messages (500/batch) and POSTs to /ingest with retry + exponential backoff
  5. After successful push, daemon reports its read position via POST /api/file-progress
  6. Server's batch accumulator flushes to PostgreSQL via COPY (100 msgs or 5s, whichever first)
  7. On daemon restart, reconciliation compares local state against server's /api/sync endpoint
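The batch-and-retry behavior in step 4 can be sketched in a few lines. This is an illustrative sketch, not qc-trace's actual pusher API: the function name, the injectable `send` callback, and the retry parameters are all assumptions (in the real daemon, `send` would POST the batch as JSON to /ingest).

```python
import time

def push_batches(messages, send, batch_size=500, max_retries=5):
    """Slice messages into batches (500/batch) and deliver each with
    exponential backoff on transient failure. Illustrative sketch."""
    pushed = 0
    for i in range(0, len(messages), batch_size):
        batch = messages[i:i + batch_size]
        for attempt in range(max_retries):
            try:
                send(batch)          # in the daemon: HTTP POST to /ingest
                pushed += len(batch)
                break
            except OSError:
                # back off 1s, 2s, 4s, ... capped at 30s
                time.sleep(min(2 ** attempt, 30))
        else:
            raise RuntimeError("batch push failed after retries")
    return pushed
```

Keeping the transport behind a callback makes the batching logic trivially testable without a live server.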

Quick Start

# 1. Start PostgreSQL
scripts/dev-db.sh start

# 2. Start the ingest server
uv run python -m qc_trace.server.app

# 3. Start the daemon
uv run quickcall start

# 4. Start the dashboard
cd dashboard && npm run dev

User Setup (Daemon Only)

For developers who use AI CLI tools and want their session data tracked. The daemon watches local session files and pushes them to the ingest server. No database, no Docker, no dashboard needed on your machine.

Install

Cloud mode (pushes to trace.quickcall.dev):

curl -fsSL https://quickcall.dev/trace/install.sh | sh -s -- <org> <api-key>

Local mode (pushes to localhost:19777, no API key needed):

curl -fsSL https://quickcall.dev/trace/install.sh | sh

Named flags also work: --org <name> --key <key>.

When an API key is provided, the daemon pushes to the cloud server. Without a key, it defaults to localhost — useful for local development or self-hosted setups.

Idempotent — safe to re-run. Re-running updates org/key settings.

What happens when you run install.sh

Running the installer on a developer's laptop takes ~30 seconds and is fully hands-off after the initial command. Here's what happens step by step:

$ curl -fsSL https://quickcall.dev/trace/install.sh | sh -s -- pratilipi <api-key>

 ░█▀█░█░█░▀█▀░█▀▀░█░█░█▀▀░█▀█░█░░░█░░
 ░█░█░█░█░░█░░█░░░█▀▄░█░░░█▀█░█░░░█░░
 ░▀▀█░▀▀▀░▀▀▀░▀▀▀░▀░▀░▀▀▀░▀░▀░▀▀▀░▀▀▀
  trace  ·  ai session collector  ·  cloud
✓ Python 3.12
✓ uv already installed (uv 0.6.6)
✓ Shell config updated (~/.zshrc)
✓ quickcall CLI installed
✓ Org set to: pratilipi
==> Installing launchd agent...              # (or systemd on Linux)
✓ launchd agent installed and started
  Data:    ~/.quickcall-trace/
==> Verifying installation...
✓ Heartbeat sent to https://trace.quickcall.dev/ingest

QuickCall Trace installed successfully!

The daemon is now watching your AI CLI sessions and pushing to:
  https://trace.quickcall.dev/ingest

Commands:
  quickcall status    # Check daemon + stats
  quickcall logs -f   # Follow daemon logs

What it does:

  1. Pre-flight — checks Python 3.11+ and curl are available
  2. Installs uv — the fast Python package manager (skipped if already installed)
  3. Configures shell — adds ~/.local/bin to PATH in .zshrc / .bashrc so quickcall works in new terminals
  4. Installs the CLI — uv tool install qc-trace puts the quickcall binary in ~/.local/bin
  5. Writes org + key to config — stores {"org": "pratilipi", "api_key": "..."} in ~/.quickcall-trace/config.json
  6. Installs a background service — launchd on macOS, systemd on Linux (user-level, no root/sudo needed)
  7. Sends a heartbeat — POSTs a test message to the ingest server to verify connectivity

After install, the developer doesn't need to do anything. The daemon:

  • Starts automatically on login
  • Watches ~/.claude/, ~/.codex/, ~/.gemini/, ~/.cursor/ for AI session files
  • Pushes new messages to the central ingest server every 5 seconds
  • Auto-restarts on crash (via launchd/systemd)
  • Auto-updates itself every 5 minutes (checks PyPI, restarts to pick up new version)
  • Tags all data with the org name for filtering

No impact on the developer's workflow. The daemon is a lightweight background process (~10MB RSS) that reads session files and pushes JSON over HTTP. It does not modify any files, does not intercept any commands, and does not require any ongoing interaction.

How it works on the developer's laptop

graph TB
    subgraph "Developer's Laptop"
        subgraph "AI Tools (unchanged)"
            CC["Claude Code"]
            CX["Codex CLI"]
            GM["Gemini CLI"]
            CR["Cursor IDE"]
        end

        subgraph "Session Files (written by AI tools)"
            F1["~/.claude/projects/**/*.jsonl"]
            F2["~/.codex/sessions/**/*.jsonl"]
            F3["~/.gemini/tmp/**/session-*.json"]
            F4["~/.cursor/**/agent-transcripts/*.txt"]
        end

        CC --> F1
        CX --> F2
        GM --> F3
        CR --> F4

        subgraph "QuickCall Daemon (background service)"
            W["Watcher\n(polls every 5s)"]
            C["Collector\n(parses incrementally)"]
            P["Pusher\n(HTTP POST + retry)"]
        end

        F1 & F2 & F3 & F4 -.->|"reads"| W
        W --> C --> P

        subgraph "Local State"
            S["~/.quickcall-trace/\n  config.json (org)\n  state.json (progress)\n  push_status.json"]
        end

        C -.->|"tracks progress"| S
    end

    P -->|"POST /ingest\n(batched JSON)"| SRV["Central Ingest Server\ntrace.quickcall.dev"]
    SRV --> DB[("PostgreSQL")]
    DB --> DASH["Dashboard"]

The daemon only reads session files — it never writes to them or interferes with the AI tools. File processing is incremental: JSONL files resume from the last line read, JSON/text files re-process only when content changes (via SHA-256 hash).
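The two incremental strategies can be sketched as follows. The state-dict layout here is illustrative; the daemon's actual state.json format may differ.

```python
import hashlib
import json

def read_new_jsonl_lines(path, state):
    """Line-resume for JSONL sources: only parse lines added since the
    position recorded in `state`. Sketch; not qc-trace's actual code."""
    start = state.get(path, {}).get("lines_read", 0)
    with open(path) as f:
        lines = f.read().splitlines()
    new = [json.loads(line) for line in lines[start:] if line.strip()]
    state[path] = {"lines_read": len(lines)}
    return new

def changed_since_last_run(path, state):
    """Content-hash detection for whole-file JSON/text sources:
    re-process only when the SHA-256 digest differs."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if state.get(path, {}).get("sha256") == digest:
        return False
    state[path] = {"sha256": digest}
    return True
```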

CLI Commands

quickcall status         # Show daemon status, per-source stats, server health
quickcall status --json  # Machine-readable status output
quickcall logs           # View recent logs
quickcall logs -f        # Follow daemon logs
quickcall start          # Start daemon (background)
quickcall stop           # Stop daemon
quickcall setup          # Configure email and API key

Example status output

  QuickCall Trace v0.3.0
  Org: pratilipi
  Daemon: running (PID 12345) · uptime 3d 4h
  Server: https://trace.quickcall.dev/ingest ✓

  Source            Sessions    Messages   Last push
  ────────────────────────────────────────────────────
  Claude Code             12       3,847        2s ago
  Codex CLI                3         412        5s ago
  Gemini CLI               1          87        5s ago
  Cursor IDE               5       1,203        5s ago

  Total: 21 sessions · 5,549 messages

Start / Stop / Restart (local development)

# Start the daemon (runs in background)
uv run quickcall start

# Check what's happening
uv run quickcall status

# Stop it
uv run quickcall stop

# Restart (stop + start)
uv run quickcall stop && uv run quickcall start

When installed as a system service (via install.sh), the daemon starts on login and auto-restarts on crash. Use quickcall directly (no uv run).

Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| QC_TRACE_INGEST_URL | https://trace.quickcall.dev/ingest | Target ingest server URL |
| QC_TRACE_ORG | (from config.json) | Organization name (set by install.sh) |
| QC_TRACE_API_KEY | (from config.json) | API key sent with every request to the ingest server |
| QC_TRACE_MAX_FILES | 0 (unlimited) | Max session files to process per cycle (newest first). Useful for dev testing. |

Watched file patterns

| Source | Glob (relative to $HOME) |
| --- | --- |
| Claude Code | .claude/projects/**/*.jsonl |
| Codex CLI | .codex/sessions/*/*/*/rollout-*.jsonl |
| Gemini CLI | .gemini/tmp/*/chats/session-*.json |
| Cursor | .cursor/projects/*/agent-transcripts/*.txt |
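For illustration, these patterns can be expanded with Python's stdlib glob. This is a hypothetical helper; the real discovery logic lives in qc_trace/daemon/watcher.py and may differ.

```python
import glob
import os

PATTERNS = [
    ".claude/projects/**/*.jsonl",
    ".codex/sessions/*/*/*/rollout-*.jsonl",
    ".gemini/tmp/*/chats/session-*.json",
    ".cursor/projects/*/agent-transcripts/*.txt",
]

def discover_session_files(home=None):
    """Expand each watched pattern relative to $HOME (hypothetical
    helper). recursive=True lets ** match nested project dirs."""
    home = home or os.path.expanduser("~")
    found = []
    for pattern in PATTERNS:
        found.extend(glob.glob(os.path.join(home, pattern), recursive=True))
    return sorted(found)
```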

Daemon files

| File | Path | Purpose |
| --- | --- | --- |
| Config | ~/.quickcall-trace/config.json | Org, email, API key |
| State | ~/.quickcall-trace/state.json | Processing progress per file |
| Push status | ~/.quickcall-trace/push_status.json | Per-source push timestamps and counts |
| PID | ~/.quickcall-trace/quickcall.pid | Running daemon PID |
| Log | ~/.quickcall-trace/quickcall.log | stdout |
| Errors | ~/.quickcall-trace/quickcall.err | stderr |

Developer Setup (Full Stack)

For contributors developing the daemon, ingest server, dashboard, or schema transforms.

Prerequisites

  • Python 3.11+
  • Docker (for PostgreSQL)
  • Node.js 18+ (for dashboard)
  • uv (recommended)

1. Clone and set up Python

git clone git@github.com:quickcall-dev/trace.git
cd trace
uv sync --all-extras

2. Start PostgreSQL

scripts/dev-db.sh start

Starts PostgreSQL 16 on port 5432. Schema auto-applied on first server connection. Data persists in Docker volume (qc_trace_pgdata).

Default connection: postgresql://qc_trace:qc_trace_dev@localhost:5432/qc_trace

3. Start the ingest server

uv run python -m qc_trace.server.app

Starts on localhost:19777.

4. Start the daemon

uv run quickcall start

5. Start the dashboard

cd dashboard
npm install

# Local (default — connects to localhost:19777, no auth)
npm run dev

# Production (connects to trace.quickcall.dev, will prompt for admin API key)
VITE_API_URL=https://trace.quickcall.dev npm run dev

Opens at http://localhost:5173. Shows:

  • Overview — pipeline health, aggregate stats, live message feed, source distribution
  • Sessions — filterable table with drill-down
  • Session Detail — full message list with expandable tool calls, thinking content, and token counts

Quick test (without the daemon)

curl -X POST http://localhost:19777/ingest \
  -H 'Content-Type: application/json' \
  -d '[{"id":"test-1","session_id":"s1","source":"claude_code","msg_type":"user","timestamp":"2026-02-06T00:00:00Z","content":"hello world","source_schema_version":1}]'

Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| QC_TRACE_DSN | postgresql://qc_trace:qc_trace_dev@localhost:5432/qc_trace | PostgreSQL connection string |
| QC_TRACE_PORT | 19777 | Ingest server listen port |
| QC_TRACE_INGEST_URL | https://trace.quickcall.dev/ingest | Daemon target server URL |
| QC_TRACE_ADMIN_KEYS | (empty) | Comma-separated admin API keys (full read + write access) |
| QC_TRACE_PUSH_KEYS | (empty) | Comma-separated push API keys (write-only, for daemons) |
| QC_TRACE_API_KEYS | (empty) | Legacy; treated as push keys for backwards compatibility |
| QC_TRACE_CORS_ORIGIN | http://localhost:3000 | Allowed CORS origin for dashboard |

When both QC_TRACE_ADMIN_KEYS and QC_TRACE_PUSH_KEYS are empty, auth is disabled (all endpoints open).
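For example, to enable both tiers before starting a local server (the key values below are placeholders; generate your own secrets):

```shell
# Placeholder secrets for illustration only
export QC_TRACE_ADMIN_KEYS="admin-secret-1"
export QC_TRACE_PUSH_KEYS="daemon-key-1,daemon-key-2"
```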

Dev Testing (CLI Login & Platform)

When working on CLI login, device flow, or platform integration, you need to test against a local backend + frontend without disrupting the prod daemon running on your machine.

The problem: The installed quickcall binary talks to api.quickcall.dev (prod). The prod daemon runs via systemd/launchd. Manually swapping config between dev/prod is error-prone and breaks your running daemon.

The solution: scripts/dev-test.sh — a zero-config harness that runs an isolated dev environment alongside prod.

How it works

Prod (untouched)                    Dev (isolated)
~/.quickcall-trace/                 ~/.quickcall-trace-dev/
├── config.json → api.quickcall.dev ├── config.json → localhost:8000
├── .device_id                      ├── .device_id (copied from prod)
├── state.json                      └── (fresh state)
└── quickcall.pid
                                    Runs from source (uv run)
systemd daemon: running ✓           No daemon by default

Quick start

# See dev vs prod config side by side
./scripts/dev-test.sh

# Login against local backend (opens localhost:3000, not app.quickcall.dev)
./scripts/dev-test.sh login

# Check status in dev mode
./scripts/dev-test.sh status

# Start a dev daemon (isolated from prod daemon)
./scripts/dev-test.sh start

# Run any quickcall command in dev mode
./scripts/dev-test.sh <command>

# Drop into a shell where 'quickcall' is aliased to run from source
./scripts/dev-test.sh --shell

# Wipe dev config and start fresh
./scripts/dev-test.sh --clean

Prerequisites

Before running dev-test.sh, start the local services:

# 1. Local database (port 25432)
cd ~/quickcall.dev && ./start-local-db

# 2. Backend (port 8000)
cd ~/quickcall.dev/backend-quickcall-dev && uvicorn main:app --port 8000

# 3. Frontend (port 3000)
cd ~/quickcall.dev/frontend-quickcall-dev && pnpm dev

The script checks service health and warns you if anything is down.

Session limit

By default, the dev daemon only processes the 2 most recently modified session files — not your entire history. This prevents flooding your local ingest server on first start.

Override with QC_TRACE_MAX_FILES:

# Push only the 5 latest sessions
QC_TRACE_MAX_FILES=5 ./scripts/dev-test.sh start

# Push everything (same as prod behavior)
QC_TRACE_MAX_FILES=0 ./scripts/dev-test.sh start

This is powered by the max_files config option (see below).

Environment variables

Override dev URLs if your services run on different ports:

| Variable | Default | Description |
| --- | --- | --- |
| QC_DEV_PLATFORM_URL | http://localhost:8000 | Backend API URL |
| QC_DEV_INGEST_URL | http://localhost:19777/ingest | Ingest server URL |
| QC_DEV_FRONTEND_URL | http://localhost:3000 | Frontend URL |

What it sets under the hood

QC_TRACE_CONFIG_DIR=~/.quickcall-trace-dev   # Isolated config directory
QC_PLATFORM_URL=http://localhost:8000         # Local backend
QC_TRACE_INGEST_URL=http://localhost:19777    # Local ingest
QC_TRACE_MAX_FILES=2                         # Only 2 latest sessions

Troubleshooting

Dashboard shows 0 sessions after a restart

If the Docker volume doesn't survive a reboot, the Postgres data is lost, but the daemon's state file (~/.quickcall-trace/state.json) still has those files marked as processed, so nothing gets re-pushed.

Fix: reset the state file and restart the daemon.

rm ~/.quickcall-trace/state.json
quickcall stop
quickcall start

This is always safe — the writer uses ON CONFLICT DO NOTHING so duplicate messages are silently skipped.
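The deduplication can be illustrated with a toy table. qc-trace itself batch-writes to PostgreSQL via COPY; the sketch below uses SQLite from the stdlib only because it supports the same ON CONFLICT DO NOTHING clause, and the table shape is invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id TEXT PRIMARY KEY, content TEXT)")

def write_batch(rows):
    """Insert rows, silently skipping any whose id already exists."""
    conn.executemany(
        "INSERT INTO messages (id, content) VALUES (?, ?) "
        "ON CONFLICT(id) DO NOTHING",
        rows)
    conn.commit()

write_batch([("m1", "hello"), ("m2", "world")])
write_batch([("m1", "hello"), ("m3", "again")])  # m1 is a duplicate: skipped
```

Because re-pushing a message is a no-op, resetting the daemon's state and replaying files never creates duplicate rows.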

Daemon/server line mismatch

The daemon tracks its actual read position via file_progress (separate from message storage). On startup, reconciliation compares local state against the server and rewinds if needed. If you suspect mismatches:

# Check server's view of file progress
curl http://localhost:19777/api/sync

Development

Run tests

# All 296 tests
uv run pytest tests/ -v

# Single file
uv run pytest tests/test_transforms.py

# With coverage
uv run pytest tests/ --cov=qc_trace --cov-report=html

Project structure

qc_trace/
  schemas/           # Source schemas + transforms → NormalizedMessage
    unified.py       # The central normalized schema
    claude_code/     # Claude Code JSONL parser
    codex_cli/       # Codex CLI JSONL parser
    gemini_cli/      # Gemini CLI JSON parser
    cursor/          # Cursor IDE transcript parser
  db/
    schema.sql       # PostgreSQL schema (sessions, messages, tool_calls, file_progress)
    migrations.py    # Incremental schema migrations (v1 → v5)
    connection.py    # Async connection pool (psycopg3)
    writer.py        # Batch COPY writer with duplicate handling
    reader.py        # Read queries for the dashboard API
  server/
    app.py           # HTTP server (:19777) — ingest + read API
    handlers.py      # Request handlers (ingest, sessions, file-progress, stats, feed)
    batch.py         # Batch accumulator (flush on 100 msgs or 5s)
    auth.py          # API key authentication + CORS config
  daemon/
    watcher.py       # File discovery via glob patterns
    collector.py     # Source-specific collectors with incremental processing
    pusher.py        # HTTP POST with retry queue + exponential backoff
    state.py         # Atomic state persistence
    main.py          # Poll-collect-push loop + server reconciliation + auto-update
    config.py        # Daemon configuration (org, globs, retry settings)
    push_status.py   # Per-source push timestamps for CLI status
  cli/
    traced.py        # CLI: start, stop, status, logs, db init
dashboard/           # Vite + React + TypeScript + Tailwind
tests/               # 296 tests
docs/                # Deployment guide, review docs
docker-compose.yml   # PostgreSQL 16

API Endpoints

| Method | Path | Auth | Description |
| --- | --- | --- | --- |
| GET | /health | Public | Health check + DB connectivity |
| GET | /api/latest-version | Public | Latest daemon version |
| POST | /ingest | Push / Admin | Accept NormalizedMessage JSON array |
| POST | /sessions | Push / Admin | Upsert a session record |
| POST | /api/file-progress | Push / Admin | Report daemon file read position |
| GET | /api/sync | Push / Admin | File sync state for daemon reconciliation |
| GET | /api/stats | Admin | Aggregate stats (sessions, messages, tokens, by source/type). ?org= |
| GET | /api/sessions | Admin | Session list. ?source=, ?id=, ?org=, ?limit=, ?offset= |
| GET | /api/messages | Admin | Messages for a session. ?session_id= required |
| GET | /api/feed | Admin | Latest messages across all sessions. ?since=, ?org=, ?limit= |

Auth is two-tier: push keys can write data (for daemons), admin keys can read + write (for dashboard/API). Auth is disabled when no keys are configured.
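A minimal sketch of that tier check (function name, variable names, and key values are hypothetical, not qc-trace's actual auth module):

```python
# Hypothetical key sets; in the server these come from
# QC_TRACE_ADMIN_KEYS / QC_TRACE_PUSH_KEYS.
ADMIN_KEYS = {"admin-secret-1"}
PUSH_KEYS = {"daemon-key-1"}

def allowed(api_key, write_only):
    """Admin keys may read and write; push keys may only write.
    When no keys are configured at all, auth is disabled."""
    if not ADMIN_KEYS and not PUSH_KEYS:
        return True
    if api_key in ADMIN_KEYS:
        return True
    return write_only and api_key in PUSH_KEYS
```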

Adding a New CLI Source

  1. Create qc_trace/schemas/{tool_name}/v1.py with frozen TypedDict schemas
  2. Create qc_trace/schemas/{tool_name}/transform.py returning list[NormalizedMessage]
  3. Add glob pattern to qc_trace/daemon/config.py
  4. Add collector logic to qc_trace/daemon/collector.py
  5. Add test fixtures in tests/fixtures/ and tests in tests/
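A minimal sketch of step 2, the transform. The output field names mirror the /ingest example in the Quick test section; the source name "mytool" and the input record shape are hypothetical.

```python
def transform(raw_records, session_id):
    """Map a tool's raw session records to NormalizedMessage-shaped
    dicts. Illustrative sketch for a hypothetical source "mytool"."""
    out = []
    for i, rec in enumerate(raw_records):
        out.append({
            "id": f"{session_id}-{i}",
            "session_id": session_id,
            "source": "mytool",
            "msg_type": rec["role"],       # e.g. "user" / "assistant"
            "timestamp": rec["ts"],
            "content": rec["text"],
            "source_schema_version": 1,
        })
    return out
```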

Production Deployment

See docs/deployment.md for the full production deployment guide, including:

  • Environment variable reference
  • Authentication setup (API key)
  • Database configuration and connection pooling
  • Server limits and tuning
  • Daemon configuration reference
  • macOS (launchd) and Linux (systemd) service installation
  • Production checklist

Download files

Source Distribution

qc_trace-0.4.62.tar.gz (404.4 kB)

Built Distribution

qc_trace-0.4.62-py3-none-any.whl (125.5 kB)

File details: qc_trace-0.4.62.tar.gz

  • Size: 404.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 6d79b380e951ff0e7334ec75789a5040ce501ccf2be32bf27c4390a08b27997a |
| MD5 | ee4ae73332e610bde57dc16e9bed1e42 |
| BLAKE2b-256 | a89291b2e0f7f26205235d5f7018b7e00987a515faba3cd0c0e4f3487c0fe3e3 |

File details: qc_trace-0.4.62-py3-none-any.whl

  • Size: 125.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 1a9666f808d98d36afeab1ab4320d803014ade633002ceafd5e6ebf90b259e57 |
| MD5 | 471e8256426f3dab0af67a3ebd1057ec |
| BLAKE2b-256 | 048090cb0deb1b79efbe84a3497740b17fe11d46ad72380a9a373269ef01945a |
