qc-trace

Multi-CLI session tracking and normalization for QuickCall.

A pure Python library that normalizes AI CLI session data from multiple tools into a unified schema, stores it in PostgreSQL, and provides a live dashboard to visualize the data flow.
Supported sources: Claude Code, Codex CLI, Gemini CLI, Cursor IDE
Table of Contents
- Architecture
- Quick Start
- User Setup (Daemon Only)
- Developer Setup (Full Stack)
- Troubleshooting
- Development
- Adding a New CLI Source
- Production Deployment
Architecture
```mermaid
graph LR
    subgraph "Dev Laptop A (org: pratilipi)"
        A1["~/.claude/**/*.jsonl"]
        A2["~/.codex/**/*.jsonl"]
        DA["Daemon"]
    end
    subgraph "Dev Laptop B (org: pratilipi)"
        B1["~/.gemini/**/session-*.json"]
        B2["~/.cursor/**/*.txt"]
        DB2["Daemon"]
    end
    A1 & A2 --> DA
    B1 & B2 --> DB2
    DA -- "POST /ingest" --> S["Ingest Server\n:19777"]
    DB2 -- "POST /ingest" --> S
    S -- "COPY batch write" --> P[("PostgreSQL\n:5432")]
    P -- "read queries" --> S
    S -- "GET /api/*\n(?org=pratilipi)" --> UI["Dashboard\n:5173"]
```
Components
| Component | Description |
|---|---|
| Daemon (`quickcall`) | Watches local AI tool session files, transforms them into normalized messages, and pushes them to the ingest server. Zero third-party dependencies. |
| Ingest Server | HTTP server (:19777) that accepts normalized messages, batch-writes via PostgreSQL COPY, and serves the read API for the dashboard. Opt-in API key authentication. |
| PostgreSQL | Stores sessions, messages, tool calls, tool results, token usage, and file progress. Schema auto-applied on startup (current version: v5). |
| Dashboard | Vite + React + TypeScript + Tailwind. Overview stats, session list, message detail with expandable tool calls, thinking content, and token counts. |
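The flush policy in `server/batch.py` (100 messages or 5 seconds, whichever comes first) is easy to picture as a size-or-age-bounded buffer. A minimal synchronous sketch of that policy; the real accumulator is async and hands batches to the COPY writer:

```python
import time


class BatchAccumulator:
    """Buffer messages; flush when the batch is full or the oldest entry ages out.

    Sketch only -- mirrors the flush policy described above, not the real code.
    """

    def __init__(self, flush_fn, max_size=100, max_age=5.0):
        self.flush_fn = flush_fn   # e.g. the COPY writer
        self.max_size = max_size   # flush at 100 messages...
        self.max_age = max_age     # ...or after 5 seconds
        self.buffer = []
        self.oldest = None

    def add(self, msg):
        if not self.buffer:
            self.oldest = time.monotonic()
        self.buffer.append(msg)
        if len(self.buffer) >= self.max_size:
            self.flush()

    def tick(self):
        """Call periodically (e.g. once per second) to enforce the age bound."""
        if self.buffer and time.monotonic() - self.oldest >= self.max_age:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(self.buffer)
            self.buffer = []
```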
Data flow
- Daemon polls source directories every 5s for new/changed session files
- Source-specific collectors parse files incrementally (JSONL: line-resume; JSON/text: content-hash)
- Transforms normalize data into the `NormalizedMessage` schema
- Pusher batches messages (500/batch) and POSTs to `/ingest` with retry + exponential backoff (sketched below)
- After a successful push, the daemon reports its read position via `POST /api/file-progress`
- The server's batch accumulator flushes to PostgreSQL via COPY (100 messages or 5s, whichever comes first)
- On daemon restart, reconciliation compares local state against the server's `/api/sync` endpoint
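The push step can be reproduced with the standard library alone, which matches the daemon's zero-dependency constraint. A hedged sketch of batching with retry and exponential backoff; the retry count and sleep schedule here are illustrative, not the daemon's exact values:

```python
import json
import time
import urllib.error
import urllib.request

INGEST_URL = "http://localhost:19777/ingest"


def push_batch(messages: list[dict], retries: int = 5) -> bool:
    """POST one batch, backing off 1s, 2s, 4s, ... between failed attempts."""
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(messages).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return 200 <= resp.status < 300
        except urllib.error.URLError:
            time.sleep(2 ** attempt)  # exponential backoff
    return False


def push_all(messages: list[dict], batch_size: int = 500) -> None:
    """Split into 500-message batches, as the data-flow notes describe."""
    for i in range(0, len(messages), batch_size):
        if not push_batch(messages[i : i + batch_size]):
            break  # leave the remainder for a later retry pass
```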
Quick Start
```bash
# 1. Start PostgreSQL
scripts/dev-db.sh start

# 2. Start the ingest server
uv run python -m qc_trace.server.app

# 3. Start the daemon
uv run quickcall start

# 4. Start the dashboard
cd dashboard && npm run dev
```
User Setup (Daemon Only)
For developers who use AI CLI tools and want their session data tracked. The daemon watches local session files and pushes them to the ingest server. No database, no Docker, no dashboard needed on your machine.
Install
Cloud mode (pushes to trace.quickcall.dev):

```bash
curl -fsSL https://quickcall.dev/trace/install.sh | sh -s -- <org> <api-key>
```

Local mode (pushes to localhost:19777, no API key needed):

```bash
curl -fsSL https://quickcall.dev/trace/install.sh | sh
```

Named flags also work: `--org <name> --key <key>`.
When an API key is provided, the daemon pushes to the cloud server. Without a key, it defaults to localhost — useful for local development or self-hosted setups.
Idempotent — safe to re-run. Re-running updates org/key settings.
What happens when you run install.sh
Running the installer on a developer's laptop takes ~30 seconds and is fully hands-off after the initial command. Here's what happens step by step:
```text
$ curl -fsSL https://quickcall.dev/trace/install.sh | sh -s -- pratilipi <api-key>

░█▀█░█░█░▀█▀░█▀▀░█░█░█▀▀░█▀█░█░░░█░░
░█░█░█░█░░█░░█░░░█▀▄░█░░░█▀█░█░░░█░░
░▀▀█░▀▀▀░▀▀▀░▀▀▀░▀░▀░▀▀▀░▀░▀░▀▀▀░▀▀▀
trace · ai session collector · cloud

✓ Python 3.12
✓ uv already installed (uv 0.6.6)
✓ Shell config updated (~/.zshrc)
✓ quickcall CLI installed
✓ Org set to: pratilipi

==> Installing launchd agent...   # (or systemd on Linux)
✓ launchd agent installed and started
  Data: ~/.quickcall-trace/

==> Verifying installation...
✓ Heartbeat sent to https://trace.quickcall.dev/ingest

QuickCall Trace installed successfully!

The daemon is now watching your AI CLI sessions and pushing to:
  https://trace.quickcall.dev/ingest

Commands:
  quickcall status   # Check daemon + stats
  quickcall logs -f  # Follow daemon logs
```
What it does:
- Pre-flight: checks that Python 3.11+ and curl are available
- Installs uv: the fast Python package manager (skipped if already installed)
- Configures shell: adds `~/.local/bin` to PATH in `.zshrc`/`.bashrc` so `quickcall` works in new terminals
- Installs the CLI: `uv tool install qc-trace` puts the `quickcall` binary in `~/.local/bin`
- Writes org + key to config: stores `{"org": "pratilipi", "api_key": "..."}` in `~/.quickcall-trace/config.json`
- Installs a background service: launchd on macOS, systemd on Linux (user-level, no root/sudo needed)
- Sends a heartbeat: POSTs a test message to the ingest server to verify connectivity
After install, the developer doesn't need to do anything. The daemon:
- Starts automatically on login
- Watches `~/.claude/`, `~/.codex/`, `~/.gemini/`, and `~/.cursor/` for AI session files
- Pushes new messages to the central ingest server every 5 seconds
- Auto-restarts on crash (via launchd/systemd)
- Auto-updates itself every 5 minutes: checks PyPI, then restarts to pick up the new version (see the version-check sketch below)
- Tags all data with the org name for filtering
No impact on the developer's workflow. The daemon is a lightweight background process (~10MB RSS) that reads session files and pushes JSON over HTTP. It does not modify any files, does not intercept any commands, and does not require any ongoing interaction.
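The auto-update check can be expressed against PyPI's public JSON API. A sketch of the version probe only; the daemon's actual update-and-restart logic lives in `daemon/main.py` and may differ:

```python
import json
import urllib.request


def latest_pypi_version(package: str = "qc-trace") -> str:
    """Return the latest released version of a package from PyPI's JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["info"]["version"]


# e.g. compare against the running version and restart if they differ
```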
How it works on the developer's laptop
```mermaid
graph TB
    subgraph "Developer's Laptop"
        subgraph "AI Tools (unchanged)"
            CC["Claude Code"]
            CX["Codex CLI"]
            GM["Gemini CLI"]
            CR["Cursor IDE"]
        end
        subgraph "Session Files (written by AI tools)"
            F1["~/.claude/projects/**/*.jsonl"]
            F2["~/.codex/sessions/**/*.jsonl"]
            F3["~/.gemini/tmp/**/session-*.json"]
            F4["~/.cursor/**/agent-transcripts/*.txt"]
        end
        CC --> F1
        CX --> F2
        GM --> F3
        CR --> F4
        subgraph "QuickCall Daemon (background service)"
            W["Watcher\n(polls every 5s)"]
            C["Collector\n(parses incrementally)"]
            P["Pusher\n(HTTP POST + retry)"]
        end
        F1 & F2 & F3 & F4 -.->|"reads"| W
        W --> C --> P
        subgraph "Local State"
            S["~/.quickcall-trace/\n config.json (org)\n state.json (progress)\n push_status.json"]
        end
        C -.->|"tracks progress"| S
    end
    P -->|"POST /ingest\n(batched JSON)"| SRV["Central Ingest Server\ntrace.quickcall.dev"]
    SRV --> DB[("PostgreSQL")]
    DB --> DASH["Dashboard"]
```
The daemon only reads session files — it never writes to them or interferes with the AI tools. File processing is incremental: JSONL files resume from the last line read, JSON/text files re-process only when content changes (via SHA-256 hash).
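Both resume strategies are small enough to sketch with the standard library. Assuming `state` maps a file path to a byte offset (JSONL) or a content digest (JSON/text); the real collector's bookkeeping in `state.json` may be shaped differently:

```python
import hashlib
import json
from pathlib import Path


def read_new_jsonl_lines(path: Path, state: dict) -> list[dict]:
    """Resume a JSONL file from the last byte offset we processed."""
    offset = state.get(str(path), 0)
    with path.open("rb") as f:
        f.seek(offset)
        raw = f.read()
    state[str(path)] = offset + len(raw)
    return [json.loads(line) for line in raw.splitlines() if line.strip()]


def changed_since_last_run(path: Path, state: dict) -> bool:
    """Re-process a JSON/text file only when its SHA-256 digest changes."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if state.get(str(path)) == digest:
        return False
    state[str(path)] = digest
    return True
```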
CLI Commands
```bash
quickcall status         # Show daemon status, per-source stats, server health
quickcall status --json  # Machine-readable status output
quickcall logs           # View recent logs
quickcall logs -f        # Follow daemon logs
quickcall start          # Start daemon (background)
quickcall stop           # Stop daemon
quickcall setup          # Configure email and API key
```
Example status output
```text
QuickCall Trace v0.3.0
Org: pratilipi
Daemon: running (PID 12345) · uptime 3d 4h
Server: https://trace.quickcall.dev/ingest ✓

Source         Sessions   Messages   Last push
────────────────────────────────────────────────────
Claude Code          12      3,847   2s ago
Codex CLI             3        412   5s ago
Gemini CLI            1         87   5s ago
Cursor IDE            5      1,203   5s ago

Total: 21 sessions · 5,549 messages
```
Start / Stop / Restart (local development)
```bash
# Start the daemon (runs in background)
uv run quickcall start

# Check what's happening
uv run quickcall status

# Stop it
uv run quickcall stop

# Restart (stop + start)
uv run quickcall stop && uv run quickcall start
```

When installed as a system service (via `install.sh`), the daemon starts on login and auto-restarts on crash. Use `quickcall` directly (no `uv run`).
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `QC_TRACE_INGEST_URL` | `https://trace.quickcall.dev/ingest` | Target ingest server URL |
| `QC_TRACE_ORG` | (from config.json) | Organization name (set by install.sh) |
| `QC_TRACE_API_KEY` | (from config.json) | API key sent with every request to the ingest server |
Watched file patterns
| Source | Glob (relative to `$HOME`) |
|---|---|
| Claude Code | `.claude/projects/**/*.jsonl` |
| Codex CLI | `.codex/sessions/*/*/*/rollout-*.jsonl` |
| Gemini CLI | `.gemini/tmp/*/chats/session-*.json` |
| Cursor | `.cursor/projects/*/agent-transcripts/*.txt` |
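Discovery is plain globbing over `$HOME`. A sketch using the patterns from the table above:

```python
from pathlib import Path

# The watched patterns from the table above, relative to $HOME.
WATCH_GLOBS = [
    ".claude/projects/**/*.jsonl",
    ".codex/sessions/*/*/*/rollout-*.jsonl",
    ".gemini/tmp/*/chats/session-*.json",
    ".cursor/projects/*/agent-transcripts/*.txt",
]


def discover_session_files() -> list[Path]:
    """Expand every watched glob under the user's home directory."""
    home = Path.home()
    return [p for pattern in WATCH_GLOBS for p in home.glob(pattern)]
```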
Daemon files
| File | Path | Purpose |
|---|---|---|
| Config | `~/.quickcall-trace/config.json` | Org, email, API key |
| State | `~/.quickcall-trace/state.json` | Processing progress per file |
| Push status | `~/.quickcall-trace/push_status.json` | Per-source push timestamps and counts |
| PID | `~/.quickcall-trace/quickcall.pid` | Running daemon PID |
| Log | `~/.quickcall-trace/quickcall.log` | stdout |
| Errors | `~/.quickcall-trace/quickcall.err` | stderr |
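The state file is persisted atomically, so a crash mid-write can't leave `state.json` half-written. A minimal sketch of the usual write-temp-then-rename pattern, not the daemon's exact code:

```python
import json
import os
import tempfile
from pathlib import Path


def save_state_atomically(state: dict, path: Path) -> None:
    """Write to a temp file in the same directory, then atomically swap it in."""
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # make sure bytes hit disk first
        os.replace(tmp, path)     # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp)            # clean up the orphaned temp file
        raise
```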
Developer Setup (Full Stack)
For contributors developing the daemon, ingest server, dashboard, or schema transforms.
Prerequisites
- Python 3.11+
- Docker (for PostgreSQL)
- Node.js 18+ (for dashboard)
- uv (recommended)
1. Clone and set up Python
```bash
git clone git@github.com:quickcall-dev/trace.git
cd trace
uv sync --all-extras
```
2. Start PostgreSQL
```bash
scripts/dev-db.sh start
```

Starts PostgreSQL 16 on port 5432. The schema is auto-applied on first server connection. Data persists in the Docker volume (`qc_trace_pgdata`).

Default connection: `postgresql://qc_trace:qc_trace_dev@localhost:5432/qc_trace`
3. Start the ingest server
```bash
uv run python -m qc_trace.server.app
```

Starts on `localhost:19777`.
4. Start the daemon
```bash
uv run quickcall start
```
5. Start the dashboard
```bash
cd dashboard
npm install

# Local (default — connects to localhost:19777, no auth)
npm run dev

# Production (connects to trace.quickcall.dev, will prompt for admin API key)
VITE_API_URL=https://trace.quickcall.dev npm run dev
```
Opens at http://localhost:5173. Shows:
- Overview — pipeline health, aggregate stats, live message feed, source distribution
- Sessions — filterable table with drill-down
- Session Detail — full message list with expandable tool calls, thinking content, and token counts
Quick test (without the daemon)
```bash
curl -X POST http://localhost:19777/ingest \
  -H 'Content-Type: application/json' \
  -d '[{"id":"test-1","session_id":"s1","source":"claude_code","msg_type":"user","timestamp":"2026-02-06T00:00:00Z","content":"hello world","source_schema_version":1}]'
```
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `QC_TRACE_DSN` | `postgresql://qc_trace:qc_trace_dev@localhost:5432/qc_trace` | PostgreSQL connection string |
| `QC_TRACE_PORT` | `19777` | Ingest server listen port |
| `QC_TRACE_INGEST_URL` | `https://trace.quickcall.dev/ingest` | Daemon target server URL |
| `QC_TRACE_ADMIN_KEYS` | (empty) | Comma-separated admin API keys (full read + write access) |
| `QC_TRACE_PUSH_KEYS` | (empty) | Comma-separated push API keys (write-only, for daemons) |
| `QC_TRACE_API_KEYS` | (empty) | Legacy; treated as push keys for backwards compatibility |
| `QC_TRACE_CORS_ORIGIN` | `http://localhost:3000` | Allowed CORS origin for the dashboard |
When both `QC_TRACE_ADMIN_KEYS` and `QC_TRACE_PUSH_KEYS` are empty, auth is disabled (all endpoints open).
Troubleshooting
Dashboard shows 0 sessions after a restart
Postgres data is lost if the Docker volume doesn't survive a reboot, but the daemon's state file (`~/.quickcall-trace/state.json`) still has those files marked as processed, so the daemon never re-pushes them.
Fix: reset the state file and restart the daemon.
```bash
rm ~/.quickcall-trace/state.json
quickcall stop
quickcall start
```
This is always safe: the writer uses `ON CONFLICT DO NOTHING`, so duplicate messages are silently skipped.
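The dedup comes from the writer's staging pattern: COPY itself can't skip conflicting rows, so batches typically land in a temp table and move over with `ON CONFLICT DO NOTHING`. A sketch of that pattern with psycopg3 (the pooling details and exact table columns live in `db/writer.py`; the names here are illustrative):

```python
import psycopg


async def write_batch(dsn: str, rows: list[tuple]) -> None:
    """COPY into a temp staging table, then insert with ON CONFLICT DO NOTHING."""
    async with await psycopg.AsyncConnection.connect(dsn) as conn:
        async with conn.cursor() as cur:
            await cur.execute(
                "CREATE TEMP TABLE _staging (LIKE messages) ON COMMIT DROP"
            )
            async with cur.copy("COPY _staging FROM STDIN") as copy:
                for row in rows:
                    await copy.write_row(row)
            await cur.execute(
                "INSERT INTO messages SELECT * FROM _staging "
                "ON CONFLICT (id) DO NOTHING"
            )
        await conn.commit()
```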
Daemon/server line mismatch
The daemon tracks its actual read position via file_progress (separate from message storage). On startup, reconciliation compares local state against the server and rewinds if needed. If you suspect mismatches:
```bash
# Check the server's view of file progress
curl http://localhost:19777/api/sync
```
Development
Run tests
```bash
# All 296 tests
uv run pytest tests/ -v

# Single file
uv run pytest tests/test_transforms.py

# With coverage
uv run pytest tests/ --cov=qc_trace --cov-report=html
```
Project structure
```text
qc_trace/
  schemas/           # Source schemas + transforms → NormalizedMessage
    unified.py       # The central normalized schema
    claude_code/     # Claude Code JSONL parser
    codex_cli/       # Codex CLI JSONL parser
    gemini_cli/      # Gemini CLI JSON parser
    cursor/          # Cursor IDE transcript parser
  db/
    schema.sql       # PostgreSQL schema (sessions, messages, tool_calls, file_progress)
    migrations.py    # Incremental schema migrations (v1 → v5)
    connection.py    # Async connection pool (psycopg3)
    writer.py        # Batch COPY writer with duplicate handling
    reader.py        # Read queries for the dashboard API
  server/
    app.py           # HTTP server (:19777) — ingest + read API
    handlers.py      # Request handlers (ingest, sessions, file-progress, stats, feed)
    batch.py         # Batch accumulator (flush on 100 msgs or 5s)
    auth.py          # API key authentication + CORS config
  daemon/
    watcher.py       # File discovery via glob patterns
    collector.py     # Source-specific collectors with incremental processing
    pusher.py        # HTTP POST with retry queue + exponential backoff
    state.py         # Atomic state persistence
    main.py          # Poll-collect-push loop + server reconciliation + auto-update
    config.py        # Daemon configuration (org, globs, retry settings)
    push_status.py   # Per-source push timestamps for CLI status
  cli/
    traced.py        # CLI: start, stop, status, logs, db init
dashboard/           # Vite + React + TypeScript + Tailwind
tests/               # 296 tests
docs/                # Deployment guide, review docs
docker-compose.yml   # PostgreSQL 16
```
API Endpoints
| Method | Path | Auth | Description |
|---|---|---|---|
| GET | `/health` | Public | Health check + DB connectivity |
| GET | `/api/latest-version` | Public | Latest daemon version |
| POST | `/ingest` | Push / Admin | Accept NormalizedMessage JSON array |
| POST | `/sessions` | Push / Admin | Upsert a session record |
| POST | `/api/file-progress` | Push / Admin | Report daemon file read position |
| GET | `/api/sync` | Push / Admin | File sync state for daemon reconciliation |
| GET | `/api/stats` | Admin | Aggregate stats (sessions, messages, tokens, by source/type). `?org=` |
| GET | `/api/sessions` | Admin | Session list. `?source=`, `?id=`, `?org=`, `?limit=`, `?offset=` |
| GET | `/api/messages` | Admin | Messages for a session. `?session_id=` required |
| GET | `/api/feed` | Admin | Latest messages across all sessions. `?since=`, `?org=`, `?limit=` |
Auth is two-tier: push keys can write data (for daemons), admin keys can read + write (for dashboard/API). Auth is disabled when no keys are configured.
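Reading the API from a script only needs the documented query parameters. One caveat: the Authorization scheme below is an assumption for illustration; check `server/auth.py` for the header the server actually expects:

```python
import json
import urllib.request

BASE_URL = "http://localhost:19777"


def list_sessions(org: str, limit: int = 20, admin_key: str | None = None):
    """Fetch recent sessions for one org via GET /api/sessions."""
    req = urllib.request.Request(f"{BASE_URL}/api/sessions?org={org}&limit={limit}")
    if admin_key:  # only needed when QC_TRACE_ADMIN_KEYS is configured
        req.add_header("Authorization", f"Bearer {admin_key}")  # assumed scheme
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```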
Adding a New CLI Source
- Create `qc_trace/schemas/{tool_name}/v1.py` with frozen TypedDict schemas
- Create `qc_trace/schemas/{tool_name}/transform.py` returning `list[NormalizedMessage]` (sketched below)
- Add the glob pattern to `qc_trace/daemon/config.py`
- Add collector logic to `qc_trace/daemon/collector.py`
- Add test fixtures in `tests/fixtures/` and tests in `tests/`
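A transform is a pure function from one source record to the unified schema. A minimal sketch: the `NormalizedMessage` fields below are inferred from the `/ingest` quick-test payload earlier in this README, and the raw field names (`uuid`, `created_at`, ...) are purely hypothetical. The authoritative schema is `qc_trace/schemas/unified.py`:

```python
from typing import TypedDict


class NormalizedMessage(TypedDict):
    """Subset of the unified schema, inferred from the /ingest example above."""
    id: str
    session_id: str
    source: str
    msg_type: str
    timestamp: str
    content: str
    source_schema_version: int


def transform(raw: dict) -> list[NormalizedMessage]:
    """Map one hypothetical source record to zero or more normalized messages."""
    return [{
        "id": raw["uuid"],                     # hypothetical source field
        "session_id": raw["conversation_id"],  # hypothetical source field
        "source": "my_tool",
        "msg_type": raw.get("role", "user"),
        "timestamp": raw["created_at"],        # hypothetical source field
        "content": raw.get("text", ""),
        "source_schema_version": 1,
    }]
```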
Production Deployment
See docs/deployment.md for the full production deployment guide, including:
- Environment variable reference
- Authentication setup (API key)
- Database configuration and connection pooling
- Server limits and tuning
- Daemon configuration reference
- macOS (launchd) and Linux (systemd) service installation
- Production checklist