# Copilot Usage Analytics

Local-first Copilot usage analytics — token tracking, premium estimates, and dashboard.
A local-first analytics dashboard that parses your VS Code Copilot chat session data and visualises token usage, premium request estimates, and model distribution — all without sending any data externally.
## Why I Built This

I built this tool to make GitHub Copilot usage more transparent and actionable in real development work. Beyond showing token usage over time, it helps estimate and allocate AI costs at the repository level — something that is difficult today, when developers work across multiple systems in parallel. By turning persisted Copilot session data into a timeline tied to projects, the tool makes it easier to understand usage patterns, compare workflows, and assign clearer cost estimates to each repo.
## Features
- Incremental scanning — Only parses new or changed JSONL files on each run
- DuckDB storage — Fast local analytical database, no server required
- Premium request estimation — Calculates costs based on GitHub's model multiplier table
- Multi-page dashboard — Overview with KPI cards and charts, plus a detailed Explorer page with search & filters
- Per-workspace breakdown — See which projects use the most tokens
- Model distribution — Visualise usage across GPT-4o, Claude, Gemini, and other models
- Badge export — Shields.io-compatible JSON badges for each workspace
- Cross-platform — Windows, macOS, and Linux
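The incremental scanning mentioned above can be approximated with a simple mtime/size signature check. This is an illustrative sketch, not the tool's actual code — the tool persists its scan state in DuckDB, while this example uses a plain dict:

```python
from pathlib import Path

def changed_files(paths, seen):
    """Return files that are new or modified since the last scan.

    `seen` maps path -> (mtime_ns, size) from the previous run and is
    updated in place, so the next call only reports fresh changes.
    """
    changed = []
    for p in map(Path, paths):
        stat = p.stat()
        signature = (stat.st_mtime_ns, stat.st_size)
        if seen.get(str(p)) != signature:
            changed.append(p)
            seen[str(p)] = signature
    return changed
```

On each run, only the paths returned by `changed_files` need to be re-parsed; unchanged JSONL files are skipped entirely.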
## Quick Start

### Install

```bash
pip install -e .
```

### Run

```bash
# Scan data and launch dashboard (default)
copilot-usage

# Scan only (no dashboard)
copilot-usage analyze

# Dashboard only (skip scan)
copilot-usage dashboard
```

The dashboard opens automatically at http://127.0.0.1:8050.
## CLI Options

| Flag | Description |
|---|---|
| `--port PORT` | Dashboard port (default: 8050) |
| `--no-browser` | Don't auto-open the browser |
| `-v, --verbose` | Enable debug logging |
## CLI & Terminal Dashboard

The tool ships with an interactive CLI powered by Rich and InquirerPy. When launched without arguments, you get an arrow-key menu to scan, launch the web dashboard, open the terminal dashboard, or adjust settings — all without leaving the terminal.

A full terminal UI dashboard built with Textual shows KPIs, model breakdown, and workspace stats directly in the console. Press `S` to trigger a scan, `R` to refresh, and `Q` to quit.

```bash
# Interactive mode (arrow-key menu)
copilot-usage

# Launch terminal dashboard directly
copilot-usage tui
```
## Web Dashboard
The web dashboard is a multi-page Plotly Dash application served locally. It provides interactive charts, filterable tables, and real-time pipeline controls — all rendered in the browser with no external dependencies or data leaving your machine.
### Overview
The main page shows at-a-glance KPI cards, a daily token timeline chart, model distribution pie chart, and summary tables for workspaces and sessions.
### Explorer
A dedicated search & filter page where you can:
- Search by session ID, workspace, or model name
- Filter by date range, workspace, model, and minimum token count
- Sort results by any column
- Browse the full event-level detail
### Pipeline
Run the data ingestion pipeline directly from the dashboard with a real-time console output.
### Badges
Generate Shields.io-compatible JSON badges for your workspaces.
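Shields.io's endpoint badge format is a small JSON document with a required `schemaVersion` of 1. A sketch of how such a badge could be generated (the `make_badge` helper and the example label/message are illustrative, not the tool's actual output):

```python
import json

def make_badge(label: str, message: str, color: str = "blue") -> dict:
    """Build a Shields.io endpoint-compatible badge payload.

    schemaVersion must be the integer 1 for Shields to accept the JSON.
    """
    return {
        "schemaVersion": 1,
        "label": label,
        "message": message,
        "color": color,
    }

# Serialize to a file that a Shields endpoint URL can point at.
badge_json = json.dumps(make_badge("copilot tokens", "1.2M"))
```

Pointing `https://img.shields.io/endpoint?url=...` at the hosted JSON then renders the badge.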
### Settings
Manage appearance (dark/light theme toggle), view system info, and erase the database.
## How It Works

1. Discovery — Finds all `chatSessions/*.jsonl` files in VS Code workspace storage
2. Parsing — Extracts session anchors, request metadata, and token counts from JSONL events
3. Ingestion — Writes structured events to a local DuckDB database with premium cost estimates
4. Aggregation — Pre-computes daily, per-session, and per-workspace summaries
5. Dashboard — Plotly Dash serves interactive charts and tables from the local database
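The parsing step can be sketched as a tolerant line-by-line JSONL reader. The `promptTokens`/`outputTokens` field names follow this page's Limitations section; the real on-disk schema varies between VS Code versions, so treat this as an assumption:

```python
import json
from pathlib import Path

def iter_token_events(path: Path):
    """Yield (prompt_tokens, output_tokens) pairs from a chat-session JSONL file.

    Lines that are blank, malformed, or carry no token fields are skipped,
    mirroring how cancelled requests can leave incomplete entries.
    """
    with path.open(encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # partially written or truncated line
            if "promptTokens" in event or "outputTokens" in event:
                yield event.get("promptTokens", 0), event.get("outputTokens", 0)
```

Each yielded pair can then be written straight into the events table during ingestion.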
## Data Source
The tool reads from VS Code's local storage:
| Platform | Path |
|---|---|
| Windows | %APPDATA%\Code\User\workspaceStorage\ |
| macOS | ~/Library/Application Support/Code/User/workspaceStorage/ |
| Linux | ~/.config/Code/User/workspaceStorage/ |
No data leaves your machine. Everything is processed and stored locally.
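The path table above can be resolved at runtime with a small platform switch. An illustrative sketch, not the tool's actual discovery code:

```python
import os
import sys
from pathlib import Path

def workspace_storage_dir() -> Path:
    """Return VS Code's workspaceStorage directory for the current platform."""
    if sys.platform == "win32":
        return Path(os.environ["APPDATA"]) / "Code" / "User" / "workspaceStorage"
    if sys.platform == "darwin":
        return (Path.home() / "Library" / "Application Support"
                / "Code" / "User" / "workspaceStorage")
    # Linux and other Unix-likes
    return Path.home() / ".config" / "Code" / "User" / "workspaceStorage"
```

Discovery then globs `*/chatSessions/*.jsonl` beneath this directory.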
## Database

The DuckDB database is stored at:
| Platform | Path |
|---|---|
| Windows | %LOCALAPPDATA%\copilot-usage\copilot_usage.duckdb |
| macOS | ~/Library/Application Support/copilot-usage/copilot_usage.duckdb |
| Linux | ~/.local/share/copilot-usage/copilot_usage.duckdb |
## Model Multipliers
Premium request estimates use GitHub's published multiplier table:
| Model | Multiplier |
|---|---|
| GPT-4.1, GPT-4o, Claude Sonnet 4, Gemini 2.5 Flash | 0× (included) |
| O4-mini, Gemini 2.5 Pro, Claude Sonnet 4 Thinking | 1× |
| Claude Opus 4.6, O3 | 3× |
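Applying the multiplier table is a straightforward lookup. A minimal sketch, assuming a dict keyed by lowercase model name (the names and values below mirror the snapshot above for illustration; unknown models fall back to a conservative 1×):

```python
# Illustrative snapshot of the multiplier table; the tool keeps its own
# copy in config.py and it may drift from GitHub's published pricing.
MULTIPLIERS = {
    "gpt-4o": 0.0,           # included, no premium requests
    "gemini-2.5-pro": 1.0,
    "o3": 3.0,
}

def premium_requests(model: str, request_count: int, default: float = 1.0) -> float:
    """Estimate premium request units for a number of requests to a model."""
    return request_count * MULTIPLIERS.get(model.lower(), default)

# premium_requests("o3", 10) -> 30.0; premium_requests("gpt-4o", 50) -> 0.0
```

Summing this quantity per workspace gives the per-repo premium estimates shown on the dashboard.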
## Project Structure

```text
src/copilot_usage/
├── __main__.py         # CLI entrypoint (interactive + classic)
├── config.py           # Paths, model multipliers
├── db.py               # DuckDB schema & connection
├── discovery.py        # JSONL file discovery
├── parser.py           # JSONL parsing
├── ingest.py           # Event ingestion
├── aggregator.py       # Pre-aggregation
├── pipeline.py         # Scan orchestrator
├── badges.py           # Shields.io badge export
├── logging.py          # Loguru logging config
├── tui.py              # Textual terminal dashboard
└── dashboard/
    ├── app.py          # Dash multi-page app
    ├── assets/         # CSS & favicon
    ├── pages/
    │   ├── overview.py     # KPI + charts page
    │   ├── explorer.py     # Search & filter page
    │   ├── pipeline.py     # Pipeline runner page
    │   ├── badges.py       # Badge generator page
    │   └── settings.py     # Settings, logs & DB management
    └── queries.py      # DB queries
```
## Limitations
Token counts shown by this tool are estimates and may not match GitHub's official billing figures. Key reasons:
| Factor | Impact |
|---|---|
| No official token API | GitHub does not expose per-request token counts through any public API. This tool relies on metadata embedded in VS Code's local JSONL session files, which is not guaranteed to be complete or stable. |
| Missing token fields | Some JSONL result entries lack promptTokens / outputTokens entirely (e.g. cancelled requests, certain agent-mode responses). When missing, the tool falls back to estimating from the raw text using tiktoken's cl100k_base encoding — or a ~4 chars/token heuristic if tiktoken is unavailable. |
| Tokenizer mismatch | Different models use different tokenizers. The tool always uses cl100k_base (GPT-4 family) for estimation, which may over- or under-count for Claude, Gemini, or newer OpenAI models. |
| System prompt & context not visible | VS Code injects system prompts, file context, and retrieval-augmented content before sending to the model. These hidden tokens are counted by GitHub but are not recorded in the local session files, so prompt token counts are typically lower than actual. |
| Tool-call overhead | Agentic requests with multiple tool-call rounds accumulate tokens across rounds. The JSONL files report the final totals, but intermediate round data may be incomplete. |
| Premium multiplier estimates | The cost multiplier table is a manually maintained snapshot. If GitHub changes pricing or introduces new models, estimates may be stale until the table is updated. |
| Legacy JSON files | Older VS Code versions stored sessions as plain JSON instead of JSONL. Token counts are always estimated from text content for these files. |
**Bottom line:** Use the numbers for relative comparisons and trend analysis across your projects — not as an exact billing reconciliation.
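The fallback estimation described under "Missing token fields" can be sketched as follows — prefer tiktoken's `cl100k_base` encoding, and degrade to the ~4 characters-per-token heuristic when tiktoken is unavailable (an illustrative sketch, not the tool's actual code):

```python
def estimate_tokens(text: str) -> int:
    """Estimate a token count for raw text.

    Uses tiktoken's cl100k_base encoding when installed; otherwise falls
    back to the rough 4-characters-per-token heuristic. Both are only
    approximations for non-GPT-4-family models.
    """
    try:
        import tiktoken
        return len(tiktoken.get_encoding("cl100k_base").encode(text))
    except Exception:
        return max(1, len(text) // 4)
```

This is exactly why the counts here should be read as trends rather than billing figures: two approximations stack on top of an already incomplete data source.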
## Requirements
- Python ≥ 3.10
- VS Code with GitHub Copilot Chat extension
## License
Apache 2.0 — see LICENSE.