# CodeStat · AI Code Metrics

Quantify how much AI actually contributes to your codebase.

CodeStat is a local metrics tool that analyzes how you use AI coding assistants:
how many lines are generated by AI, how many are kept, and how this evolves over time.
Chinese documentation: see [README.zh-CN.md](README.zh-CN.md).
## Features
- **Global dashboard for all data**
  - AI-generated lines, adopted lines, adoption & generation rates
  - File count, session count, quick bar-chart overview
- **Multi-dimension queries**
  - By file: see how much of a file comes from AI and how much you kept
  - By session: analyze one coding session with detailed diff lines
  - By project: aggregate metrics for an entire repository
- **Agent / model comparison**
  - Compare multiple sessions (agents / models / settings) side by side
  - See which one actually produces more adopted code instead of just more tokens
- **Local-first & privacy-friendly**
  - All metrics are computed locally from your own diffs
  - No source code or prompts are sent to any remote service
- **Nice CLI UX**
  - Rich-based tables & colors, arrow-key navigation
  - Minimal but informative header (MCP status + repo info)
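To make the dashboard numbers concrete, here is a minimal sketch of how adoption and generation rates relate to the line counts above. This is not CodeStat's actual data model; the `SessionStats` class and its field names are hypothetical, assuming the tool counts AI-generated lines in a session's diff and how many of them survive in the final files:

```python
from dataclasses import dataclass

@dataclass
class SessionStats:
    # Hypothetical counters; CodeStat's real data model may differ.
    total_lines: int    # lines changed in the session's diff
    ai_generated: int   # lines produced by the AI assistant
    ai_adopted: int     # AI lines still present after your edits

    @property
    def generation_rate(self) -> float:
        """Share of changed lines that came from AI."""
        return self.ai_generated / self.total_lines if self.total_lines else 0.0

    @property
    def adoption_rate(self) -> float:
        """Share of AI-generated lines that were actually kept."""
        return self.ai_adopted / self.ai_generated if self.ai_generated else 0.0

stats = SessionStats(total_lines=200, ai_generated=120, ai_adopted=90)
print(f"generation rate: {stats.generation_rate:.0%}")  # 60%
print(f"adoption rate:   {stats.adoption_rate:.0%}")    # 75%
```

The key distinction is that a high generation rate with a low adoption rate means the assistant writes a lot that you later throw away.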
## Demo

TODO: add real screenshots / GIFs from your terminal.

- **Global dashboard** (insert GIF or screenshot here)
- **Session metrics with diff lines** (insert GIF or screenshot here)
## Quickstart

### Install

```bash
git clone https://github.com/2hangchen/CodeStat.git
cd CodeStat
pip install -r requirements.txt
```

Once published to PyPI, you can alternatively run:

```bash
pip install codestat-ai
```
### Start the CLI

```bash
python .\cli\main.py
```

Use ↑/↓ to move and Enter to confirm. Choose "📈 Global Dashboard (All Data)" to see an overview of your local metrics.
## Typical Workflows

- **Measure your own AI usage**
  - Record one or more coding sessions with your IDE + MCP server
  - Run CodeStat and inspect:
    - AI-generated vs adopted lines
    - Which files receive the most AI help
- **Compare agents / models / prompts**
  - Map different sessions to different agents / models
  - Use Compare Agents to get a per-session comparison table
- **Project-level health check**
  - For a given repo, run project metrics to see:
    - Where AI contributes the most
    - Whether AI-generated code is actually being kept
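The comparison workflow boils down to ranking sessions by adopted lines rather than raw output. A minimal sketch, assuming each session has been reduced to an `(agent, generated, adopted)` tuple; the agent names and numbers here are illustrative, not real CodeStat output:

```python
# Illustrative session summaries: (agent, AI-generated lines, adopted lines).
sessions = [
    ("agent-a", 500, 180),
    ("agent-b", 320, 240),
    ("agent-c", 410, 205),
]

# Rank by adopted lines: more kept code beats more raw output.
ranked = sorted(sessions, key=lambda s: s[2], reverse=True)

print(f"{'agent':<10}{'generated':>10}{'adopted':>10}{'adoption':>10}")
for agent, generated, adopted in ranked:
    print(f"{agent:<10}{generated:>10}{adopted:>10}{adopted / generated:>10.0%}")
```

In this made-up data, agent-b generates the fewest lines but ranks first, which is exactly the kind of result the side-by-side comparison is meant to surface.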