MySQL Awesome Stats Collector (MASC)

A lightweight, self-hosted MySQL diagnostics tool for DevOps teams. Collect, visualize, and compare MySQL diagnostic data across multiple hosts — without agents, cloud dependencies, or complex setup.
Features • Quick Start • Configuration • Usage • Screenshots
✨ Features
📊 Collect Diagnostics
Run diagnostic commands across multiple MySQL hosts in parallel:
- `SHOW ENGINE INNODB STATUS` — Buffer pool, transactions, locks, I/O
- `SHOW GLOBAL STATUS` — Server metrics and counters
- `SHOW FULL PROCESSLIST` — Active queries and connections
- `SHOW GLOBAL VARIABLES` — Configuration values
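As a sketch, the four commands above can be issued over a single read-only connection with PyMySQL (the driver MASC uses). The connection details are placeholders and this is not the project's actual collector code:

```python
# Hedged sketch: running the four diagnostic commands over one
# read-only PyMySQL connection. Host/user values are placeholders.
DIAGNOSTIC_QUERIES = {
    "innodb_status": "SHOW ENGINE INNODB STATUS",
    "global_status": "SHOW GLOBAL STATUS",
    "processlist": "SHOW FULL PROCESSLIST",
    "variables": "SHOW GLOBAL VARIABLES",
}

def collect(host, port, user, password):
    # Import deferred so the sketch reads without PyMySQL installed.
    import pymysql
    conn = pymysql.connect(host=host, port=port, user=user, password=password,
                           cursorclass=pymysql.cursors.DictCursor)
    results = {}
    try:
        with conn.cursor() as cur:
            for name, query in DIAGNOSTIC_QUERIES.items():
                cur.execute(query)
                results[name] = cur.fetchall()
    finally:
        conn.close()
    return results
```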
🔍 Rich Visualization
- InnoDB Status — Parsed sections with key metrics dashboard (hit rate, dirty pages, transactions)
- Global Status — Searchable table with human-readable formatting (GB, millions, etc.)
- Processlist — Filterable, sortable table with query search
- Config Variables — Important settings with health indicators (🟢🟡🔴)
⚡ Compare Jobs
Compare two collection runs side-by-side:
- Numeric counter diffs (threads, locks, temp tables)
- Buffer pool comparison (size, used, hit ratio)
- Processlist summary changes
- Configuration changes highlighted
- InnoDB text diff with +/- lines
⏰ Scheduled Collections (Crons)
Automate diagnostic collection:
- Create scheduled jobs at specified intervals (15m, 30m, 1h, 6h, 24h, custom)
- Select which hosts to include per schedule
- Pause/resume schedules anytime
- Run schedules manually on-demand
- Track run history and next scheduled time
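For illustration, the preset intervals above map cleanly to seconds. The parser below is a hypothetical sketch (format assumed from the presets listed), not MASC's scheduler code, and it does not cover the "custom" option:

```python
# Hedged sketch: converting interval strings like "15m" or "6h" to seconds.
UNITS = {"m": 60, "h": 3600}  # minutes, hours

def interval_seconds(spec):
    value, unit = int(spec[:-1]), spec[-1]
    if unit not in UNITS:
        raise ValueError(f"unsupported interval unit: {unit!r}")
    return value * UNITS[unit]
```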
📊 Connection Analysis
Detailed connection breakdown:
- By User — Connections grouped by MySQL user
- By IP — Connections grouped by source IP address
- By IP + User — Combined view with sortable columns
- Active (Query), Sleeping, Other counts per group
- Click to filter processlist by user
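The grouping above can be sketched in a few lines over parsed processlist rows. MySQL reports the `Host` column as `host:port` for TCP clients, so the source IP is the part before the last colon; the function below is illustrative, not MASC's implementation:

```python
from collections import Counter

def group_connections(processlist):
    """Count connections per user and per source IP from SHOW FULL PROCESSLIST rows."""
    by_user, by_ip = Counter(), Counter()
    for row in processlist:
        by_user[row.get("User", "?")] += 1
        # MySQL's Host column is "host:port"; strip the ephemeral port.
        by_ip[(row.get("Host") or "").rsplit(":", 1)[0]] += 1
    return by_user, by_ip
```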
🎯 DevOps-Friendly
- No agents — Uses PyMySQL Python package
- No cloud — 100% self-hosted, runs anywhere
- No database writes — Read-only MySQL access
- Job-based — Track collections over time with optional naming
- Parallel execution — Fast collection across hosts
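The parallel fan-out can be pictured with a standard thread pool. `collect_host` here is a placeholder for the real per-host diagnostic run; this is a sketch of the pattern, not the project's code:

```python
from concurrent.futures import ThreadPoolExecutor

def collect_host(host_cfg):
    # Placeholder for the real per-host diagnostic run.
    return {"id": host_cfg["id"], "status": "ok"}

def collect_all(hosts, max_workers=8):
    # Fan out across hosts; pool.map returns results in input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(collect_host, hosts))
```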
🚀 Quick Start
Prerequisites
- Python 3.10+
- Read-only MySQL user on target hosts
Installation
Option 1: Install from PyPI (Recommended)
```bash
# Install the package
pip install mysql-awesome-stats-collector

# Create a project directory
mkdir my-masc-project && cd my-masc-project

# Create hosts configuration
cat > hosts.yaml << 'EOF'
hosts:
  - id: "primary"
    label: "Primary DB"
    host: "your-db-host.example.com"
    port: 3306
    user: "observer"
    password: "your-password"
EOF

# Run the server
masc --port 8000
```
Option 2: Install from Source
```bash
# Clone the repository
git clone https://github.com/k4kratik/mysql-awesome-stats-collector.git
cd mysql-awesome-stats-collector

# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create virtual environment and install dependencies
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
uv pip install -e .

# Configure your hosts
cp hosts.yaml.example hosts.yaml
# Edit hosts.yaml with your MySQL hosts

# Run the server
masc --host 0.0.0.0 --port 8000
```
Open http://localhost:8000 in your browser.
CLI Usage
```bash
# Start server on default port (8000)
masc

# Start on a custom port
masc --port 9000

# Listen on all interfaces
masc --host 0.0.0.0

# Enable auto-reload for development
masc --reload

# Use a custom hosts file
masc --hosts-file /path/to/hosts.yaml

# Show help
masc --help

# Show version
masc --version
```
Environment Variables
| Variable | Description | Default |
|---|---|---|
| `MASC_HOSTS_FILE` | Path to `hosts.yaml` | `./hosts.yaml` |
| `MASC_RUNS_DIR` | Directory for job outputs | `./runs` |
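The lookup-with-fallback behavior in the table can be sketched with `os.environ.get`; the helper name here is illustrative, not part of MASC's API:

```python
import os

def masc_paths(env=os.environ):
    """Resolve config paths, falling back to the defaults in the table above."""
    return {
        "hosts_file": env.get("MASC_HOSTS_FILE", "./hosts.yaml"),
        "runs_dir": env.get("MASC_RUNS_DIR", "./runs"),
    }
```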
Running as a Daemon (Production)
For production use on a remote server:
```bash
# Using nohup (simple)
nohup masc --host 0.0.0.0 --port 8000 > masc.log 2>&1 &

# Using screen (interactive)
screen -S masc
masc --host 0.0.0.0 --port 8000
# Press Ctrl+A, D to detach; screen -r masc to reattach

# Using systemd (recommended for production)
# Create /etc/systemd/system/masc.service:
#
# [Unit]
# Description=MySQL Awesome Stats Collector
# After=network.target
#
# [Service]
# Type=simple
# User=your-user
# WorkingDirectory=/path/to/masc
# ExecStart=/path/to/venv/bin/masc --host 0.0.0.0 --port 8000
# Restart=always
#
# [Install]
# WantedBy=multi-user.target

sudo systemctl daemon-reload
sudo systemctl enable masc
sudo systemctl start masc
```
⚙️ Configuration
hosts.yaml
Define your MySQL hosts in hosts.yaml:
```yaml
hosts:
  - id: "primary"
    label: "Production Primary"
    host: "db-primary.example.com"
    port: 3306
    user: "observer"
    password: "your-password"

  - id: "replica-1"
    label: "Read Replica 1"
    host: "db-replica-1.example.com"
    port: 3306
    user: "observer"
    password: "your-password"

  - id: "replica-2"
    label: "Read Replica 2"
    host: "db-replica-2.example.com"
    port: 3306
    user: "observer"
    password: "your-password"
```
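After loading the file (e.g. with `yaml.safe_load`), the resulting structure can be sanity-checked in plain Python. This validator is a hypothetical sketch using the field names from the example above, not MASC's own loader:

```python
# Hedged sketch: validating the dict parsed from hosts.yaml.
REQUIRED_FIELDS = {"id", "label", "host", "port", "user", "password"}

def validate_hosts(config):
    seen = set()
    for entry in config.get("hosts", []):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"host {entry.get('id', '?')} missing: {sorted(missing)}")
        if entry["id"] in seen:
            raise ValueError(f"duplicate host id: {entry['id']}")
        seen.add(entry["id"])
    return config["hosts"]
```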
MySQL User Permissions
Create a read-only user for MASC:
```sql
-- Create the monitoring user
CREATE USER 'masc_monitor'@'%' IDENTIFIED BY 'secure-password';

-- For SHOW ENGINE INNODB STATUS, SHOW PROCESSLIST, SHOW GLOBAL STATUS/VARIABLES
GRANT PROCESS ON *.* TO 'masc_monitor'@'%';

-- For SHOW REPLICA STATUS / SHOW SLAVE STATUS
GRANT REPLICATION CLIENT ON *.* TO 'masc_monitor'@'%';

-- For reading information_schema tables (hot tables, table sizes)
GRANT SELECT ON information_schema.* TO 'masc_monitor'@'%';

-- For performance_schema access (optional, for hot tables feature)
GRANT SELECT ON performance_schema.* TO 'masc_monitor'@'%';

FLUSH PRIVILEGES;
```
| Privilege | Purpose |
|---|---|
| `PROCESS` | InnoDB status, processlist, global status/variables |
| `REPLICATION CLIENT` | Replica/slave status |
| `SELECT` on `information_schema` | Table stats, hot tables analysis |
| `SELECT` on `performance_schema` | Hot tables feature (optional) |
⚠️ Security Note: Never use a user with write permissions. MASC only needs read access.
📖 Usage
1. Run a Collection
- Go to the Home page
- Optionally enter a Job Name (e.g., "Before deployment")
- Select one or more hosts
- Click Run Collection
The job runs in the background. You'll be redirected to the job detail page.
2. View Results
Each host shows tabs for:
- Raw Output — Complete command output with copy/download buttons
- InnoDB Status — Parsed sections with metrics dashboard
- Global Status — Searchable metrics with charts
- Processlist — Filterable query list with connection summary
- Config — Important variables with health indicators
- Replication — Replica lag and master status
- Health — InnoDB health analysis (deadlocks, waits)
3. Compare Jobs
- Go to Compare in the navigation
- Select Job A (baseline) and Job B (after)
- Click Compare
See what changed between runs:
- 🟢 Green = Decrease (usually good)
- 🔴 Red = Increase (watch out)
- Changed config values highlighted
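The counter diff above boils down to comparing two status snapshots key by key. The sketch below illustrates the idea over `SHOW GLOBAL STATUS` output (whose values arrive as strings); it is not MASC's comparison code:

```python
def diff_counters(before, after):
    """Numeric deltas between two SHOW GLOBAL STATUS snapshots."""
    deltas = {}
    for key in before.keys() & after.keys():
        try:
            b, a = float(before[key]), float(after[key])
        except (TypeError, ValueError):
            continue  # skip non-numeric values
        if a != b:
            deltas[key] = a - b
    return deltas
```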
📁 Project Structure
```
mysql-awesome-stats-collector/
├── app/
│   ├── main.py            # FastAPI routes
│   ├── cli.py             # CLI entry point
│   ├── db.py              # SQLite setup
│   ├── models.py          # SQLAlchemy models (Job, JobHost, CronJob)
│   ├── collector.py       # MySQL command execution
│   ├── parser.py          # Output parsing
│   ├── compare.py         # Job comparison logic
│   ├── scheduler.py       # Cron job scheduler
│   ├── utils.py           # Helper functions
│   └── templates/         # Jinja2 HTML templates
├── docs/
│   └── PUBLISHING.md      # PyPI publishing guide
├── runs/                  # Job output storage (gitignored)
├── hosts.yaml             # Host configuration (gitignored)
├── hosts.yaml.example     # Example configuration
├── observer.db            # SQLite metadata (gitignored)
├── pyproject.toml         # Dependencies & package config
├── CHANGELOG.txt          # Version history
├── LICENSE                # MIT License
└── README.md
```
Data Storage
- SQLite (`observer.db`) — Job metadata and cron schedules (IDs, timestamps, status)
- Filesystem (`runs/`) — All command outputs stored as files:

```
runs/job_<uuid>/<host_id>/
├── raw.txt              # Full command output
├── innodb.txt           # Extracted InnoDB status
├── global_status.json   # Parsed key/value pairs
├── processlist.json     # Parsed process list
├── config_vars.json     # Parsed variables
└── timing.json          # Per-command timing
```
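Writing into that layout is straightforward with `pathlib`. The helper below is an illustrative sketch covering two of the files shown above; the function name and signature are assumptions, not MASC's API:

```python
import json
from pathlib import Path

def write_host_outputs(runs_dir, job_id, host_id, raw_text, global_status):
    """Write a subset of the per-host files into the runs/ layout shown above."""
    host_dir = Path(runs_dir) / f"job_{job_id}" / host_id
    host_dir.mkdir(parents=True, exist_ok=True)
    (host_dir / "raw.txt").write_text(raw_text)
    (host_dir / "global_status.json").write_text(json.dumps(global_status, indent=2))
    return host_dir
```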
📸 Screenshots
Homepage
Select hosts and run diagnostics in parallel.
Job Detail - Host Overview
Single Host Homepage
View all hosts in a job with status and timing.
Hot Tables
InnoDB Status
Parsed InnoDB sections with key metrics dashboard.
Global Status
Searchable metrics with charts and human-readable formatting.
Processlist
Filterable, sortable active queries with pagination.
Config Variables
Important settings with health indicators (🟢🟡🔴).
Replication Status
Replica lag monitoring with master comparison.
Job Comparison
Compare two runs side-by-side with delta highlighting.
🛠️ Tech Stack
| Component | Technology |
|---|---|
| Backend | FastAPI |
| Database | SQLite + SQLAlchemy |
| Scheduler | APScheduler |
| Templates | Jinja2 |
| Styling | TailwindCSS (CDN) |
| Charts | Chart.js |
| Interactivity | Alpine.js |
| Package Manager | uv |
🔒 Security Considerations
- Passwords are stored in plain text in `hosts.yaml` — keep this file secure
- Never commit `hosts.yaml` to version control (it's gitignored by default)
- Use a read-only MySQL user with minimal permissions
- Passwords are passed via the `MYSQL_PWD` environment variable (not on the command line)
- No credentials are logged or exposed in the UI
🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
📋 Roadmap
- Environment variable support for hosts file
- Replication monitoring (replica lag, master comparison)
- PyPI package (`pip install mysql-awesome-stats-collector`)
- Scheduled collections (cron jobs)
- Buffer pool comparison between jobs
- Job re-run feature
- Connection summary by User/IP
- Hot tables analysis
- Environment variable support for passwords
- Export comparison reports (PDF/HTML)
- Alerting thresholds
- Query analysis tools
- Docker support
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
Built for DevOps teams who need quick MySQL diagnostics without the overhead of complex monitoring solutions.
Made with ❤️ for the MySQL community