
MySQL Awesome Stats Collector (MASC)


A lightweight, self-hosted MySQL diagnostics tool for DevOps teams.

Collect, visualize, and compare MySQL diagnostic data across multiple hosts — without agents, cloud dependencies, or complex setup.



✨ Features

📊 Collect Diagnostics

Run diagnostic commands across multiple MySQL hosts in parallel:

  • SHOW ENGINE INNODB STATUS — Buffer pool, transactions, locks, I/O
  • SHOW GLOBAL STATUS — Server metrics and counters
  • SHOW FULL PROCESSLIST — Active queries and connections
  • SHOW GLOBAL VARIABLES — Configuration values
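MASC issues these as plain SQL over per-host connections (PyMySQL) and collects hosts in parallel. A minimal sketch of the fan-out, where `run_query(host, sql) -> str` is a hypothetical stand-in for the real per-host query call:

```python
from concurrent.futures import ThreadPoolExecutor

# The diagnostic statements listed above
COMMANDS = [
    "SHOW ENGINE INNODB STATUS",
    "SHOW GLOBAL STATUS",
    "SHOW FULL PROCESSLIST",
    "SHOW GLOBAL VARIABLES",
]

def collect_host(host, run_query):
    # Run every diagnostic statement against one host, keyed by statement.
    return {sql: run_query(host, sql) for sql in COMMANDS}

def collect_all(hosts, run_query, max_workers=8):
    # Fan the per-host collection out across a thread pool.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {h: pool.submit(collect_host, h, run_query) for h in hosts}
        return {h: f.result() for h, f in futures.items()}
```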

🔍 Rich Visualization

  • InnoDB Status — Parsed sections with key metrics dashboard (hit rate, dirty pages, transactions)
  • Global Status — Searchable table with human-readable formatting (GB, millions, etc.)
  • Processlist — Filterable, sortable table with query search
  • Config Variables — Important settings with health indicators (🟢🟡🔴)
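The human-readable formatting mentioned above (GB, millions, etc.) amounts to scaling raw counters to a sensible unit. A sketch of one possible pair of helpers (illustrative, not MASC's actual implementation):

```python
def human_bytes(n: float) -> str:
    # Scale a byte count to the largest unit that keeps the value >= 1.
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if n < 1024 or unit == "TB":
            return f"{n:.1f} {unit}"
        n /= 1024

def human_count(n: float) -> str:
    # Abbreviate large counters: 1_500_000 -> "1.5M".
    for threshold, suffix in ((1e9, "B"), (1e6, "M"), (1e3, "K")):
        if n >= threshold:
            return f"{n / threshold:.1f}{suffix}"
    return str(int(n))
```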

Compare Jobs

Compare two collection runs side-by-side:

  • Numeric counter diffs (threads, locks, temp tables)
  • Buffer pool comparison (size, used, hit ratio)
  • Processlist summary changes
  • Configuration changes highlighted
  • InnoDB text diff with +/- lines
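The numeric counter diff can be pictured as a key-wise subtraction over the two parsed SHOW GLOBAL STATUS snapshots. A sketch (a hypothetical helper, not the project's compare.py):

```python
def diff_counters(before: dict, after: dict) -> dict:
    # Report the delta for every numeric counter present in both snapshots.
    deltas = {}
    for key, new in after.items():
        old = before.get(key)
        try:
            delta = float(new) - float(old)
        except (TypeError, ValueError):
            continue  # skip non-numeric values (e.g. version strings)
        if delta != 0:
            deltas[key] = delta
    return deltas
```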

Scheduled Collections (Crons)

Automate diagnostic collection:

  • Create scheduled jobs at specified intervals (15m, 30m, 1h, 6h, 24h, custom)
  • Select which hosts to include per schedule
  • Pause/resume schedules anytime
  • Run schedules manually on-demand
  • Track run history and next scheduled time
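Interval specs like 15m or 6h map naturally onto seconds. One way such a spec could be parsed (illustrative only; the actual scheduler may differ):

```python
def parse_interval(spec: str) -> int:
    # "15m" -> 900, "6h" -> 21600, "24h" -> 86400
    units = {"m": 60, "h": 3600, "d": 86400}
    value, unit = spec[:-1], spec[-1]
    if unit not in units or not value.isdigit():
        raise ValueError(f"bad interval spec: {spec!r}")
    return int(value) * units[unit]
```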

📊 Connection Analysis

Detailed connection breakdown:

  • By User — Connections grouped by MySQL user
  • By IP — Connections grouped by source IP address
  • By IP + User — Combined view with sortable columns
  • Active (Query), Sleeping, Other counts per group
  • Click to filter processlist by user
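The grouping is a simple aggregation over parsed processlist rows. A sketch using the standard processlist columns (User, Host as ip:port, Command), not MASC's actual code:

```python
from collections import defaultdict

def summarize_connections(processlist):
    # Bucket each connection by (ip, user) and count its state.
    summary = defaultdict(lambda: {"active": 0, "sleeping": 0, "other": 0})
    for row in processlist:
        ip = row["Host"].split(":")[0]  # "10.0.0.5:51234" -> "10.0.0.5"
        key = (ip, row["User"])
        if row["Command"] == "Query":
            summary[key]["active"] += 1
        elif row["Command"] == "Sleep":
            summary[key]["sleeping"] += 1
        else:
            summary[key]["other"] += 1
    return dict(summary)
```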

🎯 DevOps-Friendly

  • No agents — connects directly using the PyMySQL client library
  • No cloud — 100% self-hosted, runs anywhere
  • No database writes — Read-only MySQL access
  • Job-based — Track collections over time with optional naming
  • Parallel execution — Fast collection across hosts

🚀 Quick Start

Prerequisites

  • Python 3.10+
  • Read-only MySQL user on target hosts

Installation

Option 1: Install from PyPI (Recommended)

# Install the package
pip install mysql-awesome-stats-collector

# Create a project directory
mkdir my-masc-project && cd my-masc-project

# Create hosts configuration
cat > hosts.yaml << 'EOF'
hosts:
  - id: "primary"
    label: "Primary DB"
    host: "your-db-host.example.com"
    port: 3306
    user: "observer"
    password: "your-password"
EOF

# Run the server
masc --port 8000

Option 2: Install from Source

# Clone the repository
git clone https://github.com/k4kratik/mysql-awesome-stats-collector.git
cd mysql-awesome-stats-collector

# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create virtual environment and install dependencies
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
uv pip install -e .

# Configure your hosts
cp hosts.yaml.example hosts.yaml
# Edit hosts.yaml with your MySQL hosts

# Run the server
masc --host 0.0.0.0 --port 8000

Open http://localhost:8000 in your browser.

CLI Usage

# Start server on default port (8000)
masc

# Start on a custom port
masc --port 9000

# Listen on all interfaces
masc --host 0.0.0.0

# Enable auto-reload for development
masc --reload

# Use a custom hosts file
masc --hosts-file /path/to/hosts.yaml

# Show help
masc --help

# Show version
masc --version

Environment Variables

Variable         Description                Default
MASC_HOSTS_FILE  Path to hosts.yaml         ./hosts.yaml
MASC_RUNS_DIR    Directory for job outputs  ./runs
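Resolution follows the usual env-var-with-default pattern. A sketch of how the two variables could be read (illustrative, not the package's actual startup code):

```python
import os
from pathlib import Path

def resolve_paths(env=None):
    # Fall back to the documented defaults when the variables are unset.
    env = os.environ if env is None else env
    hosts_file = Path(env.get("MASC_HOSTS_FILE", "./hosts.yaml"))
    runs_dir = Path(env.get("MASC_RUNS_DIR", "./runs"))
    return hosts_file, runs_dir
```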

Running as a Daemon (Production)

For production use on a remote server:

# Using nohup (simple)
nohup masc --host 0.0.0.0 --port 8000 > masc.log 2>&1 &

# Using screen (interactive)
screen -S masc
masc --host 0.0.0.0 --port 8000
# Press Ctrl+A, D to detach
# screen -r masc to reattach

# Using systemd (recommended for production)
# Create /etc/systemd/system/masc.service:
# [Unit]
# Description=MySQL Awesome Stats Collector
# After=network.target
#
# [Service]
# Type=simple
# User=your-user
# WorkingDirectory=/path/to/masc
# ExecStart=/path/to/venv/bin/masc --host 0.0.0.0 --port 8000
# Restart=always
#
# [Install]
# WantedBy=multi-user.target

sudo systemctl daemon-reload
sudo systemctl enable masc
sudo systemctl start masc

⚙️ Configuration

hosts.yaml

Define your MySQL hosts in hosts.yaml:

hosts:
  - id: "primary"
    label: "Production Primary"
    host: "db-primary.example.com"
    port: 3306
    user: "observer"
    password: "your-password"

  - id: "replica-1"
    label: "Read Replica 1"
    host: "db-replica-1.example.com"
    port: 3306
    user: "observer"
    password: "your-password"

  - id: "replica-2"
    label: "Read Replica 2"
    host: "db-replica-2.example.com"
    port: 3306
    user: "observer"
    password: "your-password"

MySQL User Permissions

Create a read-only user for MASC:

-- Create the monitoring user
CREATE USER 'masc_monitor'@'%' IDENTIFIED BY 'secure-password';

-- For SHOW ENGINE INNODB STATUS, SHOW PROCESSLIST, SHOW GLOBAL STATUS/VARIABLES
GRANT PROCESS ON *.* TO 'masc_monitor'@'%';

-- For SHOW REPLICA STATUS / SHOW SLAVE STATUS
GRANT REPLICATION CLIENT ON *.* TO 'masc_monitor'@'%';

-- For reading information_schema tables (hot tables, table sizes)
GRANT SELECT ON information_schema.* TO 'masc_monitor'@'%';

-- For performance_schema access (optional, for hot tables feature)
GRANT SELECT ON performance_schema.* TO 'masc_monitor'@'%';

FLUSH PRIVILEGES;

Privilege                     Purpose
PROCESS                       InnoDB status, processlist, global status/variables
REPLICATION CLIENT            Replica/slave status
SELECT on information_schema  Table stats, hot tables analysis
SELECT on performance_schema  Hot tables feature (optional)

⚠️ Security Note: Never use a user with write permissions. MASC only needs read access.


📖 Usage

1. Run a Collection

  1. Go to the Home page
  2. Optionally enter a Job Name (e.g., "Before deployment")
  3. Select one or more hosts
  4. Click Run Collection

The job runs in the background. You'll be redirected to the job detail page.

2. View Results

Each host shows tabs for:

  • Raw Output — Complete command output with copy/download buttons
  • InnoDB Status — Parsed sections with metrics dashboard
  • Global Status — Searchable metrics with charts
  • Processlist — Filterable query list with connection summary
  • Config — Important variables with health indicators
  • Replication — Replica lag and master status
  • Health — InnoDB health analysis (deadlocks, waits)

3. Compare Jobs

  1. Go to Compare in the navigation
  2. Select Job A (baseline) and Job B (after)
  3. Click Compare

See what changed between runs:

  • 🟢 Green = Decrease (usually good)
  • 🔴 Red = Increase (watch out)
  • Changed config values highlighted

📁 Project Structure

mysql-awesome-stats-collector/
├── app/
│   ├── main.py          # FastAPI routes
│   ├── cli.py           # CLI entry point
│   ├── db.py            # SQLite setup
│   ├── models.py        # SQLAlchemy models (Job, JobHost, CronJob)
│   ├── collector.py     # MySQL command execution
│   ├── parser.py        # Output parsing
│   ├── compare.py       # Job comparison logic
│   ├── scheduler.py     # Cron job scheduler
│   ├── utils.py         # Helper functions
│   └── templates/       # Jinja2 HTML templates
├── docs/
│   └── PUBLISHING.md    # PyPI publishing guide
├── runs/                # Job output storage (gitignored)
├── hosts.yaml           # Host configuration (gitignored)
├── hosts.yaml.example   # Example configuration
├── observer.db          # SQLite metadata (gitignored)
├── pyproject.toml       # Dependencies & package config
├── CHANGELOG.txt        # Version history
├── LICENSE              # MIT License
└── README.md

Data Storage

  • SQLite (observer.db) — Job metadata, cron schedules (IDs, timestamps, status)

  • Filesystem (runs/) — All command outputs stored as files:

    runs/job_<uuid>/<host_id>/
    ├── raw.txt              # Full command output
    ├── innodb.txt           # Extracted InnoDB status
    ├── global_status.json   # Parsed key/value pairs
    ├── processlist.json     # Parsed process list
    ├── config_vars.json     # Parsed variables
    └── timing.json          # Per-command timing
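Because outputs are plain files, a job's results can be post-processed with nothing but the standard library. A sketch of loading one host's parsed outputs, following the layout above:

```python
import json
from pathlib import Path

def load_host_results(runs_dir, job_id: str, host_id: str) -> dict:
    # Read the raw output and parsed JSON artifacts for one host in one job.
    base = Path(runs_dir) / f"job_{job_id}" / host_id
    results = {"raw": (base / "raw.txt").read_text()}
    for name in ("global_status", "processlist", "config_vars", "timing"):
        results[name] = json.loads((base / f"{name}.json").read_text())
    return results
```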
    

📸 Screenshots

Homepage

Select hosts and run diagnostics in parallel.

Job Detail - Host Overview

View all hosts in a job with status and timing.

Single Host Homepage

Hot Tables

InnoDB Status

Parsed InnoDB sections with key metrics dashboard.

Global Status

Searchable metrics with charts and human-readable formatting.

Processlist

Filterable, sortable active queries with pagination.

Config Variables

Important settings with health indicators (🟢🟡🔴).

Replication Status

Replica lag monitoring with master comparison.

Job Comparison

Compare two runs side-by-side with delta highlighting.


🛠️ Tech Stack

Component        Technology
Backend          FastAPI
Database         SQLite + SQLAlchemy
Scheduler        APScheduler
Templates        Jinja2
Styling          TailwindCSS (CDN)
Charts           Chart.js
Interactivity    Alpine.js
Package Manager  uv

🔒 Security Considerations

  • Passwords are stored in plain text in hosts.yaml — keep this file secure
  • Never commit hosts.yaml to version control (it's gitignored by default)
  • Use a read-only MySQL user with minimal permissions
  • Passwords are passed via MYSQL_PWD environment variable (not command line)
  • No credentials are logged or exposed in the UI

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📋 Roadmap

  • Environment variable support for hosts file
  • Replication monitoring (replica lag, master comparison)
  • PyPI package (pip install mysql-awesome-stats-collector)
  • Scheduled collections (Cron jobs)
  • Buffer pool comparison between jobs
  • Job re-run feature
  • Connection summary by User/IP
  • Hot tables analysis
  • Environment variable support for passwords
  • Export comparison reports (PDF/HTML)
  • Alerting thresholds
  • Query analysis tools
  • Docker support

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


🙏 Acknowledgments

Built for DevOps teams who need quick MySQL diagnostics without the overhead of complex monitoring solutions.



Made with ❤️ for the MySQL community
