An MCP server for AI-powered PostgreSQL performance tuning, with HypoPG support


PostgreSQL Performance Tuning MCP


A Model Context Protocol (MCP) server that provides AI-powered PostgreSQL performance tuning capabilities. This server helps identify slow queries, recommend optimal indexes, analyze execution plans, and leverage HypoPG for hypothetical index testing.

Features

Query Analysis

  • Retrieve slow queries from pg_stat_statements with detailed statistics
  • Analyze query execution plans with EXPLAIN and EXPLAIN ANALYZE
  • Identify performance bottlenecks with automated plan analysis
  • Monitor active queries and detect long-running transactions

Index Tuning

  • AI-powered index recommendations based on query workload analysis
  • Hypothetical index testing with HypoPG extension (no disk usage)
  • Find unused and duplicate indexes for cleanup
  • Estimate index sizes before creation
  • Test query plans with proposed indexes before implementing

Database Health

  • Comprehensive health scoring with multiple checks
  • Connection utilization monitoring
  • Cache hit ratio analysis (buffer and index)
  • Lock contention detection
  • Vacuum health and transaction ID wraparound monitoring
  • Replication lag monitoring
  • Background writer and checkpoint analysis

Configuration Analysis

  • Review PostgreSQL settings by category
  • Get recommendations for memory, checkpoint, WAL, autovacuum, and connection settings
  • Identify suboptimal configurations

MCP Prompts & Resources

  • Pre-defined prompt templates for common tuning workflows
  • Dynamic resources for table stats, index info, and health checks
  • Comprehensive documentation resources

Installation

Standard Installation (for MCP clients like Claude Desktop)

pip install pgtuner_mcp

Or using uv:

uv pip install pgtuner_mcp

Manual Installation

git clone https://github.com/isdaniel/pgtuner_mcp.git
cd pgtuner_mcp
pip install -e .

Configuration

Environment Variables

  • DATABASE_URI (required): PostgreSQL connection string

Connection String Format: postgresql://user:password@host:port/database

MCP Client Configuration

Add to your cline_mcp_settings.json or Claude Desktop config:

{
  "mcpServers": {
    "pgtuner_mcp": {
      "command": "python",
      "args": ["-m", "pgtuner_mcp"],
      "env": {
        "DATABASE_URI": "postgresql://user:password@localhost:5432/mydb"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}

Or, for Streamable HTTP mode:

{
  "mcpServers": {
    "pgtuner_mcp": {
      "type": "http",
      "url": "http://localhost:8080/mcp"
    }
  }
}

Server Modes

1. Standard MCP Mode (Default)

# Default mode (stdio)
python -m pgtuner_mcp

# Explicitly specify stdio mode
python -m pgtuner_mcp --mode stdio

2. HTTP SSE Mode (Legacy Web Applications)

# Start SSE server on default host/port (0.0.0.0:8080)
python -m pgtuner_mcp --mode sse

# Specify custom host and port
python -m pgtuner_mcp --mode sse --host localhost --port 3000

# Enable debug mode
python -m pgtuner_mcp --mode sse --debug

3. Streamable HTTP Mode (Modern MCP Protocol - Recommended)

The streamable-http mode implements the modern MCP Streamable HTTP protocol with a single /mcp endpoint. It supports both stateful (session-based) and stateless modes.

# Start Streamable HTTP server in stateful mode (default)
python -m pgtuner_mcp --mode streamable-http

# Start in stateless mode (fresh transport per request)
python -m pgtuner_mcp --mode streamable-http --stateless

# Specify custom host and port
python -m pgtuner_mcp --mode streamable-http --host localhost --port 8080

# Enable debug mode
python -m pgtuner_mcp --mode streamable-http --debug

Stateful vs Stateless:

  • Stateful (default): Maintains session state across requests using mcp-session-id header. Ideal for long-running interactions.
  • Stateless: Creates a fresh transport for each request with no session tracking. Ideal for serverless deployments or simple request/response patterns.

Endpoint: http://{host}:{port}/mcp
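
For example, a raw client handshake against this endpoint might look like the following. This is a minimal sketch assuming the standard MCP Streamable HTTP JSON-RPC flow; the protocol version string and the use of the requests library are illustrative, not part of pgtuner_mcp itself.

import requests

MCP_URL = "http://localhost:8080/mcp"
HEADERS = {
    "Content-Type": "application/json",
    # Streamable HTTP servers may answer with plain JSON or an SSE stream
    "Accept": "application/json, text/event-stream",
}

# 1. Initialize; in stateful mode the response carries an mcp-session-id
#    header that all later requests must echo back.
init = requests.post(MCP_URL, headers=HEADERS, json={
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",   # illustrative version string
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
})
session = {"mcp-session-id": init.headers.get("mcp-session-id", "")}

# 2. Acknowledge initialization, then call a method such as tools/list.
requests.post(MCP_URL, headers={**HEADERS, **session},
              json={"jsonrpc": "2.0", "method": "notifications/initialized"})
tools = requests.post(MCP_URL, headers={**HEADERS, **session},
                      json={"jsonrpc": "2.0", "id": 2, "method": "tools/list"})
print(tools.status_code, tools.headers.get("content-type"))

In stateless mode the mcp-session-id header is simply absent and each request stands alone.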

Available Tools

Note: All tools focus exclusively on user/application tables and indexes. System catalog tables (pg_catalog, information_schema, pg_toast) are automatically excluded from all analyses.

Performance Analysis Tools

  • get_slow_queries: Retrieve slow queries from pg_stat_statements with detailed stats (total time, mean time, calls, cache hit ratio). Excludes system catalog queries.
  • analyze_query: Analyze a query's execution plan with EXPLAIN ANALYZE, including automated issue detection.
  • get_table_stats: Get detailed table statistics, including size, row counts, dead tuples, and access patterns.

Index Tuning Tools

  • get_index_recommendations: AI-powered index recommendations based on query workload analysis.
  • explain_with_indexes: Run EXPLAIN with hypothetical indexes to test improvements without creating real indexes.
  • manage_hypothetical_indexes: Create, list, drop, or reset HypoPG hypothetical indexes; also supports hiding/unhiding existing indexes.
  • find_unused_indexes: Find unused and duplicate indexes that can be safely dropped.

Database Health Tools

  • check_database_health: Comprehensive health check with scoring (connections, cache, locks, replication, wraparound, disk, checkpoints).
  • get_active_queries: Monitor active queries; find long-running transactions and blocked queries. Excludes system processes by default.
  • analyze_wait_events: Analyze wait events to identify I/O, lock, or CPU bottlenecks. Focuses on client backend processes.
  • review_settings: Review PostgreSQL settings by category with optimization recommendations.

Bloat Detection Tools (pgstattuple)

  • analyze_table_bloat: Analyze table bloat using the pgstattuple extension. Shows dead tuple counts, free space, and wasted-space percentage.
  • analyze_index_bloat: Analyze B-tree index bloat using pgstatindex. Shows leaf density, fragmentation, and empty/deleted pages. Also supports GIN and Hash indexes.
  • get_bloat_summary: Get a comprehensive overview of database bloat, with top bloated tables/indexes, total reclaimable space, and priority maintenance actions.

Tool Parameters

get_slow_queries

  • limit: Maximum queries to return (default: 10)
  • min_calls: Minimum call count filter (default: 1)
  • min_total_time_ms: Minimum total execution time filter
  • order_by: Sort by total_time, mean_time, calls, or rows

analyze_query

  • query (required): SQL query to analyze
  • analyze: Execute query with EXPLAIN ANALYZE (default: true)
  • buffers: Include buffer statistics (default: true)
  • format: Output format - json, text, yaml, xml

get_index_recommendations

  • workload_queries: Optional list of specific queries to analyze
  • max_recommendations: Maximum recommendations (default: 10)
  • min_improvement_percent: Minimum improvement threshold (default: 10%)
  • include_hypothetical_testing: Test with HypoPG (default: true)
  • target_tables: Focus on specific tables

check_database_health

  • include_recommendations: Include actionable recommendations (default: true)
  • verbose: Include detailed statistics (default: false)

analyze_table_bloat

  • table_name: Name of a specific table to analyze (optional)
  • schema_name: Schema name (default: public)
  • use_approx: Use pgstattuple_approx for faster analysis on large tables (default: false)
  • min_table_size_gb: Minimum table size in GB to include in schema-wide scan (default: 5)
  • include_toast: Include TOAST table analysis (default: false)

analyze_index_bloat

  • index_name: Name of a specific index to analyze (optional)
  • table_name: Analyze all indexes on this table (optional)
  • schema_name: Schema name (default: public)
  • min_index_size_gb: Minimum index size in GB to include (default: 5)
  • min_bloat_percent: Only show indexes with bloat above this percentage (default: 20)

get_bloat_summary

  • schema_name: Schema to analyze (default: public)
  • top_n: Number of top bloated objects to show (default: 10)
  • min_size_gb: Minimum object size in GB to include (default: 5)
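
For example, the bloat tools can be called in the same async style used in the Example Usage section below. This is a sketch; the orders table name is illustrative.

# Analyze bloat for one table, using the faster approximate scan
bloat = await analyze_table_bloat(
    table_name="orders",      # illustrative table name
    schema_name="public",
    use_approx=True,
)

# Schema-wide overview of the most bloated objects
summary = await get_bloat_summary(schema_name="public", top_n=5)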

MCP Prompts

The server includes pre-defined prompt templates for guided tuning sessions:

  • diagnose_slow_queries: Systematic slow query investigation workflow
  • index_optimization: Comprehensive index analysis and cleanup
  • health_check: Full database health assessment
  • query_tuning: Optimize a specific SQL query
  • performance_baseline: Generate a baseline report for comparison
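
With the official MCP Python SDK, a client can fetch one of these templates by name. This is a sketch assuming an already-initialized ClientSession, as in the resource example below.

# Retrieve the prompt messages for a guided tuning session
result = await session.get_prompt("diagnose_slow_queries")
for message in result.messages:
    print(message.role, message.content)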

MCP Resources

Static Resources

  • pgtuner://docs/tools - Complete tool documentation
  • pgtuner://docs/workflows - Common tuning workflows guide
  • pgtuner://docs/prompts - Prompt template documentation

Dynamic Resource Templates

  • pgtuner://table/{schema}/{table_name}/stats - Table statistics
  • pgtuner://table/{schema}/{table_name}/indexes - Table index information
  • pgtuner://query/{query_hash}/stats - Query performance statistics
  • pgtuner://settings/{category} - PostgreSQL settings (memory, checkpoint, wal, autovacuum, connections, all)
  • pgtuner://health/{check_type} - Health checks (connections, cache, locks, replication, bloat, all)
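
Reading one of these resources with the official MCP Python SDK might look like this. It is a minimal sketch; the public.orders table and the connection string are illustrative.

import asyncio

from pydantic import AnyUrl

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(
        command="python",
        args=["-m", "pgtuner_mcp"],
        env={"DATABASE_URI": "postgresql://user:password@localhost:5432/mydb"},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Resource URIs follow the templates listed above
            stats = await session.read_resource(
                AnyUrl("pgtuner://table/public/orders/stats")
            )
            print(stats)

asyncio.run(main())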

PostgreSQL Extension Setup

HypoPG Extension

HypoPG enables testing indexes without actually creating them. This is extremely useful for:

  • Testing if a proposed index would be used by the query planner
  • Comparing execution plans with different index strategies
  • Estimating storage requirements before committing

Enable HypoPG in Database

-- Create the extension
CREATE EXTENSION IF NOT EXISTS hypopg;

-- Verify installation
SELECT * FROM hypopg_list_indexes();
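
The same check can be driven from Python with psycopg. This is a sketch: hypopg_create_index and hypopg_reset are standard HypoPG functions, while the orders table and user_id column are illustrative.

import psycopg

with psycopg.connect("postgresql://user:password@localhost:5432/mydb") as conn:
    with conn.cursor() as cur:
        # Register a hypothetical index; nothing is written to disk
        cur.execute(
            "SELECT * FROM hypopg_create_index("
            "'CREATE INDEX ON orders (user_id)')"
        )
        # Plain EXPLAIN (not EXPLAIN ANALYZE) can use hypothetical indexes
        cur.execute("EXPLAIN SELECT * FROM orders WHERE user_id = 123")
        for (line,) in cur.fetchall():
            print(line)
        # Drop all hypothetical indexes created in this session
        cur.execute("SELECT hypopg_reset()")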

pg_stat_statements Extension

The pg_stat_statements extension is required for query performance analysis. It tracks planning and execution statistics for all SQL statements executed by a server.

Step 1: Enable the Extension in postgresql.conf

Add the following to your postgresql.conf file:

# Required: Load pg_stat_statements module
shared_preload_libraries = 'pg_stat_statements'

# Required: Enable query identifier computation
compute_query_id = on

# Maximum number of statements tracked (default: 5000)
pg_stat_statements.max = 10000

# Which statements to track (default: top, i.e. top-level only;
# set to 'all' to also track nested statements). Options: top, all, none
pg_stat_statements.track = top

# Track utility commands like CREATE, ALTER, DROP (default: on)
pg_stat_statements.track_utility = on

Note: After modifying shared_preload_libraries, a PostgreSQL server restart is required.

Step 2: Create the Extension in Your Database

-- Connect to your database and create the extension
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Verify installation
SELECT * FROM pg_stat_statements LIMIT 1;
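
As a quick sanity check outside the MCP tools, the view can also be queried directly from Python. This is a sketch; the column names assume PostgreSQL 13+, where total_exec_time replaced total_time.

import psycopg

with psycopg.connect("postgresql://user:password@localhost:5432/mydb") as conn:
    rows = conn.execute(
        """
        SELECT query, calls, round(total_exec_time::numeric, 1) AS total_ms
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 5
        """
    ).fetchall()
    for query, calls, total_ms in rows:
        print(f"{total_ms:>10} ms  {calls:>6} calls  {query[:60]}")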

pgstattuple Extension

The pgstattuple extension is required for bloat detection tools (analyze_table_bloat, analyze_index_bloat, get_bloat_summary). It provides functions to get tuple-level statistics for tables and indexes.

-- Create the extension
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Verify installation
SELECT * FROM pgstattuple('pg_class') LIMIT 1;

Performance Impact Considerations

  • pg_stat_statements: low overhead (~1-2%); always enable.
  • track_io_timing: low to medium overhead (~2-5%); enable in production, but test first.
  • track_functions = all: low overhead; enable for function-heavy workloads.
  • pg_stat_statements.track_planning: medium overhead; enable only when investigating planning issues.
  • log_min_duration_statement: low overhead; recommended for slow query identification.

Tip: Use pg_test_timing to measure the timing overhead on your specific system before enabling track_io_timing.

Example Usage

Find and Analyze Slow Queries

# Get top 10 slowest queries
slow_queries = await get_slow_queries(limit=10, order_by="total_time")

# Analyze a specific query's execution plan
analysis = await analyze_query(
    query="SELECT * FROM orders WHERE user_id = 123",
    analyze=True,
    buffers=True
)

Get Index Recommendations

# Analyze workload and get recommendations
recommendations = await get_index_recommendations(
    max_recommendations=5,
    min_improvement_percent=20,
    include_hypothetical_testing=True
)

# Recommendations include CREATE INDEX statements
for rec in recommendations["recommendations"]:
    print(rec["create_statement"])

Database Health Check

# Run comprehensive health check
health = await check_database_health(
    include_recommendations=True,
    verbose=True
)

print(f"Health Score: {health['overall_score']}/100")
print(f"Status: {health['status']}")

# Review specific areas
for issue in health["issues"]:
    print(f"{issue}")

Find Unused Indexes

# Find indexes that can be dropped
unused = await find_unused_indexes(
    schema_name="public",
    include_duplicates=True
)

# Get DROP statements
for stmt in unused["recommendations"]:
    print(stmt)

Docker

docker pull dog830228/pgtuner_mcp

# Streamable HTTP mode (recommended for web applications)
docker run -p 8080:8080 \
  -e DATABASE_URI=postgresql://user:pass@host:5432/db \
  dog830228/pgtuner_mcp --mode streamable-http

# Streamable HTTP stateless mode (for serverless)
docker run -p 8080:8080 \
  -e DATABASE_URI=postgresql://user:pass@host:5432/db \
  dog830228/pgtuner_mcp --mode streamable-http --stateless

# SSE mode (legacy web applications)
docker run -p 8080:8080 \
  -e DATABASE_URI=postgresql://user:pass@host:5432/db \
  dog830228/pgtuner_mcp --mode sse

# stdio mode (for MCP clients like Claude Desktop)
docker run -i \
  -e DATABASE_URI=postgresql://user:pass@host:5432/db \
  dog830228/pgtuner_mcp --mode stdio

Requirements

  • Python: 3.10+
  • PostgreSQL: 12+ (recommended: 14+)
  • Extensions:
    • pg_stat_statements (required for query analysis)
    • hypopg (optional, for hypothetical index testing)
    • pgstattuple (optional, required only for the bloat detection tools)

Dependencies

Core dependencies:

  • mcp[cli]>=1.12.0 - Model Context Protocol SDK
  • psycopg[binary,pool]>=3.1.0 - PostgreSQL adapter with connection pooling
  • pglast>=7.10 - PostgreSQL query parser

Optional (for HTTP modes):

  • starlette>=0.27.0 - ASGI framework
  • uvicorn>=0.23.0 - ASGI server

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
