An MCP server for PostgreSQL AI-powered performance tuning with HypoPG support
PostgreSQL Performance Tuning MCP
A Model Context Protocol (MCP) server that provides AI-powered PostgreSQL performance tuning capabilities. This server helps identify slow queries, recommend optimal indexes, analyze execution plans, and leverage HypoPG for hypothetical index testing.
Features
Query Analysis
- Retrieve slow queries from pg_stat_statements with detailed statistics
- Analyze query execution plans with EXPLAIN and EXPLAIN ANALYZE
- Identify performance bottlenecks with automated plan analysis
- Monitor active queries and detect long-running transactions
Index Tuning
- AI-powered index recommendations based on query workload analysis
- Hypothetical index testing with HypoPG extension (no disk usage)
- Find unused and duplicate indexes for cleanup
- Estimate index sizes before creation
- Test query plans with proposed indexes before implementing
Database Health
- Comprehensive health scoring with multiple checks
- Connection utilization monitoring
- Cache hit ratio analysis (buffer and index)
- Lock contention detection
- Vacuum health and transaction ID wraparound monitoring
- Replication lag monitoring
- Background writer and checkpoint analysis
Configuration Analysis
- Review PostgreSQL settings by category
- Get recommendations for memory, checkpoint, WAL, autovacuum, and connection settings
- Identify suboptimal configurations
MCP Prompts & Resources
- Pre-defined prompt templates for common tuning workflows
- Dynamic resources for table stats, index info, and health checks
- Comprehensive documentation resources
Installation
Standard Installation (for MCP clients like Claude Desktop)
pip install pgtuner_mcp
Or using uv:
uv pip install pgtuner_mcp
Manual Installation
git clone https://github.com/isdaniel/pgtuner_mcp.git
cd pgtuner_mcp
pip install -e .
Configuration
Environment Variables
| Variable | Description | Required |
|---|---|---|
| DATABASE_URI | PostgreSQL connection string | Yes |
| PGTUNER_EXCLUDE_USERIDS | Comma-separated list of user IDs (OIDs) to exclude from monitoring | No |
Connection String Format: postgresql://user:password@host:port/database
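For reference, the pieces of that connection string can be inspected with Python's standard urllib.parse; the DSN below uses placeholder credentials.

```python
from urllib.parse import urlparse

# Placeholder DSN in the format the server expects
dsn = "postgresql://app_user:s3cret@db.example.com:5432/mydb"

parts = urlparse(dsn)
print(parts.scheme)            # postgresql
print(parts.username)          # app_user
print(parts.hostname)          # db.example.com
print(parts.port)              # 5432
print(parts.path.lstrip("/"))  # mydb
```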
Excluding Specific Users from Monitoring
You can exclude specific PostgreSQL users from query analysis and monitoring results. This is useful for filtering out:
- Monitoring or replication users
- System accounts
- Internal application service accounts
Set the PGTUNER_EXCLUDE_USERIDS environment variable with a comma-separated list of user OIDs:
# Exclude user IDs 16384, 16385, and 16386
export PGTUNER_EXCLUDE_USERIDS="16384,16385,16386"
To find the OID for a specific PostgreSQL user:
SELECT usesysid, usename FROM pg_user WHERE usename = 'monitoring_user';
When configured, the following queries are filtered:
- pg_stat_activity queries (filtered on the usesysid column)
- pg_stat_statements queries (filtered on the userid column)
This affects tools like get_slow_queries, get_active_queries, analyze_wait_events, check_database_health, and get_index_recommendations.
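As a sketch of how such a filter could work (the server's actual implementation may differ), the environment variable can be parsed into a list of OIDs and turned into a SQL exclusion clause:

```python
import os

# Hypothetical illustration of deriving an exclusion filter from the
# PGTUNER_EXCLUDE_USERIDS environment variable.
os.environ["PGTUNER_EXCLUDE_USERIDS"] = "16384,16385,16386"

raw = os.environ.get("PGTUNER_EXCLUDE_USERIDS", "")
exclude_oids = [int(x) for x in raw.split(",") if x.strip()]

# A pg_stat_statements query would then filter on the userid column:
clause = ""
if exclude_oids:
    placeholders = ", ".join(str(oid) for oid in exclude_oids)
    clause = f"WHERE userid NOT IN ({placeholders})"

print(clause)  # WHERE userid NOT IN (16384, 16385, 16386)
```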
MCP Client Configuration
Add to your cline_mcp_settings.json or Claude Desktop config:
{
"mcpServers": {
"pgtuner_mcp": {
"command": "python",
"args": ["-m", "pgtuner_mcp"],
"env": {
"DATABASE_URI": "postgresql://user:password@localhost:5432/mydb"
},
"disabled": false,
"autoApprove": []
}
}
}
Or, for Streamable HTTP mode:
{
"mcpServers": {
"pgtuner_mcp": {
"type": "http",
"url": "http://localhost:8080/mcp"
}
}
}
Server Modes
1. Standard MCP Mode (Default)
# Default mode (stdio)
python -m pgtuner_mcp
# Explicitly specify stdio mode
python -m pgtuner_mcp --mode stdio
2. HTTP SSE Mode (Legacy Web Applications)
The SSE (Server-Sent Events) mode provides a web-based transport for MCP communication. It's useful for web applications and clients that need HTTP-based communication.
# Start SSE server on default host/port (0.0.0.0:8080)
python -m pgtuner_mcp --mode sse
# Specify custom host and port
python -m pgtuner_mcp --mode sse --host localhost --port 3000
# Enable debug mode
python -m pgtuner_mcp --mode sse --debug
SSE Endpoints:
| Endpoint | Method | Description |
|---|---|---|
| /sse | GET | SSE connection endpoint; clients connect here to receive server events |
| /messages | POST | Send messages/requests to the server |
MCP Client Configuration for SSE:
For MCP clients that support SSE transport (like Claude Desktop or custom clients):
{
"mcpServers": {
"pgtuner_mcp": {
"type": "sse",
"url": "http://localhost:8080/sse"
}
}
}
3. Streamable HTTP Mode (Modern MCP Protocol - Recommended)
The streamable-http mode implements the modern MCP Streamable HTTP protocol with a single /mcp endpoint. It supports both stateful (session-based) and stateless modes.
# Start Streamable HTTP server in stateful mode (default)
python -m pgtuner_mcp --mode streamable-http
# Start in stateless mode (fresh transport per request)
python -m pgtuner_mcp --mode streamable-http --stateless
# Specify custom host and port
python -m pgtuner_mcp --mode streamable-http --host localhost --port 8080
# Enable debug mode
python -m pgtuner_mcp --mode streamable-http --debug
Stateful vs Stateless:
- Stateful (default): Maintains session state across requests using the mcp-session-id header. Ideal for long-running interactions.
- Stateless: Creates a fresh transport for each request with no session tracking. Ideal for serverless deployments or simple request/response patterns.
Endpoint: http://{host}:{port}/mcp
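In Streamable HTTP mode, clients POST JSON-RPC 2.0 messages to that single endpoint. A minimal sketch of the initial handshake payload a client might send (field values here are illustrative, not mandated by this server):

```python
import json

# Illustrative JSON-RPC 2.0 "initialize" request for the /mcp endpoint.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # example MCP protocol revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Serialize for the HTTP POST body
body = json.dumps(request)
print(body)
```

In stateful mode the server's response would carry a session identifier (via the mcp-session-id header) that the client echoes on subsequent requests.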
Available Tools
Note: All tools focus exclusively on user/application tables and indexes. System catalog tables (pg_catalog, information_schema, pg_toast) are automatically excluded from all analyses.
Performance Analysis Tools
| Tool | Description |
|---|---|
| get_slow_queries | Retrieve slow queries from pg_stat_statements with detailed stats (total time, mean time, calls, cache hit ratio). Excludes system catalog queries. |
| analyze_query | Analyze a query's execution plan with EXPLAIN ANALYZE, including automated issue detection |
| get_table_stats | Get detailed table statistics including size, row counts, dead tuples, and access patterns |
Index Tuning Tools
| Tool | Description |
|---|---|
| get_index_recommendations | AI-powered index recommendations based on query workload analysis |
| explain_with_indexes | Run EXPLAIN with hypothetical indexes to test improvements without creating real indexes |
| manage_hypothetical_indexes | Create, list, drop, or reset HypoPG hypothetical indexes. Supports hiding/unhiding existing indexes. |
| find_unused_indexes | Find unused and duplicate indexes that can be safely dropped |
Database Health Tools
| Tool | Description |
|---|---|
| check_database_health | Comprehensive health check with scoring (connections, cache, locks, replication, wraparound, disk, checkpoints) |
| get_active_queries | Monitor active queries, find long-running transactions and blocked queries. Excludes system processes by default. |
| analyze_wait_events | Analyze wait events to identify I/O, lock, or CPU bottlenecks. Focuses on client backend processes. |
| review_settings | Review PostgreSQL settings by category with optimization recommendations |
Bloat Detection Tools (pgstattuple)
| Tool | Description |
|---|---|
| analyze_table_bloat | Analyze table bloat using the pgstattuple extension. Shows dead tuple counts, free space, and wasted space percentage. |
| analyze_index_bloat | Analyze B-tree index bloat using pgstatindex. Shows leaf density, fragmentation, and empty/deleted pages. Also supports GIN and Hash indexes. |
| get_bloat_summary | Get a comprehensive overview of database bloat with top bloated tables/indexes, total reclaimable space, and priority maintenance actions. |
Tool Parameters
get_slow_queries
- limit: Maximum queries to return (default: 10)
- min_calls: Minimum call count filter (default: 1)
- min_mean_time_ms: Minimum mean (average) execution time filter, in milliseconds
- order_by: Sort by mean_time, calls, or rows
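Over MCP, these parameters travel as the arguments of a tools/call request. A hedged sketch of such a payload (argument values are examples):

```python
import json

# Illustrative MCP "tools/call" request for get_slow_queries; the argument
# names mirror the parameter list above, the values are examples.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_slow_queries",
        "arguments": {
            "limit": 5,
            "min_calls": 10,
            "min_mean_time_ms": 50,
            "order_by": "mean_time",
        },
    },
}
print(json.dumps(call, indent=2))
```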
analyze_query
- query (required): SQL query to analyze
- analyze: Execute the query with EXPLAIN ANALYZE (default: true)
- buffers: Include buffer statistics (default: true)
- format: Output format: json, text, yaml, or xml
get_index_recommendations
- workload_queries: Optional list of specific queries to analyze
- max_recommendations: Maximum recommendations (default: 10)
- min_improvement_percent: Minimum improvement threshold (default: 10%)
- include_hypothetical_testing: Test with HypoPG (default: true)
- target_tables: Focus on specific tables
check_database_health
- include_recommendations: Include actionable recommendations (default: true)
- verbose: Include detailed statistics (default: false)
analyze_table_bloat
- table_name: Name of a specific table to analyze (optional)
- schema_name: Schema name (default: public)
- use_approx: Use pgstattuple_approx for faster analysis on large tables (default: false)
- min_table_size_gb: Minimum table size in GB to include in a schema-wide scan (default: 5)
- include_toast: Include TOAST table analysis (default: false)
analyze_index_bloat
- index_name: Name of a specific index to analyze (optional)
- table_name: Analyze all indexes on this table (optional)
- schema_name: Schema name (default: public)
- min_index_size_gb: Minimum index size in GB to include (default: 5)
- min_bloat_percent: Only show indexes with bloat above this percentage (default: 20)
get_bloat_summary
- schema_name: Schema to analyze (default: public)
- top_n: Number of top bloated objects to show (default: 10)
- min_size_gb: Minimum object size in GB to include (default: 5)
MCP Prompts
The server includes pre-defined prompt templates for guided tuning sessions:
| Prompt | Description |
|---|---|
| diagnose_slow_queries | Systematic slow query investigation workflow |
| index_optimization | Comprehensive index analysis and cleanup |
| health_check | Full database health assessment |
| query_tuning | Optimize a specific SQL query |
| performance_baseline | Generate a baseline report for comparison |
MCP Resources
Static Resources
- pgtuner://docs/tools - Complete tool documentation
- pgtuner://docs/workflows - Common tuning workflows guide
- pgtuner://docs/prompts - Prompt template documentation
Dynamic Resource Templates
- pgtuner://table/{schema}/{table_name}/stats - Table statistics
- pgtuner://table/{schema}/{table_name}/indexes - Table index information
- pgtuner://query/{query_hash}/stats - Query performance statistics
- pgtuner://settings/{category} - PostgreSQL settings (memory, checkpoint, wal, autovacuum, connections, all)
- pgtuner://health/{check_type} - Health checks (connections, cache, locks, replication, bloat, all)
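A concrete resource URI is produced by substituting the template placeholders; for example, with Python's str.format:

```python
# Fill a dynamic resource template with concrete values.
template = "pgtuner://table/{schema}/{table_name}/stats"

# "public" and "orders" are example values, not names from this project.
uri = template.format(schema="public", table_name="orders")
print(uri)  # pgtuner://table/public/orders/stats
```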
PostgreSQL Extension Setup
HypoPG Extension
HypoPG enables testing indexes without actually creating them. This is extremely useful for:
- Testing if a proposed index would be used by the query planner
- Comparing execution plans with different index strategies
- Estimating storage requirements before committing
Enable HypoPG in Database
-- Create the extension
CREATE EXTENSION IF NOT EXISTS hypopg;
-- Verify installation
SELECT * FROM hypopg_list_indexes();
pg_stat_statements Extension
The pg_stat_statements extension is required for query performance analysis. It tracks planning and execution statistics for all SQL statements executed by a server.
Step 1: Enable the Extension in postgresql.conf
Add the following to your postgresql.conf file:
# Required: Load pg_stat_statements module
shared_preload_libraries = 'pg_stat_statements'
# Required: Enable query identifier computation
compute_query_id = on
# Maximum number of statements tracked (default: 5000)
pg_stat_statements.max = 10000
# Track all statements including nested ones (default: top)
# Options: top, all, none
pg_stat_statements.track = top
# Track utility commands like CREATE, ALTER, DROP (default: on)
pg_stat_statements.track_utility = on
Note: After modifying shared_preload_libraries, a PostgreSQL server restart is required.
Step 2: Create the Extension in Your Database
-- Connect to your database and create the extension
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Verify installation
SELECT * FROM pg_stat_statements LIMIT 1;
pgstattuple Extension
The pgstattuple extension is required for bloat detection tools (analyze_table_bloat, analyze_index_bloat, get_bloat_summary). It provides functions to get tuple-level statistics for tables and indexes.
-- Create the extension
CREATE EXTENSION IF NOT EXISTS pgstattuple;
-- Verify installation
SELECT * FROM pgstattuple('pg_class') LIMIT 1;
Performance Impact Considerations
| Setting | Overhead | Recommendation |
|---|---|---|
| pg_stat_statements | Low (~1-2%) | Always enable |
| track_io_timing | Low-Medium (~2-5%) | Enable in production, but test first |
| track_functions = all | Low | Enable for function-heavy workloads |
| pg_stat_statements.track_planning | Medium | Enable only when investigating planning issues |
| log_min_duration_statement | Low | Recommended for slow query identification |
Tip: Use pg_test_timing to measure the timing overhead on your specific system before enabling track_io_timing.
Example Usage
Find and Analyze Slow Queries
# Get top 10 slowest queries
slow_queries = await get_slow_queries(limit=10, order_by="total_time")
# Analyze a specific query's execution plan
analysis = await analyze_query(
query="SELECT * FROM orders WHERE user_id = 123",
analyze=True,
buffers=True
)
Get Index Recommendations
# Analyze workload and get recommendations
recommendations = await get_index_recommendations(
max_recommendations=5,
min_improvement_percent=20,
include_hypothetical_testing=True
)
# Recommendations include CREATE INDEX statements
for rec in recommendations["recommendations"]:
print(rec["create_statement"])
Database Health Check
# Run comprehensive health check
health = await check_database_health(
include_recommendations=True,
verbose=True
)
print(f"Health Score: {health['overall_score']}/100")
print(f"Status: {health['status']}")
# Review specific areas
for issue in health["issues"]:
print(f"{issue}")
Find Unused Indexes
# Find indexes that can be dropped
unused = await find_unused_indexes(
schema_name="public",
include_duplicates=True
)
# Get DROP statements
for stmt in unused["recommendations"]:
print(stmt)
Docker
docker pull dog830228/pgtuner_mcp
# Streamable HTTP mode (recommended for web applications)
docker run -p 8080:8080 \
-e DATABASE_URI=postgresql://user:pass@host:5432/db \
dog830228/pgtuner_mcp --mode streamable-http
# Streamable HTTP stateless mode (for serverless)
docker run -p 8080:8080 \
-e DATABASE_URI=postgresql://user:pass@host:5432/db \
dog830228/pgtuner_mcp --mode streamable-http --stateless
# SSE mode (legacy web applications)
docker run -p 8080:8080 \
-e DATABASE_URI=postgresql://user:pass@host:5432/db \
dog830228/pgtuner_mcp --mode sse
# stdio mode (for MCP clients like Claude Desktop)
docker run -i \
-e DATABASE_URI=postgresql://user:pass@host:5432/db \
dog830228/pgtuner_mcp --mode stdio
Requirements
- Python: 3.10+
- PostgreSQL: 12+ (recommended: 14+)
- Extensions:
  - pg_stat_statements (required for query analysis)
  - hypopg (optional, for hypothetical index testing)
  - pgstattuple (optional, required for the bloat detection tools)
Dependencies
Core dependencies:
- mcp[cli]>=1.12.0 - Model Context Protocol SDK
- psycopg[binary,pool]>=3.1.0 - PostgreSQL adapter with connection pooling
- pglast>=7.10 - PostgreSQL query parser
Optional (for HTTP modes):
- starlette>=0.27.0 - ASGI framework
- uvicorn>=0.23.0 - ASGI server
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.