MCP PostgreSQL Operations Server
A professional MCP server for PostgreSQL database server operations, monitoring, and management. Most features work independently, but advanced performance analysis capabilities are available when the pg_stat_statements and (optionally) pg_stat_monitor extensions are installed.
Features
- ✅ PostgreSQL Monitoring: Performance analysis based on pg_stat_statements and pg_stat_monitor
- ✅ Structure Exploration: Database, table, and user listing
- ✅ Performance Analysis: Slow query identification and index usage analysis
- ✅ Capacity Management: Database and table size analysis
- ✅ Configuration Retrieval: PostgreSQL configuration parameter verification
- ✅ Database Performance Statistics: Comprehensive transaction, I/O, and buffer cache analysis
- ✅ I/O Performance Monitoring: Table and index I/O statistics with buffer hit ratio analysis
- ✅ Background Process Monitoring: Checkpoint and background writer performance analysis
- ✅ Replication Monitoring: Standby server conflict detection and replication lag analysis
- ✅ Function Performance Analysis: User-defined function execution statistics
- ✅ Safe Read-Only: All operations are read-only and safe
- 🛠️ Easy Customization: Simple and clean codebase makes it very easy to add new tools or customize existing ones
Example Usage
Quick start
Note: The `postgresql` container included in `docker-compose.yml` is intended for quickstart testing purposes only. You can connect to your own PostgreSQL instance by adjusting the environment variables as needed.
If you want to use your own PostgreSQL instance instead of the built-in test container:
- Update the target PostgreSQL connection information in your `.env` file (see POSTGRES_HOST, POSTGRES_PORT, POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB).
- In `docker-compose.yml`, comment out (disable) the `postgres` and `postgres-init-extensions` containers to avoid starting the built-in test database.
1. Environment Setup
Note: While superuser privileges provide access to all databases and system information, the MCP server also works with regular user permissions for basic monitoring tasks.
```bash
# Check and modify the .env file
cp .env.example .env

# If you use another PostgreSQL server, configure the connection information:
POSTGRES_HOST=your-address
POSTGRES_PORT=your-listen-port
POSTGRES_USER=your-username
POSTGRES_PASSWORD=your-password
POSTGRES_DB=your-database  # Default connection DB. Superusers can access all DBs.
```
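As an illustration of how these variables fit together, the sketch below assembles them into a libpq-style connection URI. This helper is not part of the project; the function name and defaults are assumptions for demonstration only.

```python
import os


def build_dsn() -> str:
    """Illustrative helper (not part of the package): assemble a
    libpq-style DSN from the POSTGRES_* environment variables."""
    host = os.environ.get("POSTGRES_HOST", "127.0.0.1")
    port = os.environ.get("POSTGRES_PORT", "5432")
    user = os.environ.get("POSTGRES_USER", "postgres")
    password = os.environ.get("POSTGRES_PASSWORD", "")
    db = os.environ.get("POSTGRES_DB", "testdb")
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"


# Example values only — substitute your own connection details.
os.environ.update({
    "POSTGRES_HOST": "db.example.com",
    "POSTGRES_PORT": "15432",
    "POSTGRES_USER": "monitor",
    "POSTGRES_PASSWORD": "secret",
    "POSTGRES_DB": "sales",
})
print(build_dsn())  # postgresql://monitor:secret@db.example.com:15432/sales
```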
2. Install Dependencies
```bash
docker-compose up -d
```
3. Access OpenWebUI
- The list of MCP tool features is available via the Swagger UI at the MCPO API Docs URL, e.g., `http://localhost:8003/docs`.
4. Registering the Tool in OpenWebUI
- Log in to OpenWebUI with an admin account.
- Go to "Settings" → "Tools" from the top menu.
- Enter the `postgresql-ops` tool address (e.g., `http://localhost:8003/postgresql-ops`) to connect the MCP tools.
- Set up Ollama or OpenAI.
Usage Examples
Claude Desktop Integration
(Recommended) Add to your Claude Desktop configuration file:
```json
{
  "mcpServers": {
    "postgresql-ops": {
      "command": "uvx",
      "args": ["--python", "3.11", "mcp-postgresql-ops"],
      "env": {
        "POSTGRES_HOST": "127.0.0.1",
        "POSTGRES_PORT": "5432",
        "POSTGRES_USER": "postgres",
        "POSTGRES_PASSWORD": "passwd",
        "POSTGRES_DB": "testdb"
      }
    }
  }
}
```
(Optional) Run with Local Source:
```json
{
  "mcpServers": {
    "postgresql-ops": {
      "command": "uv",
      "args": ["run", "python", "-m", "src.mcp_postgresql_ops.mcp_main"],
      "cwd": "/path/to/MCP-PostgreSQL-Ops",
      "env": {
        "POSTGRES_HOST": "127.0.0.1",
        "POSTGRES_PORT": "5432",
        "POSTGRES_USER": "postgres",
        "POSTGRES_PASSWORD": "passwd",
        "POSTGRES_DB": "testdb"
      }
    }
  }
}
```
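Before restarting Claude Desktop, it can help to sanity-check the config file. The snippet below is an illustrative check, not an official tool; the inline JSON stands in for your actual config file, and the specific assertions are assumptions about what a working entry needs.

```python
import json

# Illustrative check: validate that a Claude Desktop config snippet
# contains a usable "postgresql-ops" server entry. In practice you would
# read your real config file instead of this inline string.
config_text = """
{
  "mcpServers": {
    "postgresql-ops": {
      "command": "uvx",
      "args": ["--python", "3.11", "mcp-postgresql-ops"],
      "env": {"POSTGRES_HOST": "127.0.0.1", "POSTGRES_PORT": "5432"}
    }
  }
}
"""
config = json.loads(config_text)
server = config["mcpServers"]["postgresql-ops"]
assert server["command"] in ("uvx", "uv"), "expected a uv/uvx launcher"
assert "POSTGRES_HOST" in server["env"], "connection env vars are required"
print("config looks valid")
```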
Command Line Usage
With PyPI and uvx (Recommended)
```bash
# Stdio mode
uvx --python 3.11 mcp-postgresql-ops \
  --type stdio

# HTTP mode
uvx --python 3.11 mcp-postgresql-ops \
  --type streamable-http \
  --host 127.0.0.1 \
  --port 8080 \
  --log-level DEBUG
```
With Local Source
```bash
# Stdio mode
PYTHONPATH=/path/to/MCP-PostgreSQL-Ops \
python -m src.mcp_postgresql_ops.mcp_main \
  --type stdio

# HTTP mode
PYTHONPATH=/path/to/MCP-PostgreSQL-Ops \
python -m src.mcp_postgresql_ops.mcp_main \
  --type streamable-http \
  --host 127.0.0.1 \
  --port 8080 \
  --log-level DEBUG
```
Environment Variables
| Variable | Description | Default | Project Default |
|---|---|---|---|
| `PYTHONPATH` | Python module search path for MCP server imports | - | `/app/src` |
| `MCP_LOG_LEVEL` | Server logging verbosity (DEBUG, INFO, WARNING, ERROR) | `INFO` | `INFO` |
| `FASTMCP_TYPE` | MCP transport protocol (`stdio` for CLI, `streamable-http` for web) | `stdio` | `streamable-http` |
| `FASTMCP_HOST` | HTTP server bind address (`0.0.0.0` for all interfaces) | `127.0.0.1` | `0.0.0.0` |
| `FASTMCP_PORT` | HTTP server port for MCP communication | `8080` | `8080` |
| `PGSQL_VERSION` | PostgreSQL major version for Docker image selection | `16` | `16` |
| `POSTGRES_HOST` | PostgreSQL server hostname or IP address | `127.0.0.1` | `host.docker.internal` |
| `POSTGRES_PORT` | PostgreSQL server port number | `5432` | `15432` |
| `POSTGRES_USER` | PostgreSQL connection username (needs read permissions) | `postgres` | `postgres` |
| `POSTGRES_PASSWORD` | PostgreSQL user password (supports special characters) | `changeme!@34` | `changeme!@34` |
| `POSTGRES_DB` | Default database name for connections | `testdb` | `testdb` |
| `POSTGRES_MAX_CONNECTIONS` | PostgreSQL max_connections configuration parameter | `200` | `200` |
| `DOCKER_EXTERNAL_PORT_OPENWEBUI` | Host port mapping for Open WebUI container | `8080` | `3003` |
| `DOCKER_EXTERNAL_PORT_MCP_SERVER` | Host port mapping for MCP server container | `8080` | `18003` |
| `DOCKER_EXTERNAL_PORT_MCPO_PROXY` | Host port mapping for MCPO proxy container | `8000` | `8003` |
Note: POSTGRES_DB serves as the default target database for operations when no specific database is specified. In Docker environments, if set to a non-default name, this database will be automatically created during initial PostgreSQL startup.
Prerequisites
Required PostgreSQL Extensions
⚠️ Note: Most MCP tools work without any PostgreSQL extensions. However, advanced performance analysis tools require the following extensions:
```sql
-- Query performance statistics (required only for get_pg_stat_statements_top_queries)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Advanced monitoring (optional, used by get_pg_stat_monitor_recent_queries)
CREATE EXTENSION IF NOT EXISTS pg_stat_monitor;
```
Quick Setup: For new PostgreSQL installations, add to postgresql.conf:
```
shared_preload_libraries = 'pg_stat_statements'
```
Then restart PostgreSQL and run the CREATE EXTENSION commands above.
- `pg_stat_statements` is required only for slow query analysis tools.
- `pg_stat_monitor` is optional and used for real-time query monitoring.
- All other tools work without these extensions.
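To make the extension dependency concrete, here is a minimal sketch of how installed extensions (as reported by a query such as `SELECT extname FROM pg_extension;`) could be mapped to the extension-dependent tools. The mapping table and function are illustrative assumptions, not the project's actual internals.

```python
# Illustrative mapping of PostgreSQL extensions to the tools that need them.
EXTENSION_TOOLS = {
    "pg_stat_statements": ["get_pg_stat_statements_top_queries"],
    "pg_stat_monitor": ["get_pg_stat_monitor_recent_queries"],
}


def available_tools(installed: set) -> list:
    """Return the extension-dependent tools usable with the given
    installed extensions; all other tools work regardless."""
    tools = []
    for ext, dependent in EXTENSION_TOOLS.items():
        if ext in installed:
            tools.extend(dependent)
    return tools


print(available_tools({"plpgsql", "pg_stat_statements"}))
# ['get_pg_stat_statements_top_queries']
```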
Minimum Requirements
- PostgreSQL 12+ (tested with PostgreSQL 16)
- Python 3.11
- Network access to PostgreSQL server
- Read permissions on system catalogs
Example Queries
🟢 Extension-Independent Tools (Always Available)
- get_server_info
- "Show PostgreSQL server version and extension status"
- "Check if pg_stat_statements is installed"
- get_active_connections
- "Show all active connections"
- "List current sessions with database and user"
- get_postgresql_config
- "Show all PostgreSQL configuration parameters"
- "Find all memory-related configuration settings"
- get_database_list
- "List all databases and their sizes"
- "Show database list with owner information"
- get_table_list
- "List all tables in the current database"
- "Show table sizes in the public schema"
- get_user_list
- "List all database users and their roles"
- "Show user permissions for a specific database"
- get_index_usage_stats
- "Analyze index usage efficiency"
- "Find unused indexes in the current database"
- get_database_size_info
- "Show database capacity analysis"
- "Find the largest databases by size"
- get_table_size_info
- "Show table and index size analysis"
- "Find largest tables in a specific schema"
- get_vacuum_analyze_stats
- "Show recent VACUUM and ANALYZE operations"
- "List tables needing VACUUM"
- get_lock_monitoring
- "Show all current locks and blocked sessions"
- "Show only blocked sessions with granted=false filter"
- "Monitor locks by specific user with username filter"
- "Check exclusive locks with mode filter"
- get_wal_status
- "Show WAL status and archiving information"
- "Monitor WAL generation and current LSN position"
- get_replication_status
- "Check replication connections and lag status"
- "Monitor replication slots and WAL receiver status"
- get_database_stats
- "Show comprehensive database performance metrics"
- "Analyze transaction commit ratios and I/O statistics"
- "Monitor buffer cache hit ratios and temporary file usage"
- get_bgwriter_stats
- "Analyze checkpoint performance and timing"
- "Show background writer efficiency statistics"
- "Monitor buffer allocation and fsync patterns"
- get_all_tables_stats
- "Show comprehensive statistics for all tables"
- "Include system tables with include_system=true parameter"
- "Analyze table access patterns and maintenance needs"
- get_user_functions_stats
- "Analyze user-defined function performance"
- "Show function call counts and execution times"
- "Identify performance bottlenecks in custom functions"
- get_table_io_stats
- "Analyze table I/O performance and buffer hit ratios"
- "Identify tables with poor buffer cache performance"
- "Monitor TOAST table I/O statistics"
- get_index_io_stats
- "Show index I/O performance and buffer efficiency"
- "Identify indexes causing excessive disk I/O"
- "Monitor index cache-friendliness patterns"
- get_database_conflicts_stats
- "Check replication conflicts on standby servers"
- "Analyze conflict types and resolution statistics"
- "Monitor standby server query cancellation patterns"
🟡 Extension-Dependent Tools
- get_pg_stat_statements_top_queries (requires `pg_stat_statements`)
- "Show top 10 slowest queries"
- "Analyze slow queries in the sales database"
- get_pg_stat_monitor_recent_queries (optional, uses `pg_stat_monitor`)
- "Show recent queries in real time"
- "Monitor query activity for the last 5 minutes"
💡 Pro Tip: All tools support multi-database operations using the database_name parameter. This allows PostgreSQL superusers to analyze and monitor multiple databases from a single MCP server instance.
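A minimal sketch of how a tool's `database_name` parameter might override the default target database. The function name is hypothetical; the real implementation may differ.

```python
import os


def resolve_target_db(database_name=None):
    """Illustrative sketch: use the caller's database_name when given,
    otherwise fall back to the POSTGRES_DB default."""
    return database_name or os.environ.get("POSTGRES_DB", "testdb")


os.environ["POSTGRES_DB"] = "testdb"
print(resolve_target_db())         # testdb
print(resolve_target_db("sales"))  # sales
```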
📖 More Useful Example Queries →
Troubleshooting
Connection Issues
- Check PostgreSQL server status
- Verify connection parameters in the `.env` file
- Ensure network connectivity
- Check user permissions
Extension Errors
- Run `get_server_info` to check extension status
- Install missing extensions: `CREATE EXTENSION pg_stat_statements;` and `CREATE EXTENSION pg_stat_monitor;`
- Restart PostgreSQL if needed
Performance Issues
- Use `limit` parameters to reduce result size
- Run monitoring during off-peak hours
- Check database load before running analysis
Development
Testing & Development
```bash
# Test with MCP Inspector
./scripts/run-mcp-inspector-local.sh

# Direct execution for debugging
python -m src.mcp_postgresql_ops.mcp_main --log-level DEBUG

# Run tests (if you add any)
uv run pytest
```
Security Notes
- All tools are read-only - no data modification capabilities
- Sensitive information (passwords) is masked in outputs
- No direct SQL execution - only predefined queries
- Follows principle of least privilege
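As one way the password-masking note above could work, here is a sketch that redacts password values before echoing configuration back to the user. This is not the project's actual implementation; the regex and function name are assumptions.

```python
import re


def mask_passwords(text: str) -> str:
    """Illustrative sketch: redact the value following any
    'password=' key in a connection-style string."""
    return re.sub(r"(?i)(password\s*=\s*)\S+", r"\1********", text)


print(mask_passwords("host=127.0.0.1 password=changeme!@34 dbname=testdb"))
# host=127.0.0.1 password=******** dbname=testdb
```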
Contributing
🤝 Got ideas? Found bugs? Want to add cool features?
We're always excited to welcome new contributors! Whether you're fixing a typo, adding a new monitoring tool, or improving documentation - every contribution makes this project better.
Ways to contribute:
- 🐛 Report issues or bugs
- 💡 Suggest new PostgreSQL monitoring features
- 📝 Improve documentation
- 🚀 Submit pull requests
- ⭐ Star the repo if you find it useful!
Pro tip: The codebase is designed to be super friendly for adding new tools. Check out the existing @mcp.tool() functions in mcp_main.py.
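To show the flavor of adding a tool, here is a minimal sketch. The real codebase uses FastMCP's `@mcp.tool()` decorator; a no-op stand-in decorator is used so the example runs standalone, and the tool name, signature, and SQL are hypothetical (they assume only the standard `pg_sequences` system view).

```python
def tool(func):
    """Stand-in for FastMCP's @mcp.tool() so the sketch runs standalone."""
    func.is_mcp_tool = True
    return func


@tool
def get_sequence_list_query(limit: int = 20) -> str:
    """Hypothetical new tool: return the read-only SQL a
    sequence-listing tool might run against the system catalogs."""
    return (
        "SELECT schemaname, sequencename FROM pg_sequences "
        f"ORDER BY schemaname, sequencename LIMIT {int(limit)}"
    )


print(get_sequence_list_query(5))
```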
Related Projects
Other MCP servers by the same author:
License
Freely use, modify, and distribute under the MIT License.