redshift-comment-mcp
A read-only Model Context Protocol server for Amazon Redshift, plus a Claude Code plugin with 5 slash-command skills built on top. Designed around one assertion: column names lie, comments don't — so the server exposes comments aggressively and the skills compose those tools into the discovery workflows you actually do every day.
"What values does dbt_marts.fct_orders.status really hold?"
→ /redshift-profile dbt_marts.fct_orders status
→ cardinality, top-N, null rate, min/max, existing comment — one round.
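Under the hood, a column profile like this reduces to a handful of aggregate queries. A minimal sketch of the SQL involved (illustrative only; the exact statements the skill issues through execute_sql are an assumption):

```python
def profile_queries(table: str, column: str, top_n: int = 10) -> dict[str, str]:
    """Build the aggregate SQL a column profile typically needs.

    Hypothetical sketch -- the real /redshift-profile skill composes MCP
    tools, and the exact SQL it generates is an assumption, not this code.
    """
    return {
        "cardinality": f"SELECT COUNT(DISTINCT {column}) FROM {table}",
        "null_rate": (
            f"SELECT SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END)::float"
            f" / COUNT(*) FROM {table}"
        ),
        "top_n": (
            f"SELECT {column}, COUNT(*) AS n FROM {table}"
            f" GROUP BY 1 ORDER BY n DESC LIMIT {top_n}"
        ),
        "min_max": f"SELECT MIN({column}), MAX({column}) FROM {table}",
    }

queries = profile_queries("dbt_marts.fct_orders", "status")
```

All four are plain SELECTs, which is why a single round-trip suffices and why the read-only guarantee below covers the whole workflow.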
Why this exists
If you've ever opened an unfamiliar Redshift table and squinted at
column names like f3, legacy_id_v2, or status (which status?),
you already know the pain. dbt manifests are too narrow. Web GUIs
are too slow. Hand-written SQL is too repetitive.
This plugin's charter is Guided Data Discovery:
- Comments first. Every list / search tool returns the column, table, or schema comment when asked — names are advisory, comments are authoritative.
- Read-only by construction. execute_sql rejects DDL / DML at the parse layer; no skill in this repo can mutate Redshift.
- MCP-composed skills. New workflows are built by stringing together existing tools, not by adding new database connections.
- No persistence. No synthesis layer, no .redshift-wiki/ markdown, no stale tracking. (Cache is rebuildable; that's different.) Persistence belongs in a separate plugin.
See implementation_guide.md §1.2 for the
full charter.
What you get
MCP tools (11, defined in src/redshift_comment_mcp/)
| Group | Tools |
|---|---|
| List | list_schemas · list_tables · list_columns |
| Search (hit-count ranked) | search_schemas · search_tables · search_columns |
| Comment retrieval | get_schema_comment · get_table_comment · get_column_comment · get_all_column_comments |
| Query | execute_sql (SELECT / WITH only) |
Pagination on every list / search; explicit WARNING strings nudging
the LLM to read comments before trusting names.
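Comment retrieval on Redshift ultimately reads pg_catalog. A hedged sketch of the kind of catalog query a tool like get_column_comment can run (the server's actual SQL is an assumption; pg_description and its joins shown here are standard PostgreSQL catalogs that Redshift inherits):

```python
def column_comment_query(schema: str, table: str, column: str) -> tuple[str, tuple]:
    """Return (sql, params) for looking up one column comment.

    Illustrative only -- the server's real query is an assumption.
    pg_description stores comments keyed by the table's oid (objoid)
    and the column's ordinal position (objsubid).
    """
    sql = (
        "SELECT d.description "
        "FROM pg_catalog.pg_description d "
        "JOIN pg_catalog.pg_class c     ON c.oid = d.objoid "
        "JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace "
        "JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid "
        "                             AND a.attnum = d.objsubid "
        "WHERE n.nspname = %s AND c.relname = %s AND a.attname = %s"
    )
    return sql, (schema, table, column)

sql, params = column_comment_query("dbt_marts", "fct_orders", "status")
```

Parameters are passed separately (the %s placeholders) rather than interpolated, so identifiers coming from an LLM cannot inject SQL into the metadata path.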
Slash-command skills (5, defined in skills/)
| Skill | One-liner | Since |
|---|---|---|
| /redshift-setup | Conversational walk-through to configure a connection profile. | v0.2.0 |
| /redshift-profile | Profile a column: cardinality / top-N / null rate / min-max / existing comment, one round. | v0.3.0 |
| /redshift-cache-schema | LLM-internal cache: dumps cluster structure to local files for faster metadata lookups in subsequent skill invocations. | v0.3.0 |
| /redshift-explore | Three-step interactive wizard (schema → table → column) — pick by reading comments. | v0.3.0 |
| /redshift-lineage-from-stl | Mine STL_QUERY + sqlglot to reconstruct actual table-to-table lineage from query history. | v0.3.0 |
Each skill has its own tri-lingual README inside its folder.
Quick start
The fastest path is the Claude Code plugin.
```shell
# 1. Register the marketplace (one-time)
claude plugin marketplace add kouko/redshift-comment-mcp

# 2. Install the plugin
claude plugin install redshift-comment-mcp

# 3. Configure a connection profile (in a Claude Code chat)
/redshift-setup
```
/redshift-setup walks you through host / port / user / dbname /
password. The password is collected in a system dialog (macOS) or a
zenity prompt (Linux desktop) or your own terminal (headless) — never
in chat. It lands directly in your OS keychain.
After setup, just type any of the slash commands above. Multi-cluster?
Run /redshift-setup again with a different profile name.
For Claude Desktop / other MCP clients / local development, see
docs/install.md (or scroll down to Other install
paths).
Other install paths
| Scenario | How |
|---|---|
| Claude Code (recommended) | claude plugin install redshift-comment-mcp (above) |
| Claude Desktop / generic MCP client | pip install redshift-comment-mcp then point your client at uvx redshift-comment-mcp --profile default |
| Local development | git clone … && pip install -e ".[dev]" then python -m redshift_comment_mcp.server --profile default |
| Multi-cluster | One profile per cluster: redshift-comment-mcp setup --profile prod |
The plugin runs from the cloned repo source via
uv run --project ${CLAUDE_PLUGIN_ROOT} — PyPI release is NOT a
prerequisite for plugin updates.
Where things live
```
.
├── README.md / README.ja.md / README.zh-TW.md   (this file, tri-lingual)
├── implementation_guide.md                      design rationale + charter
├── src/redshift_comment_mcp/                    MCP server source — see its own README
├── skills/                                      5 slash-command skills — see its own README
├── commands/                                    plugin slash command stubs
├── tests/                                       pytest suite
├── pyproject.toml                               packaging metadata
└── .claude-plugin/                              plugin manifest + marketplace
```
The two READMEs to read next:
- skills/README.md — overview of all 5 skills
- src/redshift_comment_mcp/README.md — server internals, module map, charter constraints
Data layout at runtime
| Path | Contents | Permissions |
|---|---|---|
| ~/.config/redshift-comment-mcp/config.toml | Non-secret profile fields | 0600 |
| OS keychain (redshift-comment-mcp / &lt;profile&gt;) | Passwords | OS-managed |
| ~/.cache/redshift-comment-mcp/&lt;profile&gt;/ | Optional offline structure cache (written by /redshift-cache-schema) | 0700 |
Recommended DB GRANTs (defense-in-depth)
execute_sql blocks DDL / DML / admin keywords at the parser layer
(DROP / DELETE / UPDATE / INSERT / ALTER / CREATE /
TRUNCATE / MERGE / GRANT / REVOKE / COPY / UNLOAD), but
that's a layer-1 defense. The defense-in-depth move is to give the
plugin's connecting Redshift user read-only privileges only, so
even if a parser bypass is found, the database itself rejects writes:
```sql
-- Create a dedicated read-only user for the plugin
CREATE USER redshift_mcp_reader WITH PASSWORD '...';

-- Grant only what the plugin actually needs
GRANT USAGE ON SCHEMA public, dbt_marts, dbt_staging TO redshift_mcp_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public, dbt_marts, dbt_staging TO redshift_mcp_reader;
ALTER DEFAULT PRIVILEGES IN SCHEMA public, dbt_marts, dbt_staging
GRANT SELECT ON TABLES TO redshift_mcp_reader;

-- Do NOT grant: INSERT / UPDATE / DELETE / TRUNCATE / DROP / CREATE / GRANT / superuser
```
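A layer-1 keyword guard of the shape described above can be sketched in a few lines (hypothetical; the server's real implementation, tokenizer, and error messages are all assumptions):

```python
import re

BLOCKED = {
    "DROP", "DELETE", "UPDATE", "INSERT", "ALTER", "CREATE",
    "TRUNCATE", "MERGE", "GRANT", "REVOKE", "COPY", "UNLOAD",
}

def check_read_only(sql: str) -> None:
    """Reject anything that is not a SELECT / WITH statement, and any
    statement containing a blocked keyword anywhere.

    Deliberately coarse: a column literally named "delete" would be a
    false positive. That is the acceptable trade-off for a guard that
    errs on the side of rejecting -- the GRANTs above are the backstop.
    """
    tokens = re.findall(r"[A-Za-z_]+", sql.upper())
    if not tokens or tokens[0] not in ("SELECT", "WITH"):
        raise ValueError("only SELECT / WITH statements are allowed")
    bad = BLOCKED.intersection(tokens)
    if bad:
        raise ValueError(f"blocked keyword(s): {sorted(bad)}")

check_read_only("SELECT status, COUNT(*) FROM dbt_marts.fct_orders GROUP BY 1")
```

Checking every token, not just the first, also catches multi-statement payloads like `SELECT 1; DROP TABLE t`.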
For /redshift-lineage-from-stl, the user additionally needs
SYSLOG ACCESS UNRESTRICTED (or admin) to read STL_QUERY /
SYS_QUERY_HISTORY. If you're not running that skill, skip this grant.
Known limits
MCP response token cap (~25K tokens default) — Claude Code silently
truncates MCP tool results above ~25,000 tokens (no error, no marker;
see anthropics/claude-code#2638).
For dbt-rich schemas where column comments are long markdown blocks, a
single list_columns(include_comments=True) page (50 rows) on a wide
table can approach this. Two mitigations the plugin already applies:
- include_comments defaults to False on list_tables / list_columns (only list_schemas defaults to True, since schema count is small) — the agent must opt in to comment-loaded responses.
- /redshift-cache-schema writes per-table .md files that consumers read directly via the Read tool, bypassing the MCP response path entirely. Once primed, metadata lookups are not size-bound.
If you still need to bump the cap (e.g. to fetch a heavily-documented
column set in one shot), set MAX_MCP_OUTPUT_TOKENS=50000 in the
environment where Claude Code runs. This affects all MCP servers in
that session, not just this one.
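A quick way to reason about the cap: at roughly 4 characters per token (a common heuristic for English/markdown, and an assumption here, not a measured figure), you can size a page before sending it. The helper below is hypothetical, not part of the server:

```python
def rough_tokens(text: str) -> int:
    # Coarse heuristic: ~4 characters per token for English/markdown.
    return len(text) // 4

def max_rows_per_page(avg_comment_chars: int, cap_tokens: int = 25_000) -> int:
    """How many comment-bearing rows fit under the MCP response cap,
    leaving ~20% headroom for JSON framing and WARNING strings.
    Both the cap and the headroom figure are assumptions."""
    budget_chars = int(cap_tokens * 4 * 0.8)
    return max(1, budget_chars // avg_comment_chars)

# A wide table whose column comments average ~2,000 chars of markdown
# fits only about 40 rows per page, which is why the default 50-row
# page can brush up against the cap.
rows = max_rows_per_page(2_000)
```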
Comment-writing tips for your DB
The plugin shines brightest on tables whose owners invest in comments. Concrete tips (Chinese examples — adapt to your team's language):
```sql
COMMENT ON SCHEMA sales IS '[用途] 線上零售銷售數據 [主要實體] 訂單, 客戶, 產品';
COMMENT ON TABLE sales.orders IS '[實體] 訂單 [PK] order_id [FK] customer_id → customers.customer_id';
COMMENT ON COLUMN sales.orders.revenue IS '[定義] 訂單總銷售額 [語意類型] Metric [單位] 新台幣 [計算] 未稅商品總價 + 稅 − 折扣';
```
A more thorough Semantic Layer guide is in
implementation_guide.md Appendix A.
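If you maintain many such comments, generating them from structured tags keeps the [tag] value convention consistent. A hypothetical helper (English tag names here, per the "adapt to your team's language" note above; not part of the plugin):

```python
def comment_sql(schema: str, table: str, column: str, tags: dict[str, str]) -> str:
    """Render structured tags into one COMMENT ON COLUMN statement,
    matching the [tag] value convention shown above. Hypothetical
    helper, not part of this plugin."""
    body = " ".join(f"[{k}] {v}" for k, v in tags.items())
    body = body.replace("'", "''")  # escape single quotes for SQL string literal
    return f"COMMENT ON COLUMN {schema}.{table}.{column} IS '{body}';"

stmt = comment_sql(
    "sales", "orders", "revenue",
    {"Definition": "Total order revenue", "SemanticType": "Metric", "Unit": "TWD"},
)
```

Driving the statements from one dict per column makes it easy to diff, review, and re-apply the whole semantic layer from version control.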
Development
```shell
pytest tests/        # run tests
python -m build      # build sdist + wheel
```
CI / release flow lives in .github/.
License
Contributing
Issues and pull requests welcome. New skills should follow the
patterns documented in skills/README.md:
read-only, MCP-composed, no direct DB connections, no synthesis
layer. SKILL.md ≤ 130 lines, tri-lingual README, audited via
dev-workflow:skill-judge before commit.