
Chart Library MCP — historical pattern intelligence for AI agents. 20 tools: cohort retrieval (search, cohort, cohort_analyze); memory (symbol_intelligence, similar_cohorts, cohort_compare, discover_picks); realtime news (narrative_pulse, narrative_alerts); single-call brief (get_daily_setups); analysis (analyze, context, explain, portfolio, anchor_fetch, decompose, clusters, live_search); feedback. 25M+ patterns, 19K+ symbols, 10 years of history. Q5-Q1 spread of +14.7pp on n=1,260.

Project description

Chart Library MCP Server

License: MIT

Works with: Claude Desktop | Claude Code | ChatGPT | GitHub Copilot | Cursor | VS Code | Any MCP client

Ask your AI agent "what did this chart do last time it looked like this?" and get a real answer — backed by a cohort of historical analogs and the calibrated distribution of what came next.

25M+ pattern embeddings. 10 years of history. 19K+ stocks. One tool call.

> "What does NVDA's chart on 2024-08-05 1h look like historically?"

NVDA · 2024-08-05 · 1h — cohort of 500 historical analogs
(485 with realized 5-day returns)

  Distribution at 5 days forward:
    median:        −1.3%
    p10 ·· p90:    −11.3% ·· +6.8%   (80% empirical band)
    win rate:      44%
    cohort_score:  0.31 (modest)

  Features that separated winners from losers:
    + credit_spread_state = tight
    + macro_state = bullish
    + pct_off_52w_low (further off)
    − vol_regime = low

  Summary: NVDA's 1-hour pattern on 2024-08-05 has 500 historical
  analogs. The cohort's 5-day distribution is bearish-leaning
  (median −1.3%, win rate 44%) — the historical record does NOT
  show this pattern typically resolving bullish. Conditioning on
  tight credit spreads and a bullish macro state would have
  separated the outperformers within the cohort.

A retrieval, not a forecast. No hallucinated predictions. No cherry-picking. Just the empirical record your agent can cite.
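The distribution block above is plain descriptive statistics over the cohort's realized forward returns. A minimal sketch of how such numbers fall out of a list of returns, using only the standard library (the sample data and helper are illustrative, not the server's implementation):

```python
# Sketch: summarize a cohort of realized 5-day forward returns (%)
# the way the example output above does. Sample data is invented.
from statistics import median, quantiles

def cohort_stats(fwd_returns):
    """p10/median/p90 and win rate over a cohort of forward returns."""
    cuts = quantiles(fwd_returns, n=10)  # deciles: cuts[0]=p10, cuts[8]=p90
    return {
        "n": len(fwd_returns),
        "median": median(fwd_returns),
        "p10": cuts[0],
        "p90": cuts[8],
        "win_rate": sum(r > 0 for r in fwd_returns) / len(fwd_returns),
    }

sample = [-11.3, -6.0, -2.5, -1.3, -0.2, 0.4, 1.9, 3.2, 5.0, 6.8]
stats = cohort_stats(sample)
```

The p10..p90 pair is the "80% empirical band" in the transcript: 80% of historical analogs landed between those two cut points.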


Quick Start

pip install chartlibrary-mcp

Claude Desktop (One-Click Install)

Download the chart-library-1.1.1.mcpb extension file and open it with Claude Desktop for automatic installation.

Claude Code

claude mcp add chart-library -- chartlibrary-mcp

Claude Desktop (Manual)

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "chart-library": {
      "command": "chartlibrary-mcp",
      "env": {
        "CHART_LIBRARY_API_KEY": "cl_your_key"
      }
    }
  }
}

Cursor / VS Code

Add to .cursor/mcp.json or VS Code MCP settings:

{
  "servers": {
    "chart-library": {
      "command": "chartlibrary-mcp",
      "env": {
        "CHART_LIBRARY_API_KEY": "cl_your_key"
      }
    }
  }
}

GitHub Copilot (VS Code)

Add to .vscode/mcp.json in your project (this file is already included in the chart-library repos):

{
  "servers": {
    "chart-library": {
      "command": "chartlibrary-mcp",
      "env": {
        "CHART_LIBRARY_API_KEY": "cl_your_key"
      }
    }
  }
}

Copilot Chat will auto-detect the MCP server when you open the project. Use @mcp in Copilot Chat to invoke tools.

ChatGPT (Developer Mode)

ChatGPT connects to MCP servers via remote HTTP endpoints. To set up:

  1. Enable Developer Mode: Go to ChatGPT Settings > Apps > Advanced settings > Developer mode (requires Pro, Plus, Business, Enterprise, or Education plan)
  2. Create a connector: In Settings > Connectors, click Create and enter:
    • Name: Chart Library
    • Description: Historical chart pattern search engine — 25M+ patterns across 19K+ stocks, 10 years of data
    • URL: https://chartlibrary.io/mcp
    • Authentication: No Authentication (or OAuth if using an API key)
  3. Use in conversations: Select "Developer mode" from the Plus menu, choose the Chart Library app, and ask questions like "What does NVDA's chart look like historically?"

Note: The remote endpoint at https://chartlibrary.io/mcp uses Streamable HTTP transport. If you need SSE fallback, use https://chartlibrary.io/mcp/sse.

Remote MCP Endpoint

For any MCP client that supports remote HTTP connections:

https://chartlibrary.io/mcp

This endpoint supports both Streamable HTTP and SSE transports; no local installation is required.

Free tier: 200 calls/day, no credit card required. Get an API key at chartlibrary.io/developers or use basic search without one.


What Can Your Agent Do With This?

"Should I be worried about my TSLA position?"

> get_exit_signal("TSLA")

  Signal: HOLD (confidence: 72%)
  Similar patterns that exited early: 3/10 would have avoided a drawdown
  Similar patterns that held: 7/10 gained an additional +2.1% over 5 days
  Recommendation: Pattern suggests continuation. No exit signal triggered.

"What sectors are rotating in right now?"

> get_sector_rotation()

  Leaders (30-day relative strength):
    1. XLK  Technology     +4.2%
    2. XLY  Cons. Disc.    +3.1%
    3. XLC  Communication  +2.8%

  Laggards:
    9. XLU  Utilities      -1.4%
   10. XLP  Cons. Staples  -2.1%
   11. XLRE Real Estate    -3.3%

  Regime: Risk-On (growth > defensives)
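A ranking like this is just each sector ETF's trailing return relative to a benchmark, sorted. A toy sketch with made-up numbers chosen to echo the output above (the benchmark figure and per-sector returns are illustrative):

```python
# Illustrative 30-day relative-strength ranking: each sector's return
# minus the benchmark's, sorted descending. All figures are invented.
bench_30d = 1.0  # benchmark 30-day return, %
sectors = {"XLK": 5.2, "XLY": 4.1, "XLU": -0.4, "XLP": -1.1}

rel = {t: round(r - bench_30d, 1) for t, r in sectors.items()}
ranked = sorted(rel.items(), key=lambda kv: kv[1], reverse=True)
```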

"What happens to AMD if SPY drops 3%?"

> run_scenario("AMD", spy_change=-3.0)

  When SPY fell ~3%, AMD historically:
    Median move:  -5.2%
    Best case:    +1.1%
    Worst case:  -11.4%
    Positive:     18% of the time

  AMD shows 1.7x beta to SPY downside moves.
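Conceptually, a scenario query conditions the historical record on days matching the shock. A sketch with invented paired daily returns, not the server's data or method:

```python
# Illustrative scenario slice: keep AMD's returns only on days when SPY
# fell roughly 3%, then summarize. All return series are invented.
from statistics import median

spy = [-3.1, 0.5, -2.9, 1.2, -3.4, -0.2, -3.0]
amd = [-5.5, 1.0, -4.8, 2.0, -6.1, -0.5, 1.1]  # paired daily returns (%)

# Condition on the shock: SPY down about 3% (here: at or below -2.5%)
cohort = [a for s, a in zip(spy, amd) if s <= -2.5]

med = median(cohort)                           # median AMD move on those days
positive = sum(a > 0 for a in cohort) / len(cohort)
```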

8 Canonical Tools

Chart Library 2.0 consolidates 22 legacy tools into 8 composable primitives. Chain them via cohort_id handles for sub-second refinement without re-running kNN.

  Tool             What it does
  search           Entry point. Returns cohort_id + anchor + n_matches for a ticker+date. Feed the handle into cohort, analyze, or explain to chain.
  cohort           The core primitive. Conditional distribution (p10/p25/p50/p75/p90 + calibrated bands + MAE/MFE + hit rates + survivorship) for a chart pattern, filtered by regime/sector/liquidity/event. One call replaces the legacy get_cohort_distribution, refine_cohort_with_filters, run_scenario, and get_regime_win_rates.
  analyze          Analytic metrics via the metric= enum: anomaly, volume_profile, crowding, correlation_shift, earnings_reaction, pattern_degradation, regime_accuracy.
  context          Situational data via target=: ticker metadata, market regime + sector rotation, or DB coverage stats.
  explain          Narrative + rankings via the style= enum: filter_ranking (which filter shifts the distribution most), prose (plain-English summary), position_guidance (exit signals), risk_ranking (Sharpe-ranked picks).
  portfolio        Portfolio-level conditional distribution across holdings. Weight-averages distributions and ranks tail contributors.
  anchor_fetch     New in 2.0. Lightweight (symbol, date) metadata fetch: sector, market cap, point-in-time regime. Avoids full kNN when you just need context for a ticker.
  report_feedback  Report errors or suggest improvements.

These tools replace hallucinated "on average this pattern returns X%" with real conditional base rates. See the grounded-base-rates pattern for the full loop.

Typical agent flow

1. search("NVDA 2024-06-18")                          → cohort_id
2. cohort(cohort_id=..., filters={regime:{same_vix_bucket: true}})
                                                       → conditional distribution
3. explain(cohort_id=..., style="filter_ranking")     → which filter matters most
4. cohort(cohort_id=..., filters={...new filter...})  → refined distribution
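The same flow, sketched in Python with call_tool stubbed out so the handle chaining is visible. In a real agent this would be the MCP client's tool-call method; the stub's hard-coded return values are invented:

```python
# Sketch of the handle-chaining flow above. call_tool stands in for a
# real MCP client's tool invocation; its returns are invented fixtures.
def call_tool(name, **args):
    if name == "search":
        return {"cohort_id": "c_nvda_20240618", "n_matches": 500}
    if name == "cohort":
        # echoes the handle so refinements can keep chaining
        return {"cohort_id": args["cohort_id"], "median_5d": -1.3}
    if name == "explain":
        return {"top_filter": "regime.same_vix_bucket"}
    raise ValueError(f"unknown tool: {name}")

handle = call_tool("search", query="NVDA 2024-06-18")["cohort_id"]
dist = call_tool("cohort", cohort_id=handle,
                 filters={"regime": {"same_vix_bucket": True}})
ranking = call_tool("explain", cohort_id=handle, style="filter_ranking")
refined = call_tool("cohort", cohort_id=handle,
                    filters={"sector": "semiconductors"})
```

The point of the handle: steps 2 through 4 refine the same retrieved cohort without re-running kNN.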

Legacy tools (deprecated, still callable)

For backward compatibility, these 22 legacy tool names remain in place and are marked deprecated in their MCP annotations. They forward to the canonical tool and will be removed in a future major release. Migrate via the mapping below:

  Legacy                                                                     Replacement
  search_charts, search_batch, get_discover_picks                            → search
  get_cohort_distribution, refine_cohort_with_filters, run_scenario,
    get_regime_win_rates, compare_to_peers                                   → cohort
  detect_anomaly, get_volume_profile, get_crowding, get_earnings_reaction,
    get_correlation_shift, get_pattern_degradation, get_regime_accuracy      → analyze (metric=)
  get_sector_rotation, get_status, get_market_context                        → context
  get_pattern_summary, explain_cohort_filters, get_exit_signal,
    get_risk_adjusted_picks                                                  → explain (style=)
  get_portfolio_health                                                       → portfolio
  analyze_pattern, get_follow_through, check_ticker                          → search + cohort (+ optional explain)
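If you maintain your own wrapper around the tools, the mapping above can be expressed as a small dispatch table. A hypothetical shim; the defaults mirror the metric=/style= enums from the tool table, but the shim itself is illustrative, not part of the package:

```python
# Hypothetical migration shim: route deprecated tool names to their
# canonical replacements, merging in the enum argument each one implies.
LEGACY_MAP = {
    "get_cohort_distribution": ("cohort", {}),
    "run_scenario": ("cohort", {}),
    "detect_anomaly": ("analyze", {"metric": "anomaly"}),
    "get_exit_signal": ("explain", {"style": "position_guidance"}),
    "get_portfolio_health": ("portfolio", {}),
}

def migrate(legacy_name, **args):
    """Return the canonical tool name and merged call arguments."""
    canonical, defaults = LEGACY_MAP[legacy_name]
    return canonical, {**args, **defaults}

canonical, call_args = migrate("detect_anomaly", symbol="NVDA")
```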

How It Works

Chart Library indexes a large library of historical chart patterns and exposes them behind a conditional-distribution API. Every query returns sample sizes, percentiles, and calibrated forward-return bands — never a point forecast.

When your agent calls analyze_pattern("NVDA"), the server:

  1. Builds a representation of NVDA's current chart state
  2. Retrieves historically similar patterns
  3. Looks up what happened over the following 1, 3, 5, and 10 days
  4. Returns the distribution + a plain-English summary via Claude Haiku

The result: factual, citation-ready statements like "out of N similar historical patterns, the median 5-day return was X% (80% band [p10, p90])" that your agent can present without hallucinating or hedging.
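The four steps reduce to: embed, retrieve nearest analogs, look up their forward returns, summarize. A toy pipeline over two-dimensional feature vectors (the vectors and returns are invented; the real index spans 25M+ embeddings):

```python
# Toy sketch of the retrieval pipeline: k-NN over feature vectors,
# then a distribution over the neighbors' realized forward returns.
import math

def nearest(query, library, k=3):
    """k nearest patterns by Euclidean distance over toy vectors."""
    return sorted(library, key=lambda p: math.dist(query, p["vec"]))[:k]

library = [
    {"vec": [0.9, 0.1], "fwd_5d": -1.3},
    {"vec": [0.8, 0.2], "fwd_5d": -2.0},
    {"vec": [0.1, 0.9], "fwd_5d": 4.5},   # dissimilar pattern, excluded
    {"vec": [0.85, 0.15], "fwd_5d": 0.4},
]

analogs = nearest([0.88, 0.12], library, k=3)
fwd = sorted(a["fwd_5d"] for a in analogs)
summary = {"n": len(fwd), "median_5d": fwd[len(fwd) // 2]}
```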


API Key

  Tier      Calls/day   Price
  Sandbox         200   Free
  Builder       5,000   $29/mo
  Scale        50,000   $99/mo

Get your key at chartlibrary.io/developers.

export CHART_LIBRARY_API_KEY=cl_your_key



Chart Library provides historical pattern data for informational purposes. Not financial advice.

Project details

Download files

Source Distribution
  chartlibrary_mcp-3.4.0.tar.gz (20.0 kB)

Built Distribution
  chartlibrary_mcp-3.4.0-py3-none-any.whl (20.7 kB, Python 3)

File details: chartlibrary_mcp-3.4.0.tar.gz
  Size: 20.0 kB
  Tags: Source
  Uploaded using Trusted Publishing: No
  Uploaded via: twine/6.2.0 CPython/3.11.2

  Hashes:
    SHA256       f7623615042393c1adf1242c65c484e91e969265276624b6af6d9f3331ecfa3e
    MD5          881fab7c5a2c4e5c40e23a39a6fde153
    BLAKE2b-256  2e0d94be22ee15be098192897bca3f27996199fecbe8faa5ab8c8e471b132198

File details: chartlibrary_mcp-3.4.0-py3-none-any.whl
  Hashes:
    SHA256       ba04716a7280253200433d90d425241941631ce74ab254beb64c7c5bd69140c8
    MD5          7381e3cb35f4a569ee0f5320af5ed173
    BLAKE2b-256  c00f51810a120c0492a00475021f5b6d1de07107c7f20dbe64b2dc127e6da72c
