Code Prism - MCP server for intelligent codebase exploration with multi-LLM support and dependency analysis

Prism - AI-Powered Codebase Explorer

Prism - Refracting code into structured dependencies

Give AI the bird's eye view it's missing. Prism is an MCP server that helps AI assistants like Claude Code understand codebases from the top down, not the bottom up.

The Problem: Tunnel Vision

When AI explores code, it sees files in isolation. It reads one file, then another, then another - like looking at a city through a drinking straw. Each file loads into context separately, with no map showing how they connect.

Import statements are buried in the code as text. The AI has to mentally parse "from auth import login" and guess where auth.py is, what it does, and how the pieces fit together.

This leads to:

  • Fragmented understanding - Knows individual components but not the system
  • Missed context - Doesn't see what calls what, or why something exists
  • Wrong assumptions - Guesses at architecture from code alone instead of seeing the actual structure
  • Inefficient exploration - Reads the wrong files, misses the important ones
  • Sequential bottleneck - Can only ask one question at a time, losing the bigger picture

The Solution: Hierarchical Understanding

Prism gives AI a lens that zooms out by building a dependency graph of your entire codebase. It analyzes import statements to create two indexes:

  • Forward index: what each file imports (dependencies)
  • Reverse index: what files import it (dependents)
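
The two indexes can be sketched with Python's standard `ast` module. This is a simplified illustration, not Prism's actual implementation: the real tool also resolves relative imports and maps module names back to file paths.

```python
import ast
from collections import defaultdict
from pathlib import Path

def build_indexes(root: str) -> tuple[dict, dict]:
    """Parse import statements under `root` to build a forward index
    (file -> modules it imports) and a reverse index
    (module -> files that import it). Illustrative sketch only."""
    forward = defaultdict(set)
    reverse = defaultdict(set)
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                forward[str(path)].add(name)
                reverse[name].add(str(path))
    return dict(forward), dict(reverse)
```

Given a file containing `from auth import login`, the forward index records that the file imports `auth`, and the reverse index records the file as a dependent of `auth`.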

This graph is built once and cached, making all lookups instant. Now AI can:

  1. See the big picture - Load entire subsystems (file + all dependencies) into context at once
  2. Follow the flow - Trace execution paths through the actual dependency graph, not guesswork
  3. Think hierarchically - Start with architecture maps, then drill down with full context
  4. Navigate intelligently - Know which files matter because it sees the connections

Instead of reading files at random, AI gets structural knowledge first, then specifics. It's not searching for code; it's exploring a mapped system.

Macro → Micro Exploration

Prism enables a natural exploration flow by creating snapshots - bundles of related code with their dependency maps:

Macro Level (Bird's Eye View)

  • "Show me the overall architecture"
  • "What are the main components and how do they interact?"
  • "Where does authentication fit in the system?"

Prism loads a high-level snapshot with entry points + system overview into AI context

Middle Level (Connections)

  • "What files handle user authentication?"
  • "How does data flow from API to database?"
  • "Which components depend on this module?"

Prism creates a focused snapshot: target file + all parents (who imports it) + all children (what it imports)

Micro Level (Implementation)

  • "Show me the actual login function"
  • "What's the specific error handling logic?"
  • "Read the complete source for these 3 files"

AI already has full context from the snapshot and can ask multiple detailed questions in parallel

At each level, the dependency graph tells AI exactly what to look at next for deeper understanding.
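
The middle-level snapshot described above can be sketched in a few lines. Assuming a forward index mapping each file to the files it imports and a reverse index mapping each file to the files that import it (a hypothetical shape; Prism's internal representation may differ):

```python
def build_snapshot(target: str, forward: dict, reverse: dict) -> set:
    """Assemble a focused snapshot: the target file, everything it
    imports (children), and everything that imports it (parents)."""
    children = forward.get(target, set())
    parents = reverse.get(target, set())
    return {target} | set(children) | set(parents)
```

The resulting file set, plus the dependency edges between its members, is what gets loaded into AI context in one shot.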

Quick Install for Claude Code

Step 1: Get an API Key (30 seconds)

Pick a provider and grab an API key:

| Provider | Speed | Quality | Cost | Sign Up |
|-----------|-----------|-----------|----------------------|------------------------|
| Groq | Fastest | Good | Free (1M tokens/day) | console.groq.com |
| Cerebras | Very Fast | Good | Free (1M tokens/day) | inference.cerebras.ai |
| DeepSeek | Fast | Excellent | $0.14/1M tokens | platform.deepseek.com |
| Anthropic | Medium | Best | $3/1M tokens | console.anthropic.com |
| OpenAI | Medium | Excellent | $2.50/1M tokens | platform.openai.com |

Quick start with Groq (free):

export GROQ_API_KEY="gsk-..."

Step 2: Install Prism

Add to ~/.claude/config.json:

{
  "mcpServers": {
    "prism": {
      "command": "uvx",
      "args": ["code-prism"],
      "env": {
        "GROQ_API_KEY": "gsk-..."
      }
    }
  }
}

Or use Claude's built-in command:

claude mcp add prism --env GROQ_API_KEY=gsk-... -- uvx code-prism

Done! Prism auto-detects your provider (free providers first) and works across all your projects.

How It Works

Prism transforms isolated file reading into structural exploration through dependency-aware snapshots:

🎯 Structural Understanding

The dependency graph shows AI how your codebase is organized:

  • What are the entry points (where execution begins)?
  • Which files import this module? (via reverse index)
  • What does this file depend on? (via forward index)
  • How does code flow from user request to response? (traced through the graph)
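
Entry points fall out of the reverse index naturally: a file that imports others but is imported by nothing is a likely starting point for execution. A minimal sketch, assuming both indexes are keyed by file path:

```python
def find_entry_points(forward: dict[str, set], reverse: dict[str, set]) -> list[str]:
    """Files that appear in the graph but that no other file imports --
    a common heuristic for where execution begins."""
    return sorted(f for f in forward if not reverse.get(f))
```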

Traditional AI: reads files one by one, guesses at connections.
Prism AI: sees the dependency map, knows the actual architecture.

🧠 AI-Powered Analysis with Full Context

When you query a file, Prism bundles it with all related code into one snapshot:

  • Target file + all parents + all children + dependency relationships
  • Entire bundle loaded into AI context in one shot
  • AI can now ask 5-10 questions in parallel about the same context

Ask high-level questions and get architectural answers:

  • "How does authentication work in this system?"
  • "Where is caching implemented and why?"
  • "What's the error handling strategy?"
  • "How do these components communicate?"

The AI traces through actual code flow in the snapshot, not guesses.
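
The parallel-questioning idea can be sketched with `asyncio`. The names below (`ask_llm`, `query_snapshot_batch`) are illustrative placeholders, not Prism's actual API:

```python
import asyncio

async def ask_llm(question: str, snapshot: str) -> str:
    # Stand-in for a real LLM call; the snapshot is shared context.
    await asyncio.sleep(0)
    return f"[{snapshot}] answer to: {question}"

async def query_snapshot_batch(snapshot: str, questions: list[str]) -> list[str]:
    """Fan several questions out concurrently over one shared snapshot,
    rather than re-loading context for each question."""
    return list(await asyncio.gather(*(ask_llm(q, snapshot) for q in questions)))
```

Because every question shares the same snapshot, the context-loading cost is paid once instead of per question.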

📚 Smart File Recommendations

Instead of guessing which files to read, AI gets guidance from the graph:

  • See code snippets in context of their dependencies
  • Get ranked list of files that answer your question
  • Choose to read full source only for what matters
  • Suggestions based on actual imports, not keyword matching

🗺️ Multiple Exploration Modes

Different tools for different levels of understanding:

  • Query the structure - Instant dependency lookups (no AI needed)
  • Ask questions - Load snapshot + ask in parallel (bulk context)
  • Read strategically - Only dive deep where the graph says it matters

Example Workflows

Understanding a New Codebase

You: "I need to understand how authentication works"
Claude uses Prism to:
1. Map out auth-related files and their relationships
2. Explain the high-level authentication flow
3. Identify the 3-4 key files you should read
4. Show you those files with full context

Debugging a Complex Issue

You: "Users are getting intermittent 500 errors on checkout"
Claude uses Prism to:
1. Trace the checkout flow from API to database
2. Identify error handling at each layer
3. Find where exceptions might go unhandled
4. Point you to the specific problematic code

Planning a Feature

You: "I want to add rate limiting"
Claude uses Prism to:
1. Show where API requests are handled
2. Identify existing middleware patterns
3. Find similar features (like auth) for reference
4. Suggest where rate limiting should be inserted

What Makes Prism Different

Traditional code search: Find where "login" appears → read all those files → try to piece together understanding

Prism approach: Query authentication system → get snapshot (auth.py + all dependencies + graph) → ask 5 questions in parallel → understand the architecture

It's the difference between:

  • Bottom-up (random access, tunnel vision, local understanding)

    • Read file A, then file B, then file C sequentially
    • AI guesses how they connect based on code alone
    • Context window fills with isolated code snippets
  • Top-down (hierarchical, bird's eye view, global understanding)

    • Load snapshot: A + B + C + dependency map in one context
    • AI sees explicit connections via the graph
    • Ask multiple architectural questions in parallel
    • Dependency graph turns implicit imports into explicit structural knowledge

LLM Provider Configuration

All Supported Providers:

| Provider | Env Variable | Default Model | Notes |
|-------------|----------------------|----------------------------------------------------|-----------------|
| Groq | GROQ_API_KEY | llama-3.3-70b-versatile | Free, fastest |
| Cerebras | CEREBRAS_API_KEY | llama-3.3-70b | Free |
| SambaNova | SAMBANOVA_API_KEY | Meta-Llama-3.1-70B-Instruct | Free tier |
| DeepSeek | DEEPSEEK_API_KEY | deepseek-chat | Cheap, excellent |
| Anthropic | ANTHROPIC_API_KEY | claude-sonnet-4-20250514 | Best quality |
| OpenAI | OPENAI_API_KEY | gpt-4o | Excellent |
| Google | GEMINI_API_KEY | gemini-2.0-flash-exp | Fast |
| xAI | XAI_API_KEY | grok-2-latest | Large context |
| Mistral | MISTRAL_API_KEY | mistral-large-latest | EU-based |
| Cohere | COHERE_API_KEY | command-r-plus | Enterprise |
| Perplexity | PERPLEXITYAI_API_KEY | llama-3.1-sonar-large-128k-online | Online search |
| Together AI | TOGETHERAI_API_KEY | meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo | Many models |
| Fireworks | FIREWORKS_AI_API_KEY | accounts/fireworks/models/llama-v3p1-70b-instruct | Fast inference |
| OpenRouter | OPENROUTER_API_KEY | Any model | Meta-aggregator |
| Replicate | REPLICATE_API_KEY | Various | Community models |

Cloud Providers:

  • Azure OpenAI: AZURE_API_KEY, AZURE_API_BASE, AZURE_API_VERSION
  • AWS Bedrock: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION_NAME
  • Google Vertex AI: GOOGLE_APPLICATION_CREDENTIALS, VERTEXAI_PROJECT, VERTEXAI_LOCATION

Custom Configuration:

export PRISM_LLM_PROVIDER="groq"           # Override auto-detection
export PRISM_LLM_MODEL="llama-3.3-70b"     # Use specific model
export PRISM_PROJECT_ROOT="/path/to/code"  # Override project path

Auto-Detection: Just set an API key - Prism tries free providers first, then paid. See LLM_SETUP.md for detailed setup.

Multi-Project Support

The MCP server is installed globally but works with multiple projects:

  • Each Claude Code session runs in its working directory (e.g., your project folder)
  • Prism uses the current working directory as the project root
  • Config and cache are stored in {working_directory}/.zdeps_cache/
  • Multiple agents can run simultaneously, each analyzing their own project
  • No conflicts between different projects

Example: If you have Claude Code open in /Users/you/project-a and /Users/you/project-b, each gets:

  • Separate config: /Users/you/project-a/.zdeps_cache/config.json and /Users/you/project-b/.zdeps_cache/config.json
  • Separate cache: /Users/you/project-a/.zdeps_cache/semantic_cache.json and /Users/you/project-b/.zdeps_cache/semantic_cache.json
  • One shared MCP server handling both

Advanced: You can manually override the project root with PRISM_PROJECT_ROOT environment variable if needed, but this is not necessary for Claude Code users.
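
The root-resolution rule above is small enough to sketch directly. A minimal illustration of the documented behavior (current working directory, overridable by `PRISM_PROJECT_ROOT`, cache in `.zdeps_cache/`):

```python
import os
from pathlib import Path

def resolve_cache_dir() -> Path:
    """Project root defaults to the current working directory unless
    PRISM_PROJECT_ROOT overrides it; per-project state lives in
    {project_root}/.zdeps_cache/."""
    root = os.environ.get("PRISM_PROJECT_ROOT") or os.getcwd()
    return Path(root) / ".zdeps_cache"
```

Because the cache path is derived from each session's working directory, two agents in different project folders never share state.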

Tools Available to AI

Prism exposes these capabilities:

  • query_codebase - Explore any part of the codebase with AI understanding
  • query_snapshot_batch - Ask multiple related questions at different levels
  • preview_snapshot - Check scope before exploring (how many files, tokens, etc.)
  • get_dependency_info - Quick lookup of structural relationships
  • list_entry_points - See where execution begins in your project

Features

🔍 Hierarchical Exploration

Understand systems from top to bottom:

  • Start with architecture and patterns
  • Drill down through layers
  • See connections between components
  • Follow execution flows

🤖 Intelligent Questioning

Ask conceptual questions, get architectural answers:

  • No need to know file names
  • Focus on what/how/why, not where
  • Get explanations with supporting code
  • Understand patterns across the codebase

⚡ Exploration Efficiency

Don't read everything, read what matters:

  • AI learns which files are important
  • Skip boilerplate and focus on logic
  • Get recommendations at each level
  • Build understanding incrementally

📁 Context-Aware File Reading

When you do read files, understand why they matter:

  • See how files fit in the bigger picture
  • Understand what calls them and what they call
  • Know their role in the architecture
  • Read with full context, not in isolation

🌐 Flexible LLM Support

Works with any LLM provider you choose:

  • Free options: Groq, Cerebras, SambaNova (no credit card!)
  • Paid options: Claude, GPT-4o, Gemini, Grok, and 10+ more
  • Auto-detection: Just set an API key, Prism handles the rest
  • Local support: Run Ollama for 100% private, offline analysis

Supported Languages

Currently optimized for Python projects. The structural analysis works for any language, but AI-powered understanding is best with Python.
