
Model Context Protocol (MCP) server for Apache Spark History Server with job comparison and analytics

Project description

MCP Server for Apache Spark History Server

CI Python 3.12+ MCP License

🤖 Connect AI agents to Apache Spark History Server for intelligent job analysis and performance monitoring

Transform your Spark infrastructure monitoring with AI! This Model Context Protocol (MCP) server enables AI agents to analyze job performance, identify bottlenecks, and provide intelligent insights from your Spark History Server data.

🎯 What is This?

Spark History Server MCP bridges AI agents with your existing Apache Spark infrastructure, enabling:

  • ๐Ÿ” Query job details through natural language
  • ๐Ÿ“Š Analyze performance metrics across applications
  • ๐Ÿ”„ Compare multiple jobs to identify regressions
  • ๐Ÿšจ Investigate failures with detailed error analysis
  • ๐Ÿ“ˆ Generate insights from historical execution data

📺 See it in action:

Watch the demo video

๐Ÿ—๏ธ Architecture

graph TB
    A[🤖 AI Agent/LLM] --> F[📡 MCP Client]
    B[🦙 LlamaIndex Agent] --> F
    C[🌐 LangGraph] --> F
    D[🖥️ Claude Desktop] --> F
    E[🛠️ Amazon Q CLI] --> F

    F --> G[⚡ Spark History MCP Server]

    G --> H[🔥 Prod Spark History Server]
    G --> I[🔥 Staging Spark History Server]
    G --> J[🔥 Dev Spark History Server]

    H --> K[📄 Prod Event Logs]
    I --> L[📄 Staging Event Logs]
    J --> M[📄 Dev Event Logs]

🔗 Components:

  • 🔥 Spark History Server: Your existing infrastructure serving Spark event data
  • ⚡ MCP Server: This project - provides MCP tools for querying Spark data
  • 🤖 AI Agents: LangChain, custom agents, or any MCP-compatible client

⚡ Quick Start

📋 Prerequisites

  • 🔥 Existing Spark History Server (running and accessible)
  • 🐍 Python 3.12+
  • ⚡ uv package manager

🚀 Setup & Testing

git clone https://github.com/DeepDiagnostix-AI/mcp-apache-spark-history-server.git
cd mcp-apache-spark-history-server

# Install Task (if not already installed)
brew install go-task  # macOS, see https://taskfile.dev/installation/ for others

# Setup and start testing
task start-spark-bg            # Start Spark History Server with sample data (default Spark 3.5.5)
# Or specify a different Spark version:
# task start-spark-bg spark_version=3.5.2
task start-mcp-bg             # Start MCP Server

# Optional: Opens MCP Inspector on http://localhost:6274 for interactive testing
# Requires Node.js: 22.7.5+ (Check https://github.com/modelcontextprotocol/inspector for latest requirements)
task start-inspector-bg       # Start MCP Inspector

# When done, run `task stop-all`

If you just want to run the MCP server without cloning the repository:

# Run with uv without installing the module
uvx --from mcp-apache-spark-history-server spark-mcp

# OR run with pip and python. Use of venv is highly encouraged.
python3 -m venv spark-mcp && source spark-mcp/bin/activate
pip install mcp-apache-spark-history-server
python3 -m spark_history_mcp.core.main
# Deactivate venv
deactivate

📊 Sample Data

The repository includes real Spark event logs for testing:

  • spark-bcec39f6201b42b9925124595baad260 - ✅ Successful ETL job
  • spark-110be3a8424d4a2789cb88134418217b - 🔄 Data processing job
  • spark-cc4d115f011443d787f03a71a476a745 - 📈 Multi-stage analytics job

See TESTING.md for how to use them.

โš™๏ธ Server Configuration

Edit config.yaml for your Spark History Server:

servers:
  local:
    default: true
    url: "http://your-spark-history-server:18080"
    auth:  # optional
      username: "user"
      password: "pass"
mcp:
  transports:
    - streamable-http # streamable-http or stdio.
  port: "18888"
  debug: true

📸 Screenshots

๐Ÿ” Get Spark Application

Get Application

⚡ Job Performance Comparison

Job Comparison

๐Ÿ› ๏ธ Available Tools

Note: These tools are subject to change as we scale and improve the performance of the MCP server.

The MCP server provides 17 specialized tools organized by analysis patterns. LLMs can intelligently select and combine these tools based on user queries:

📊 Application Information

Basic application metadata and overview

🔧 Tool 📝 Description
get_application 📊 Get detailed information about a specific Spark application including status, resource usage, duration, and attempt details

🔗 Job Analysis

Job-level performance analysis and identification

🔧 Tool 📝 Description
list_jobs 🔗 Get a list of all jobs for a Spark application with optional status filtering
list_slowest_jobs ⏱️ Get the N slowest jobs for a Spark application (excludes running jobs by default)

⚡ Stage Analysis

Stage-level performance deep dive and task metrics

🔧 Tool 📝 Description
list_stages ⚡ Get a list of all stages for a Spark application with optional status filtering and summaries
list_slowest_stages 🐌 Get the N slowest stages for a Spark application (excludes running stages by default)
get_stage 🎯 Get information about a specific stage with optional attempt ID and summary metrics
get_stage_task_summary 📊 Get statistical distributions of task metrics for a specific stage (execution times, memory usage, I/O metrics)

๐Ÿ–ฅ๏ธ Executor & Resource Analysis

Resource utilization, executor performance, and allocation tracking

๐Ÿ”ง Tool ๐Ÿ“ Description
list_executors ๐Ÿ–ฅ๏ธ Get executor information with optional inactive executor inclusion
get_executor ๐Ÿ” Get information about a specific executor including resource allocation, task statistics, and performance metrics
get_executor_summary ๐Ÿ“ˆ Aggregates metrics across all executors (memory usage, disk usage, task counts, performance metrics)
get_resource_usage_timeline ๐Ÿ“… Get chronological view of resource allocation and usage patterns including executor additions/removals

โš™๏ธ Configuration & Environment

Spark configuration, environment variables, and runtime settings

๐Ÿ”ง Tool ๐Ÿ“ Description
get_environment โš™๏ธ Get comprehensive Spark runtime configuration including JVM info, Spark properties, system properties, and classpath

🔎 SQL & Query Analysis

SQL performance analysis and execution plan comparison

🔧 Tool 📝 Description
list_slowest_sql_queries 🐌 Get the top N slowest SQL queries for an application with detailed execution metrics
compare_sql_execution_plans 🔍 Compare SQL execution plans between two Spark jobs, analyzing logical/physical plans and execution metrics

🚨 Performance & Bottleneck Analysis

Intelligent bottleneck identification and performance recommendations

🔧 Tool 📝 Description
get_job_bottlenecks 🚨 Identify performance bottlenecks by analyzing stages, tasks, and executors with actionable recommendations

🔄 Comparative Analysis

Cross-application comparison for regression detection and optimization

🔧 Tool 📝 Description
compare_job_environments ⚙️ Compare Spark environment configurations between two jobs to identify differences in properties and settings
compare_job_performance 📈 Compare performance metrics between two Spark jobs including execution times, resource usage, and task distribution

🤖 How LLMs Use These Tools

Query Pattern Examples:

  • "Why is my job slow?" → get_job_bottlenecks + list_slowest_stages + get_executor_summary
  • "Compare today vs yesterday" → compare_job_performance + compare_job_environments
  • "What's wrong with stage 5?" → get_stage + get_stage_task_summary
  • "Show me resource usage over time" → get_resource_usage_timeline + get_executor_summary
  • "Find my slowest SQL queries" → list_slowest_sql_queries + compare_sql_execution_plans

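Tool selection like the patterns above can be sketched as a small keyword router. This is a hypothetical helper for illustration only (the tool names are the real ones listed above, but the routing logic lives in the LLM, not in this server):

```python
# Hypothetical sketch: map a user query to a suggested combination of
# MCP tools, mirroring the query patterns listed above.
PATTERNS = [
    (("slow", "why"), ["get_job_bottlenecks", "list_slowest_stages", "get_executor_summary"]),
    (("compare",), ["compare_job_performance", "compare_job_environments"]),
    (("stage",), ["get_stage", "get_stage_task_summary"]),
    (("resource", "usage"), ["get_resource_usage_timeline", "get_executor_summary"]),
    (("sql",), ["list_slowest_sql_queries", "compare_sql_execution_plans"]),
]

def suggest_tools(query: str) -> list[str]:
    """Return the first tool combination whose keywords all appear in the query."""
    q = query.lower()
    for keywords, tools in PATTERNS:
        if all(k in q for k in keywords):
            return tools
    return ["get_application"]  # fall back to basic application metadata
```

In practice the LLM performs this matching itself from the tool descriptions; the sketch just makes the query-to-tools mapping concrete.
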
📔 AWS Integration Guides

If you are an existing AWS user looking to analyze your Spark applications, we provide detailed setup guides with step-by-step instructions for connecting the Spark History Server MCP to your AWS services.

🚀 Kubernetes Deployment

Deploy using Kubernetes with Helm:

⚠️ Work in Progress: We are still testing and will soon publish the container image and Helm chart to GitHub for easy deployment.

# 📦 Deploy with Helm
helm install spark-history-mcp ./deploy/kubernetes/helm/spark-history-mcp/

# 🎯 Production configuration
helm install spark-history-mcp ./deploy/kubernetes/helm/spark-history-mcp/ \
  --set replicaCount=3 \
  --set autoscaling.enabled=true \
  --set monitoring.enabled=true

📚 See deploy/kubernetes/helm/ for complete deployment manifests and configuration options.

๐ŸŒ Multi-Spark History Server Setup

Setup multiple Spark history servers in the config.yaml and choose which server you want the LLM to interact with for each query.

servers:
  production:
    default: true
    url: "http://prod-spark-history:18080"
    auth:
      username: "user"
      password: "pass"
  staging:
    url: "http://staging-spark-history:18080"

๐Ÿ’ User Query: "Can you get application <app_id> using production server?"

๐Ÿค– AI Tool Request:

{
  "app_id": "<app_id>",
  "server": "production"
}

๐Ÿค– AI Tool Response:

{
  "id": "<app_id>",
  "name": "app_name",
  "coresGranted": null,
  "maxCores": null,
  "coresPerExecutor": null,
  "memoryPerExecutorMB": null,
  "attempts": [
    {
      "attemptId": null,
      "startTime": "2023-09-06T04:44:37.006000Z",
      "endTime": "2023-09-06T04:45:40.431000Z",
      "lastUpdated": "2023-09-06T04:45:42Z",
      "duration": 63425,
      "sparkUser": "spark",
      "appSparkVersion": "3.3.0",
      "completed": true
    }
  ]
}
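
In this response, duration is the attempt's wall-clock time in milliseconds; it can be cross-checked against the timestamps using only the standard library (a minimal sketch):

```python
from datetime import datetime

# The sample attempt from the response above.
attempt = {
    "startTime": "2023-09-06T04:44:37.006000Z",
    "endTime": "2023-09-06T04:45:40.431000Z",
    "duration": 63425,
}

def duration_ms(a: dict) -> int:
    """Recompute the duration (ms) from the attempt's start/end timestamps."""
    start = datetime.fromisoformat(a["startTime"].replace("Z", "+00:00"))
    end = datetime.fromisoformat(a["endTime"].replace("Z", "+00:00"))
    return round((end - start).total_seconds() * 1000)

assert duration_ms(attempt) == attempt["duration"]  # 63425 ms
```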

๐Ÿ” Environment Variables

SHS_MCP_PORT - Port for MCP server (default: 18888)
SHS_MCP_DEBUG - Enable debug mode (default: false)
SHS_MCP_ADDRESS - Address for MCP server (default: localhost)
SHS_MCP_TRANSPORT - MCP transport mode (default: streamable-http)
SHS_SERVERS_*_URL - URL for a specific server
SHS_SERVERS_*_AUTH_USERNAME - Username for a specific server
SHS_SERVERS_*_AUTH_PASSWORD - Password for a specific server
SHS_SERVERS_*_AUTH_TOKEN - Token for a specific server
SHS_SERVERS_*_VERIFY_SSL - Whether to verify SSL for a specific server (true/false)
SHS_SERVERS_*_TIMEOUT - HTTP request timeout in seconds for a specific server (default: 30)
SHS_SERVERS_*_EMR_CLUSTER_ARN - EMR cluster ARN for a specific server
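
The SHS_SERVERS_*_... variables follow a SHS_SERVERS_<NAME>_<SETTING> naming scheme. A sketch of how such variables could be grouped per server (a hypothetical helper for illustration; the server's own parsing and precedence rules may differ):

```python
def servers_from_env(environ: dict) -> dict:
    """Group SHS_SERVERS_<NAME>_<SETTING> variables by server name.

    Hypothetical helper illustrating the naming scheme above; assumes
    server names contain no underscores.
    """
    prefix = "SHS_SERVERS_"
    servers: dict[str, dict] = {}
    for key, value in environ.items():
        if not key.startswith(prefix):
            continue
        name, _, setting = key[len(prefix):].partition("_")
        servers.setdefault(name.lower(), {})[setting.lower()] = value
    return servers

# Example: overrides for a server named "production".
env = {
    "SHS_SERVERS_PRODUCTION_URL": "http://prod-spark-history:18080",
    "SHS_SERVERS_PRODUCTION_AUTH_USERNAME": "user",
    "SHS_MCP_PORT": "18888",  # not a per-server setting, ignored here
}
```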

🤖 AI Agent Integration

Quick Start Options

Integration     Transport  Best For
Local Testing   HTTP       Development, testing tools
Claude Desktop  STDIO      Interactive analysis
Amazon Q CLI    STDIO      Command-line automation
Kiro            HTTP       IDE integration, code-centric analysis
LangGraph       HTTP       Multi-agent workflows
Strands Agents  HTTP       Multi-agent workflows

🎯 Example Use Cases

🔍 Performance Investigation

🤖 AI Query: "Why is my ETL job running slower than usual?"

📊 MCP Actions:
✅ Analyze application metrics
✅ Compare with historical performance
✅ Identify bottleneck stages
✅ Generate optimization recommendations

🚨 Failure Analysis

🤖 AI Query: "What caused job 42 to fail?"

🔍 MCP Actions:
✅ Examine failed tasks and error messages
✅ Review executor logs and resource usage
✅ Identify root cause and suggest fixes

📈 Comparative Analysis

🤖 AI Query: "Compare today's batch job with yesterday's run"

📊 MCP Actions:
✅ Compare execution times and resource usage
✅ Identify performance deltas
✅ Highlight configuration differences

๐Ÿค Contributing

Check CONTRIBUTING.md for full guidelines on contributions

๐Ÿ“„ License

Apache License 2.0 - see LICENSE file for details.

๐Ÿ“ Trademark Notice

This project is built for use with Apache Sparkโ„ข History Server. Not affiliated with or endorsed by the Apache Software Foundation.


๐Ÿ”ฅ Connect your Spark infrastructure to AI agents

๐Ÿš€ Get Started | ๐Ÿ› ๏ธ View Tools | ๐Ÿงช Test Now | ๐Ÿค Contribute

Built by the community, for the community ๐Ÿ’™



Download files

Download the file for your platform.

Source Distribution

mcp_apache_spark_history_server-0.1.1.tar.gz (33.1 kB)

Built Distribution

mcp_apache_spark_history_server-0.1.1-py3-none-any.whl

File details

Details for the file mcp_apache_spark_history_server-0.1.1.tar.gz.

File metadata

File hashes

Hashes for mcp_apache_spark_history_server-0.1.1.tar.gz
Algorithm Hash digest
SHA256 b8d1b39e54908b2a582332e1a702389d36d52f1ba8eb8ba0e7b209a7e3bded44
MD5 4a84fe73c87300cfd8425db471d17772
BLAKE2b-256 c02f1ed2d57c3021135f75f0520a30ffcce2c04745153d43fff401666d11822f


File details

Details for the file mcp_apache_spark_history_server-0.1.1-py3-none-any.whl.

File metadata

File hashes

Hashes for mcp_apache_spark_history_server-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 a49ed0fd9bf8362b1c3471c15ab5c40b2e965749f4865e193eb09963436e7bab
MD5 baa6dce8ba6fad186566cf5b73528ac5
BLAKE2b-256 2e1dbe2873f97e0495891b8d575c478ecfa318f6dc64931a88894c3cb7cc2697

