# Kubeflow Spark History MCP Server

MCP Server for Apache Spark History Server: the bridge between agentic AI and Apache Spark.

Connect AI agents to Apache Spark History Server for intelligent job analysis and performance monitoring.

Transform your Spark infrastructure monitoring with AI! This Model Context Protocol (MCP) server enables AI agents to analyze job performance, identify bottlenecks, and provide intelligent insights from your Spark History Server data.
## What is This?

Spark History Server MCP bridges AI agents with your existing Apache Spark infrastructure, enabling you to:

- Query job details through natural language
- Analyze performance metrics across applications
- Compare multiple jobs to identify regressions
- Investigate failures with detailed error analysis
- Generate insights from historical execution data
```mermaid
graph TB
    A[AI Agent/LLM] --> F[MCP Client]
    B[LlamaIndex Agent] --> F
    C[LangGraph] --> F
    D[Claude Desktop] --> F
    E[Amazon Q CLI] --> F
    F --> G[Spark History MCP Server]
    G --> H[Prod Spark History Server]
    G --> I[Staging Spark History Server]
    G --> J[Dev Spark History Server]
    H --> K[Prod Event Logs]
    I --> L[Staging Event Logs]
    J --> M[Dev Event Logs]
```
### Components

- **Spark History Server**: your existing infrastructure serving Spark event data
- **MCP Server**: this project, which provides MCP tools for querying Spark data
- **AI Agents**: LangChain, custom agents, or any MCP-compatible client
## Quick Start

```shell
git clone https://github.com/kubeflow/mcp-apache-spark-history-server.git
cd mcp-apache-spark-history-server

# Install Task (if not already installed)
brew install go-task   # macOS; see https://taskfile.dev/installation/ for other platforms

# Setup and start testing
task start-spark-bg    # Start Spark History Server with sample data (default Spark 3.5.5)
# Or specify a different Spark version:
# task start-spark-bg spark_version=3.5.2

task start-mcp-bg      # Start MCP Server

# Optional: opens MCP Inspector on http://localhost:6274 for interactive testing.
# Requires Node.js 22.7.5+ (check https://github.com/modelcontextprotocol/inspector for latest requirements)
task start-inspector-bg

# When done, run `task stop-all`
```
If you just want to run the MCP server without cloning the repository:

```shell
# Run with uv, without installing the module
uvx --from mcp-apache-spark-history-server spark-mcp

# OR run with pip and python. Use of a venv is highly encouraged.
python3 -m venv spark-mcp && source spark-mcp/bin/activate
pip install mcp-apache-spark-history-server
python3 -m spark_history_mcp.core.main

# Deactivate the venv when finished
deactivate
```
## Server Configuration

Edit `config.yaml` for your Spark History Server.

Config file options:

- Command line: `--config /path/to/config.yaml` or `-c /path/to/config.yaml`
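For reference, a minimal `config.yaml` sketch is shown below. The field names mirror the `SHS_SERVERS_*` environment variables documented later in this README; the server names and URLs are illustrative, so check the repository's sample config for the authoritative schema:

```yaml
servers:
  local:
    default: true            # used when no server name is given
    url: "http://localhost:18080"
  production:
    url: "https://spark-history.prod.example.com:18080"
    auth:
      username: "admin"      # or use a token instead
      password: "changeme"
    verify_ssl: true
    timeout: 30              # HTTP request timeout in seconds
```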
## Available Tools

> **Note:** These tools are subject to change as we scale and improve the performance of the MCP server.

The MCP server provides 18 specialized tools organized by analysis pattern. LLMs can intelligently select and combine these tools based on user queries.
### Application Information

Basic application metadata and overview.

| Tool | Description |
|------|-------------|
| `list_applications` | Get a list of all applications available on the Spark History Server, with optional filtering by status, date ranges, and limits |
| `get_application` | Get detailed information about a specific Spark application, including status, resource usage, duration, and attempt details |
### Job Analysis

Job-level performance analysis and identification.

| Tool | Description |
|------|-------------|
| `list_jobs` | Get a list of all jobs for a Spark application, with optional status filtering |
| `list_slowest_jobs` | Get the N slowest jobs for a Spark application (excludes running jobs by default) |
### Stage Analysis

Stage-level performance deep dives and task metrics.

| Tool | Description |
|------|-------------|
| `list_stages` | Get a list of all stages for a Spark application, with optional status filtering and summaries |
| `list_slowest_stages` | Get the N slowest stages for a Spark application (excludes running stages by default) |
| `get_stage` | Get information about a specific stage, with optional attempt ID and summary metrics |
| `get_stage_task_summary` | Get statistical distributions of task metrics for a specific stage (execution times, memory usage, I/O metrics) |
### Executor & Resource Analysis

Resource utilization, executor performance, and allocation tracking.

| Tool | Description |
|------|-------------|
| `list_executors` | Get executor information, with optional inclusion of inactive executors |
| `get_executor` | Get information about a specific executor, including resource allocation, task statistics, and performance metrics |
| `get_executor_summary` | Aggregate metrics across all executors (memory usage, disk usage, task counts, performance metrics) |
| `get_resource_usage_timeline` | Get a chronological view of resource allocation and usage patterns, including executor additions/removals |
### Configuration & Environment

Spark configuration, environment variables, and runtime settings.

| Tool | Description |
|------|-------------|
| `get_environment` | Get comprehensive Spark runtime configuration, including JVM info, Spark properties, system properties, and classpath |
### SQL & Query Analysis

SQL performance analysis and execution plan comparison.

| Tool | Description |
|------|-------------|
| `list_slowest_sql_queries` | Get the top N slowest SQL queries for an application, with detailed execution metrics and optional plan descriptions |
| `compare_sql_execution_plans` | Compare SQL execution plans between two Spark jobs, analyzing logical/physical plans and execution metrics |
### Performance & Bottleneck Analysis

Intelligent bottleneck identification and performance recommendations.

| Tool | Description |
|------|-------------|
| `get_job_bottlenecks` | Identify performance bottlenecks by analyzing stages, tasks, and executors, with actionable recommendations |
### Comparative Analysis

Cross-application comparison for regression detection and optimization.

| Tool | Description |
|------|-------------|
| `compare_job_environments` | Compare Spark environment configurations between two jobs to identify differences in properties and settings |
| `compare_job_performance` | Compare performance metrics between two Spark jobs, including execution times, resource usage, and task distribution |
## How LLMs Use These Tools

Query pattern examples:

- "Show me all applications between 12 AM and 1 AM on 2025-06-27" → `list_applications`
- "Why is my job slow?" → `get_job_bottlenecks` + `list_slowest_stages` + `get_executor_summary`
- "Compare today vs yesterday" → `compare_job_performance` + `compare_job_environments`
- "What's wrong with stage 5?" → `get_stage` + `get_stage_task_summary`
- "Show me resource usage over time" → `get_resource_usage_timeline` + `get_executor_summary`
- "Find my slowest SQL queries" → `list_slowest_sql_queries` + `compare_sql_execution_plans`
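In practice the LLM performs this tool selection itself from the MCP tool schemas, but the routing above can be sketched as a toy keyword matcher in Python (the `select_tools` helper and its keyword lists are illustrative only, not part of this project):

```python
import re

# Ordered (keywords -> candidate MCP tool names) pairs mirroring the
# query pattern examples above. First match wins.
QUERY_PATTERNS = [
    ({"slow", "bottleneck", "bottlenecks"},
     ["get_job_bottlenecks", "list_slowest_stages", "get_executor_summary"]),
    ({"compare", "vs", "regression"},
     ["compare_job_performance", "compare_job_environments"]),
    ({"stage"},
     ["get_stage", "get_stage_task_summary"]),
    ({"resource", "timeline"},
     ["get_resource_usage_timeline", "get_executor_summary"]),
    ({"sql", "query", "queries"},
     ["list_slowest_sql_queries", "compare_sql_execution_plans"]),
    ({"applications"},
     ["list_applications"]),
]

def select_tools(query: str) -> list[str]:
    """Return candidate MCP tool names for a natural-language query."""
    words = set(re.findall(r"[a-z0-9]+", query.lower()))
    for keywords, tools in QUERY_PATTERNS:
        if words & keywords:
            return tools
    return ["list_applications"]  # sensible default starting point
```

A real agent would instead pass the server's tool descriptions to the model and let it choose; this sketch only makes the mapping in the bullet list concrete.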
## AWS Integration Guides

If you are an existing AWS user looking to analyze your Spark applications, we provide detailed setup guides.
## Environment Variables

- `SHS_MCP_PORT` - Port for the MCP server (default: 18888)
- `SHS_MCP_DEBUG` - Enable debug mode (default: false)
- `SHS_MCP_ADDRESS` - Address for the MCP server (default: localhost)
- `SHS_MCP_TRANSPORT` - MCP transport mode (default: streamable-http)
- `SHS_SERVERS_*_URL` - URL for a specific server
- `SHS_SERVERS_*_AUTH_USERNAME` - Username for a specific server
- `SHS_SERVERS_*_AUTH_PASSWORD` - Password for a specific server
- `SHS_SERVERS_*_AUTH_TOKEN` - Token for a specific server
- `SHS_SERVERS_*_VERIFY_SSL` - Whether to verify SSL for a specific server (true/false)
- `SHS_SERVERS_*_TIMEOUT` - HTTP request timeout in seconds for a specific server (default: 30)
- `SHS_SERVERS_*_EMR_CLUSTER_ARN` - EMR cluster ARN for a specific server
- `SHS_SERVERS_*_INCLUDE_PLAN_DESCRIPTION` - Whether to include SQL execution plans by default for a specific server (true/false, default: false)
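As an illustration, these variables can be exported before starting the server to override the config file. The `PRODUCTION` segment is an assumption here; it is expected to correspond to a server entry name from your `config.yaml`:

```shell
# Illustrative values only; adjust to your own deployment.
export SHS_MCP_PORT=18888
export SHS_MCP_TRANSPORT=streamable-http

# Per-server overrides: the middle segment names the server entry.
export SHS_SERVERS_PRODUCTION_URL="https://spark-history.prod.example.com:18080"
export SHS_SERVERS_PRODUCTION_VERIFY_SSL=true
export SHS_SERVERS_PRODUCTION_TIMEOUT=60
```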