NOTE: We have fully implemented SchemaPin to help combat questionable copies of this project on GitHub and elsewhere. Be sure to validate that you are using releases from this repo; you can use SchemaPin to verify our tool schemas: https://mockloop.com/.well-known/schemapin.json
The world's first AI-native API testing platform powered by the Model Context Protocol (MCP). MockLoop MCP revolutionizes API testing with comprehensive AI-driven scenario generation, automated test execution, and intelligent analysis capabilities.
🚀 Revolutionary Capabilities: 5 AI Prompts • 15 Scenario Resources • 16 Testing Tools • 10 Context Tools • 4 Core Tools • Complete MCP Integration
📚 Documentation: https://docs.mockloop.com
📦 PyPI Package: https://pypi.org/project/mockloop-mcp/
🐙 GitHub Repository: https://github.com/mockloop/mockloop-mcp
MockLoop MCP represents a paradigm shift in API testing, introducing the world's first AI-native testing architecture that combines:
- 🤖 AI-Driven Test Generation: 5 specialized MCP prompts for intelligent scenario creation
- 📦 Community Scenario Packs: 15 curated testing resources with community architecture
- ⚡ Automated Test Execution: 30 comprehensive MCP tools for complete testing workflows (16 testing + 10 context + 4 core)
- 🔄 Stateful Testing: Advanced context management with GlobalContext and AgentContext
- 📊 Enterprise Compliance: Complete audit logging and regulatory compliance tracking
- 🏗️ Dual-Port Architecture: Eliminates /admin path conflicts with separate mocked API and admin ports
Enterprise-grade compliance and regulatory tracking
- Complete request/response audit trails
- Regulatory compliance monitoring
- Performance metrics and analytics
- Security event logging
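To make the audit-trail idea concrete, the sketch below shows the kind of record a request/response audit entry might carry. The field names here are hypothetical, chosen for illustration; MockLoop's actual log schema may differ.

```python
# Hypothetical shape of a single audit-log entry. Shown only to
# illustrate the kind of data a request/response audit trail captures;
# the real MockLoop schema may differ.
from datetime import datetime, timezone

def make_audit_entry(method: str, path: str, status: int, latency_ms: float) -> dict:
    """Build an illustrative audit record for one request/response pair."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "method": method,
        "path": path,
        "status_code": status,
        "latency_ms": latency_ms,
    }

entry = make_audit_entry("GET", "/users/42", 200, 12.5)
```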
Intelligent scenario generation powered by AI
- `analyze_openapi_for_testing` - Comprehensive API analysis for testing strategies
- `generate_scenario_config` - Dynamic test scenario configuration
- `optimize_scenario_for_load` - Load testing optimization
- `generate_error_scenarios` - Error condition simulation
- `generate_security_test_scenarios` - Security vulnerability testing
Community-driven testing scenarios with advanced architecture
- Load Testing Scenarios: High-volume traffic simulation
- Error Simulation Packs: Comprehensive error condition testing
- Security Testing Suites: Vulnerability assessment scenarios
- Performance Benchmarks: Standardized performance testing
- Integration Test Packs: Cross-service testing scenarios
- Community Architecture: Collaborative scenario sharing and validation
Complete automated test execution capabilities
- `validate_scenario_config` - Scenario validation and verification
- `deploy_scenario` - Automated scenario deployment
- `switch_scenario` - Dynamic scenario switching
- `list_active_scenarios` - Active scenario monitoring
- `execute_test_plan` - Comprehensive test plan execution
- `run_test_iteration` - Individual test iteration management
- `run_load_test` - Load testing execution
- `run_security_test` - Security testing automation
- `analyze_test_results` - Intelligent test result analysis
- `generate_test_report` - Comprehensive reporting
- `compare_test_runs` - Test run comparison and trends
- `get_performance_metrics` - Performance metrics collection
- `create_test_session` - Test session initialization
- `end_test_session` - Session cleanup and finalization
- `schedule_test_suite` - Automated test scheduling
- `monitor_test_progress` - Real-time progress monitoring
Advanced state management for complex testing workflows
- `create_test_session_context` - Test session state management
- `create_workflow_context` - Complex workflow orchestration
- `create_agent_context` - AI agent state management
- `get_context_data` - Context data retrieval
- `update_context_data` - Dynamic context updates
- `list_contexts_by_type` - Context discovery and listing
- `create_context_snapshot` - State snapshot creation
- `restore_context_snapshot` - State rollback capabilities
- `get_global_context_data` - Cross-session data sharing
- `update_global_context_data` - Global state management
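The snapshot/restore pair above is the key to stateful testing: capture a known-good context, mutate it during a test run, then roll back. The plain-Python sketch below illustrates the concept only; it is not MockLoop's implementation.

```python
# Conceptual sketch of the snapshot/restore idea behind
# create_context_snapshot / restore_context_snapshot.
# Illustrative only; MockLoop's real context store is internal.
import copy

class ContextStore:
    def __init__(self):
        self.data: dict = {}
        self._snapshots: dict[str, dict] = {}

    def snapshot(self, name: str) -> None:
        # Deep-copy so later mutations don't leak into the snapshot.
        self._snapshots[name] = copy.deepcopy(self.data)

    def restore(self, name: str) -> None:
        self.data = copy.deepcopy(self._snapshots[name])

ctx = ContextStore()
ctx.data["scenario"] = "baseline"
ctx.snapshot("before_load_test")
ctx.data["scenario"] = "high_load"
ctx.restore("before_load_test")
# ctx.data["scenario"] is back to "baseline"
```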
Get started with the world's most advanced AI-native testing platform:
# 1. Install MockLoop MCP
pip install mockloop-mcp
# 2. Verify installation
mockloop-mcp --version
# 3. Configure with your MCP client (Cline, Claude Desktop, etc.)
# See configuration examples below
- Python 3.10+
- Pip package manager
- Docker and Docker Compose (for containerized mock servers)
- An MCP-compatible client (Cline, Claude Desktop, etc.)
# Install the latest stable version
pip install mockloop-mcp
# Or install with optional dependencies
pip install mockloop-mcp[dev] # Development tools
pip install mockloop-mcp[docs] # Documentation tools
pip install mockloop-mcp[all] # All optional dependencies
# Verify installation
mockloop-mcp --version
# Clone the repository
git clone https://github.com/mockloop/mockloop-mcp.git
cd mockloop-mcp
# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install in development mode
pip install -e ".[dev]"
Add to your Cline MCP settings file:
{
"mcpServers": {
"MockLoopLocal": {
"autoApprove": [],
"disabled": false,
"timeout": 60,
"command": "mockloop-mcp",
"args": [],
"transportType": "stdio"
}
}
}
Add to your Claude Desktop configuration:
{
"mcpServers": {
"mockloop": {
"command": "mockloop-mcp",
"args": []
}
}
}
For virtual environment installations, use the full Python path:
{
"mcpServers": {
"MockLoopLocal": {
"command": "/path/to/your/venv/bin/python",
"args": ["-m", "mockloop_mcp"],
"transportType": "stdio"
}
}
}
Generate sophisticated FastAPI mock servers with dual-port architecture.
Parameters:
- `spec_url_or_path` (string, required): API specification URL or local file path
- `output_dir_name` (string, optional): Output directory name
- `auth_enabled` (boolean, optional): Enable authentication middleware (default: true)
- `webhooks_enabled` (boolean, optional): Enable webhook support (default: true)
- `admin_ui_enabled` (boolean, optional): Enable admin UI (default: true)
- `storage_enabled` (boolean, optional): Enable storage functionality (default: true)
Revolutionary Dual-Port Architecture:
- Mocked API Port: Serves your API endpoints (default: 8000)
- Admin UI Port: Separate admin interface (default: 8001)
- Conflict Resolution: Eliminates /admin path conflicts in OpenAPI specs
- Enhanced Security: Port-based access control and isolation
Query and analyze request logs with AI-powered insights.
Parameters:
- `server_url` (string, required): Mock server URL
- `limit` (integer, optional): Maximum logs to return (default: 100)
- `offset` (integer, optional): Pagination offset (default: 0)
- `method` (string, optional): Filter by HTTP method
- `path_pattern` (string, optional): Regex pattern for path filtering
- `time_from` (string, optional): Start time filter (ISO format)
- `time_to` (string, optional): End time filter (ISO format)
- `include_admin` (boolean, optional): Include admin requests (default: false)
- `analyze` (boolean, optional): Perform AI analysis (default: true)
AI-Powered Analysis:
- Performance metrics (P95/P99 response times)
- Error rate analysis and categorization
- Traffic pattern detection
- Automated debugging recommendations
- Session correlation and tracking
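The P95/P99 figures reported by the log analysis are standard latency percentiles. MockLoop computes them internally; this self-contained sketch only shows the underlying math, using a simple nearest-rank method.

```python
# Deriving P95/P99 response-time metrics from raw latency samples.
# Standalone illustration of the math; not MockLoop's internal code.
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: smallest value covering pct% of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [12, 15, 14, 16, 13, 18, 17, 15, 19, 20,
                11, 22, 21, 14, 16, 18, 13, 17, 200, 500]
p95 = percentile(latencies_ms, 95)  # 200: two slow outliers dominate the tail
p99 = percentile(latencies_ms, 99)  # 500: the single worst request
```

Note how the tail percentiles surface the two slow outliers that a plain average would hide, which is exactly why the analysis reports P95/P99 rather than the mean.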
Intelligent server discovery with dual-port architecture support.
Parameters:
- `ports` (array, optional): Ports to scan (default: common ports)
- `check_health` (boolean, optional): Perform health checks (default: true)
- `include_generated` (boolean, optional): Include generated mocks (default: true)
Advanced Discovery:
- Automatic architecture detection (single-port vs dual-port)
- Health status monitoring
- Server correlation and matching
- Port usage analysis
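The port-scanning half of discovery boils down to attempting a TCP connect on each candidate port. The sketch below shows that basic probe; MockLoop's discovery additionally layers health checks and architecture detection on top.

```python
# Minimal sketch of the port-probing side of server discovery:
# attempt a TCP connect to each candidate port and record which accept.
# Illustrative only; MockLoop also performs health checks and
# single-port vs dual-port architecture detection.
import socket

def probe_ports(host: str, ports: list[int], timeout: float = 0.25) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports
```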
Dynamic response management without server restart.
Parameters:
- `server_url` (string, required): Mock server URL
- `operation` (string, required): Operation type (`"update_response"`, `"create_scenario"`, `"switch_scenario"`, `"list_scenarios"`)
- `endpoint_path` (string, optional): API endpoint path
- `response_data` (object, optional): New response data
- `scenario_name` (string, optional): Scenario name
- `scenario_config` (object, optional): Scenario configuration
Dynamic Capabilities:
- Real-time response updates
- Scenario-based testing
- Runtime configuration management
- Zero-downtime modifications
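As an illustration, an `update_response` call might carry arguments shaped like this. The endpoint path and response body are invented for the example; only the parameter names come from the tool description above.

```python
# Illustrative argument set for a manage_mock_data "update_response"
# operation. The endpoint path and response body are made up for the
# example; the parameter names match the tool's documented interface.
update_request = {
    "server_url": "http://localhost:8000",
    "operation": "update_response",
    "endpoint_path": "/users/42",
    "response_data": {"id": 42, "name": "Test User", "status": "active"},
}
```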
MockLoop MCP includes revolutionary proxy capabilities that enable seamless switching between mock and live API environments. This powerful feature transforms your testing workflow by providing:
- 🔄 Seamless Mode Switching: Transition between mock, proxy, and hybrid modes without code changes
- 🎯 Intelligent Routing: Smart request routing based on configurable rules and conditions
- 🔐 Universal Authentication: Support for API Key, Bearer Token, Basic Auth, and OAuth2
- 📊 Response Comparison: Automated comparison between mock and live API responses
- ⚡ Zero-Downtime Switching: Change modes dynamically without service interruption
- All requests handled by generated mock responses
- Predictable, consistent testing environment
- Ideal for early development and isolated testing
- No external dependencies or network calls
- All requests forwarded to live API endpoints
- Real-time data and authentic responses
- Full integration testing capabilities
- Network-dependent operation with live credentials
- Intelligent routing between mock and proxy based on rules
- Conditional switching based on request patterns, headers, or parameters
- Gradual migration from mock to live environments
- A/B testing and selective endpoint proxying
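Conceptually, hybrid mode answers one question per request: does this path resolve to a mock or to the live API? A minimal sketch of priority-based rule matching, assuming glob-style patterns, might look like this (illustrative only; MockLoop's actual matcher is internal):

```python
# Conceptual sketch of priority-based routing-rule matching for hybrid
# mode: the highest-priority rule whose pattern matches the request
# path decides mock vs proxy. Illustrative, not MockLoop's implementation.
from fnmatch import fnmatch

def resolve_mode(path: str, rules: list[dict], default: str = "mock") -> str:
    # Collect every rule whose glob pattern matches, then let the
    # highest priority win; unmatched paths fall back to the default.
    matches = [r for r in rules if fnmatch(path, r["pattern"])]
    if not matches:
        return default
    return max(matches, key=lambda r: r["priority"])["mode"]

rules = [
    {"pattern": "/api/critical/*", "mode": "proxy", "priority": 10},
    {"pattern": "/api/*", "mode": "mock", "priority": 5},
]
```

With these rules, `/api/critical/payments` matches both patterns but routes to the live API because the proxy rule's priority is higher.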
from mockloop_mcp.mcp_tools import create_mcp_plugin
# Create a proxy-enabled plugin
plugin_result = await create_mcp_plugin(
spec_url_or_path="https://api.example.com/openapi.json",
mode="hybrid", # Start with hybrid mode
plugin_name="example_api",
target_url="https://api.example.com",
auth_config={
"auth_type": "bearer_token",
"credentials": {"token": "your-token"}
},
routing_rules=[
{
"pattern": "/api/critical/*",
"mode": "proxy", # Critical endpoints use live API
"priority": 10
},
{
"pattern": "/api/dev/*",
"mode": "mock", # Development endpoints use mocks
"priority": 5
}
]
)
- 🔍 Response Validation: Compare mock vs live responses for consistency
- 📈 Performance Monitoring: Track response times and throughput across modes
- 🛡️ Error Handling: Graceful fallback mechanisms and retry policies
- 🎛️ Dynamic Configuration: Runtime mode switching and rule updates
- 📋 Audit Logging: Complete request/response tracking across all modes
The proxy system supports comprehensive authentication schemes:
- API Key: Header, query parameter, or cookie-based authentication
- Bearer Token: OAuth2 and JWT token support
- Basic Auth: Username/password combinations
- OAuth2: Full OAuth2 flow with token refresh
- Custom: Extensible authentication handlers for proprietary schemes
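The first three schemes ultimately reduce to attaching the right request header. The self-contained sketch below shows those translations; MockLoop's proxy applies the equivalent automatically based on `auth_config`.

```python
# How common auth schemes translate into request headers.
# Standalone illustration; MockLoop's proxy handles this internally
# (including OAuth2 token refresh, which is omitted here).
import base64

def build_auth_header(auth_type: str, credentials: dict) -> dict:
    if auth_type == "api_key":
        # Header-based API key; query/cookie placement works analogously.
        return {credentials.get("header_name", "X-API-Key"): credentials["key"]}
    if auth_type == "bearer_token":
        return {"Authorization": f"Bearer {credentials['token']}"}
    if auth_type == "basic":
        raw = f"{credentials['username']}:{credentials['password']}".encode()
        return {"Authorization": "Basic " + base64.b64encode(raw).decode()}
    raise ValueError(f"unsupported auth_type: {auth_type}")

hdr = build_auth_header("basic", {"username": "user", "password": "pass"})
```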
- Development Workflow: Start with mocks, gradually introduce live APIs
- Integration Testing: Validate against real services while maintaining test isolation
- Performance Testing: Compare mock vs live API performance characteristics
- Staging Validation: Ensure mock responses match production API behavior
- Hybrid Deployments: Route critical operations to live APIs, others to mocks
📚 Complete Guide: For detailed configuration, examples, and best practices, see the MCP Proxy Guide.
MockLoop MCP provides native integration with popular AI frameworks:
from langgraph.graph import StateGraph, END
from mockloop_mcp import MockLoopClient
# Initialize MockLoop client
mockloop = MockLoopClient()
def setup_ai_testing(state):
"""AI-driven test setup"""
# Generate mock API with AI analysis
result = mockloop.generate_mock_api(
spec_url_or_path="https://api.example.com/openapi.json",
output_dir_name="ai_test_environment"
)
# Use AI prompts for scenario generation
scenarios = mockloop.analyze_openapi_for_testing(
api_spec=state["api_spec"],
analysis_depth="comprehensive",
include_security_tests=True
)
state["mock_server_url"] = "http://localhost:8000"
state["test_scenarios"] = scenarios
return state
def execute_ai_tests(state):
"""Execute AI-generated test scenarios"""
# Deploy AI-generated scenarios
for scenario in state["test_scenarios"]:
mockloop.deploy_scenario(
server_url=state["mock_server_url"],
scenario_config=scenario
)
# Execute load tests with AI optimization
results = mockloop.run_load_test(
server_url=state["mock_server_url"],
scenario_name=scenario["name"],
duration=300,
concurrent_users=100
)
# AI-powered result analysis
analysis = mockloop.analyze_test_results(
test_results=results,
include_recommendations=True
)
state["test_results"].append(analysis)
return state
# Build AI-native testing workflow
workflow = StateGraph(dict)
workflow.add_node("setup_ai_testing", setup_ai_testing)
workflow.add_node("execute_ai_tests", execute_ai_tests)
workflow.set_entry_point("setup_ai_testing")
workflow.add_edge("setup_ai_testing", "execute_ai_tests")
workflow.add_edge("execute_ai_tests", END)
app = workflow.compile()
from crewai import Agent, Task, Crew
from mockloop_mcp import MockLoopClient
# Initialize MockLoop client
mockloop = MockLoopClient()
# AI Testing Specialist Agent
api_testing_agent = Agent(
role='AI API Testing Specialist',
goal='Generate and execute comprehensive AI-driven API tests',
backstory='Expert in AI-native testing with MockLoop MCP integration',
tools=[
mockloop.generate_mock_api,
mockloop.analyze_openapi_for_testing,
mockloop.generate_scenario_config
]
)
# Performance Analysis Agent
performance_agent = Agent(
role='AI Performance Analyst',
goal='Analyze API performance with AI-powered insights',
backstory='Specialist in AI-driven performance analysis and optimization',
tools=[
mockloop.run_load_test,
mockloop.get_performance_metrics,
mockloop.analyze_test_results
]
)
# Security Testing Agent
security_agent = Agent(
role='AI Security Testing Expert',
goal='Conduct AI-driven security testing and vulnerability assessment',
backstory='Expert in AI-powered security testing methodologies',
tools=[
mockloop.generate_security_test_scenarios,
mockloop.run_security_test,
mockloop.compare_test_runs
]
)
# Define AI-driven tasks
ai_setup_task = Task(
description='Generate AI-native mock API with comprehensive testing scenarios',
agent=api_testing_agent,
expected_output='Mock server with AI-generated test scenarios deployed'
)
performance_task = Task(
description='Execute AI-optimized performance testing and analysis',
agent=performance_agent,
expected_output='Comprehensive performance analysis with AI recommendations'
)
security_task = Task(
description='Conduct AI-driven security testing and vulnerability assessment',
agent=security_agent,
expected_output='Security test results with AI-powered threat analysis'
)
# Create AI testing crew
ai_testing_crew = Crew(
agents=[api_testing_agent, performance_agent, security_agent],
tasks=[ai_setup_task, performance_task, security_task],
verbose=True
)
# Execute AI-native testing workflow
results = ai_testing_crew.kickoff()
from langchain.agents import Tool, AgentExecutor, create_react_agent
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from mockloop_mcp import MockLoopClient
# Initialize MockLoop client
mockloop = MockLoopClient()
# AI-Native Testing Tools
def ai_generate_mock_api(spec_path: str) -> str:
"""Generate AI-enhanced mock API with intelligent scenarios"""
# Generate mock API
result = mockloop.generate_mock_api(spec_url_or_path=spec_path)
# Use AI to analyze and enhance
analysis = mockloop.analyze_openapi_for_testing(
api_spec=spec_path,
analysis_depth="comprehensive",
include_security_tests=True
)
return f"AI-enhanced mock API generated: {result}\nAI Analysis: {analysis['summary']}"
def ai_execute_testing_workflow(server_url: str) -> str:
"""Execute comprehensive AI-driven testing workflow"""
# Create test session context
session = mockloop.create_test_session_context(
session_name="ai_testing_session",
configuration={"ai_enhanced": True}
)
# Generate and deploy AI scenarios
scenarios = mockloop.generate_scenario_config(
api_spec=server_url,
scenario_types=["load", "error", "security"],
ai_optimization=True
)
results = []
for scenario in scenarios:
# Deploy scenario
mockloop.deploy_scenario(
server_url=server_url,
scenario_config=scenario
)
# Execute tests with AI monitoring
test_result = mockloop.execute_test_plan(
server_url=server_url,
test_plan=scenario["test_plan"],
ai_monitoring=True
)
results.append(test_result)
# AI-powered analysis
analysis = mockloop.analyze_test_results(
test_results=results,
include_recommendations=True,
ai_insights=True
)
return f"AI testing workflow completed: {analysis['summary']}"
# Create LangChain tools
ai_testing_tools = [
Tool(
name="AIGenerateMockAPI",
func=ai_generate_mock_api,
description="Generate AI-enhanced mock API with intelligent testing scenarios"
),
Tool(
name="AIExecuteTestingWorkflow",
func=ai_execute_testing_workflow,
description="Execute comprehensive AI-driven testing workflow with intelligent analysis"
)
]
# Create AI testing agent
llm = ChatOpenAI(temperature=0)
ai_testing_prompt = PromptTemplate.from_template("""
You are an AI-native testing assistant powered by MockLoop MCP.
You have access to revolutionary AI-driven testing capabilities including:
- AI-powered scenario generation
- Intelligent test execution
- Advanced performance analysis
- Security vulnerability assessment
- Stateful workflow management
