
firecrawl-mcp-server

🔥 Official Firecrawl MCP Server - Adds powerful web scraping and search to Cursor, Claude and any other LLM clients.


Firecrawl MCP Server

A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for searching, scraping, and interacting with the web.

Big thanks to @vrknetha, @knacklabs for the initial implementation!

Features

  • Search the web and get full page content
  • Scrape any URL into clean, structured data
  • Interact with pages — click, navigate, and operate
  • Deep research with autonomous agent
  • Cloud browser sessions with agent-browser automation
  • Automatic retries and rate limiting
  • Cloud and self-hosted support
  • SSE support

Play around with our MCP Server on MCP.so's playground or on Klavis AI.

Installation

Running with npx

env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

Manual Installation

npm install -g firecrawl-mcp

Running on Cursor

Configuring Cursor 🖥️

Note: Requires Cursor version 0.45.6+

For the most up-to-date configuration instructions, refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide

To configure Firecrawl MCP in Cursor v0.48.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add new global MCP server"
  4. Enter the following code:
    {
      "mcpServers": {
        "firecrawl-mcp": {
          "command": "npx",
          "args": ["-y", "firecrawl-mcp"],
          "env": {
            "FIRECRAWL_API_KEY": "YOUR-API-KEY"
          }
        }
      }
    }

To configure Firecrawl MCP in Cursor v0.45.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add New MCP Server"
  4. Enter the following:
    • Name: "firecrawl-mcp" (or your preferred name)
    • Type: "command"
    • Command: env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp

If you are using Windows and are running into issues, try cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"

Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys

After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.

Running on Windsurf

Add this to your ./codeium/windsurf/model_config.json:

{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Running with Streamable HTTP Local Mode

To run the server using Streamable HTTP locally instead of the default stdio transport:

env HTTP_STREAMABLE_SERVER=true FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

Use the URL: http://localhost:3000/mcp
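As a sketch of what talks to that local endpoint: a Streamable HTTP client POSTs JSON-RPC messages to it, starting with an initialize request. The envelope below follows the MCP specification; the exact protocolVersion string and the clientInfo values are assumptions for illustration, not something this README specifies.

```javascript
// Sketch: the JSON-RPC initialize envelope an MCP client would POST to
// the local Streamable HTTP endpoint (http://localhost:3000/mcp).
// protocolVersion and clientInfo are assumed/illustrative values.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26", // assumed MCP spec revision
    capabilities: {},
    clientInfo: { name: "example-client", version: "0.0.1" },
  },
};

// A client would send it roughly like this (not executed here):
// fetch("http://localhost:3000/mcp", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(initializeRequest),
// });
console.log(initializeRequest.method);
```

In practice an MCP client library handles this handshake for you; the sketch only shows what travels over the wire.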

Installing via Smithery (Legacy)

To install Firecrawl for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude

Running on VS Code

For one-click installation, use the "Install with NPX" buttons for VS Code or VS Code Insiders in the original README.

For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can open it by pressing Ctrl + Shift + P and typing Preferences: Open User Settings (JSON).

{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "apiKey",
        "description": "Firecrawl API Key",
        "password": true
      }
    ],
    "servers": {
      "firecrawl": {
        "command": "npx",
        "args": ["-y", "firecrawl-mcp"],
        "env": {
          "FIRECRAWL_API_KEY": "${input:apiKey}"
        }
      }
    }
  }
}

Optionally, you can add it to a file called .vscode/mcp.json in your workspace. This will allow you to share the configuration with others:

{
  "inputs": [
    {
      "type": "promptString",
      "id": "apiKey",
      "description": "Firecrawl API Key",
      "password": true
    }
  ],
  "servers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "${input:apiKey}"
      }
    }
  }
}

Environment Variables

Required for Cloud API

  • FIRECRAWL_API_KEY: Your Firecrawl API key
    • Required when using cloud API (default)
    • Optional when using self-hosted instance with FIRECRAWL_API_URL
  • FIRECRAWL_API_URL (Optional): Custom API endpoint for self-hosted instances
    • Example: https://firecrawl.your-domain.com
    • If not provided, the cloud API will be used (requires API key)

Optional Configuration

Retry Configuration
  • FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
  • FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before first retry (default: 1000)
  • FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
  • FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)
Credit Usage Monitoring
  • FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
  • FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)

Configuration Examples

For cloud API usage with custom retry and credit monitoring:

# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key

# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5        # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000    # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000       # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3      # More aggressive backoff

# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000    # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500    # Critical at 500 credits

For self-hosted instance:

# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com

# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key  # If your instance requires auth

# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500     # Start with faster retries

Usage with Claude Desktop

Add this to your claude_desktop_config.json:

{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",

        "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
        "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
        "FIRECRAWL_RETRY_MAX_DELAY": "30000",
        "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",

        "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
        "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
      }
    }
  }
}

System Configuration

The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:

const CONFIG = {
  retry: {
    maxAttempts: 3, // Number of retry attempts for rate-limited requests
    initialDelay: 1000, // Initial delay before first retry (in milliseconds)
    maxDelay: 10000, // Maximum delay between retries (in milliseconds)
    backoffFactor: 2, // Multiplier for exponential backoff
  },
  credit: {
    warningThreshold: 1000, // Warn when credit usage reaches this level
    criticalThreshold: 100, // Critical alert when credit usage reaches this level
  },
};

These configurations control:

  1. Retry Behavior

    • Automatically retries failed requests due to rate limits
    • Uses exponential backoff to avoid overwhelming the API
    • Example: With default settings, retries will be attempted at:
      • 1st retry: 1 second delay
      • 2nd retry: 2 seconds delay
      • 3rd retry: 4 seconds delay (capped at maxDelay)
  2. Credit Usage Monitoring

    • Tracks API credit consumption for cloud API usage
    • Provides warnings at specified thresholds
    • Helps prevent unexpected service interruption
    • Example: With default settings:
      • Warning at 1000 credits remaining
      • Critical alert at 100 credits remaining

Rate Limiting and Batch Processing

The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:

  • Automatic rate limit handling with exponential backoff
  • Efficient parallel processing for batch operations
  • Smart request queuing and throttling
  • Automatic retries for transient errors

How to Choose a Tool

Use this guide to select the right tool for your task:

  • If you know the exact URL(s) you want:
    • For one: use scrape (with JSON format for structured data)
    • For many: use batch_scrape
  • If you need to discover URLs on a site: use map
  • If you want to search the web for info: use search
  • If you need complex research across multiple unknown sources: use agent
  • If you want to analyze a whole site or section: use crawl (with limits!)
  • If you need interactive browser automation (click, type, navigate): use scrape + interact
  • If you need a raw CDP browser session (advanced): use browser (deprecated)
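The decision guide above can be encoded as a small helper. Both the task descriptor shape and the chooseTool function are hypothetical; they just restate the bullet list as code.

```javascript
// Illustrative sketch of the tool-selection guide above. The task
// descriptor fields and chooseTool itself are hypothetical names.
function chooseTool(task) {
  if (task.knownUrls === 1) return "scrape";        // one exact URL
  if (task.knownUrls > 1) return "batch_scrape";    // several exact URLs
  if (task.discoverUrls) return "map";              // find URLs on a site
  if (task.openWebSearch) return "search";          // search the web for info
  if (task.multiSourceResearch) return "agent";     // complex multi-source research
  if (task.wholeSite) return "crawl";               // whole site or section (with limits!)
  return "search";                                  // reasonable fallback
}

console.log(chooseTool({ knownUrls: 1 }));       // scrape
console.log(chooseTool({ discoverUrls: true })); // map
```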

Quick Reference Table

| Tool | Best for | Returns |
| --- | --- | --- |
| scrape | Single page content | JSON (preferred) or markdown |
| interact | Interact with a scraped page | Execution result |
| batch_scrape | Multiple known URLs | JSON (preferred) or markdown[] |
| map | Discovering URLs on a site | URL[] |
| crawl | Multi-page extraction (with limits) | markdown/html[] |
| search | Web search for info | results[] |
| agent | Complex multi-source research | JSON (structured data) |
| browser | Interactive multi-step automation (deprecated) | Session with live browser |

Format Selection Guide

When using scrape or batch_scrape, choose the right format:

  • JSON format (recommended for most cases): Use when you need specific data from a page. Define a schema based on what you need to extract. This keeps responses small and avoids context window overflow.
  • Markdown format (use sparingly): Only when you genuinely need the full page content, such as reading an entire article for summarization or analyzing page structure.

Available Tools

1. Scrape Tool (firecrawl_scrape)

Scrape content from a single URL with advanced options.

Best for:

  • Single page content extraction, when you know exactly which page contains the information.

Not recommended for:

  • Extracting content from multiple pages (use batch_scrape for known URLs, or map + batch_scrape to discover URLs first, or crawl for full page content)
  • When you're unsure which page contains the information (use search)

Common mistakes:

  • Using scrape for a list of URLs (use batch_scrape instead).
  • Using markdown format by default (use JSON format to extract only what you need).

Choosing the right format:

  • JSON format (preferred): For most use cases, use JSON format with a schema to extract only the specific data needed. This keeps responses focused and prevents context window overflow.
  • Markdown format: Only when the task genuinely requires full page content (e.g., summarizing an entire article, analyzing page structure).

Prompt Example:

"Get the product details from https://example.com/product."

Usage Example (JSON format - preferred):

{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com/product",
    "formats": [{
      "type": "json",
      "prompt": "Extract the product information",
      "schema": {
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "price": { "type": "number" },
          "description": { "type": "string" }
        },
        "required": ["name", "price"]
      }
    }]
  }
}

Usage Example (markdown format - when full content needed):

{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com/article",
    "formats": ["markdown"],
    "onlyMainContent": true
  }
}

Usage Example (branding format - extract brand identity):

{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["branding"]
  }
}

Branding format: Extracts comprehensive brand identity (colors, fonts, typography, spacing, logo, UI components) for design analysis or style replication.

Returns:

  • JSON structured data, markdown, branding profile, or other formats as specified.

2. Batch Scrape Tool (firecrawl_batch_scrape)

Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.

Best for:

  • Retrieving content from multiple pages, when you know exactly which pages to scrape.

Not recommended for:

  • Discovering URLs (use map first if you don't know the URLs)
  • Scraping a single page (use scrape)

Common mistakes:

  • Using batch_scrape with too many URLs at once (may hit rate limits or token overflow)

Prompt Example:

"Get the content of these three blog posts: [url1, url2, url3]."

Usage Example:

{
  "name": "firecrawl_batch_scrape",
  "arguments": {
    "urls": ["https://example1.com", "https://example2.com"],
    "options": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}

Returns:

  • Response includes operation ID for status checking:
{
  "content": [
    {
      "type": "text",
      "text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
    }
  ],
  "isError": false
}

3. Check Batch Status (firecrawl_check_batch_status)

Check the status of a batch operation.

{
  "name": "firecrawl_check_batch_status",
  "arguments": {
    "id": "batch_1"
  }
}

4. Map Tool (firecrawl_map)

Map a website to discover all indexed URLs on the site.

Best for:

  • Discovering URLs on a website before deciding what to scrape
  • Finding specific sections of a website

Not recommended for:

  • When you already know which specific URL you need (use scrape or batch_scrape)
  • When you need the content of the pages (use scrape after mapping)

Common mistakes:

  • Using crawl to discover URLs instead of map

Prompt Example:

"List all URLs on example.com."

Usage Example:

{
  "name": "firecrawl_map",
  "arguments": {
    "url": "https://example.com"
  }
}

Returns:

  • Array of URLs found on the site

5. Search Tool (firecrawl_search)

Search the web and optionally extract content from search results.

Best for:

  • Finding specific information across multiple websites, when you don't know which website has the information.
  • When you need the most relevant content for a query

Not recommended for:

  • When you already know which website to scrape (use scrape)
  • When you need comprehensive coverage of a single website (use map or crawl)

Common mistakes:

  • Using crawl or map for open-ended questions (use search instead)

Usage Example:

{
  "name": "firecrawl_search",
  "arguments": {
    "query": "latest AI research papers 2023",
    "limit": 5,
    "lang": "en",
    "country": "us",
    "scrapeOptions": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}

Returns:

  • Array of search results (with optional scraped content)

Prompt Example:

"Find the latest research papers on AI published in 2023."

6. Crawl Tool (firecrawl_crawl)

Start an asynchronous crawl job on a website and extract content from all pages.

Best for:

  • Extracting content from multiple related pages, when you need comprehensive coverage.

Not recommended for:

  • Extracting content from a single page (use scrape)
  • When token limits are a concern (use map + batch_scrape)
  • When you need fast results (crawling can be slow)

Warning: Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.

Common mistakes:

  • Setting limit or maxDepth too high (causes token overflow)
  • Using crawl for a single page (use scrape instead)

Prompt Example:

"Get all blog posts from the first two levels of example.com/blog."

Usage Example:

{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com/blog/*",
    "maxDepth": 2,
    "limit": 100,
    "allowExternalLinks": false,
    "deduplicateSimilarURLs": true
  }
}

Returns:

  • Response includes operation ID for status checking:
{
  "content": [
    {
      "type": "text",
      "text": "Started crawl for: https://example.com/* with job ID: 550e8400-e29b-41d4-a716-446655440000. Use firecrawl_check_crawl_status to check progress."
    }
  ],
  "isError": false
}

7. Check Crawl Status (firecrawl_check_crawl_status)

Check the status of a crawl job.

{
  "name": "firecrawl_check_crawl_status",
  "arguments": {
    "id": "550e8400-e29b-41d4-a716-446655440000"
  }
}

Returns:

  • Response includes the status of the crawl job:

8. Extract Tool (firecrawl_extract)

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.

Best for:

  • Extracting specific structured data like prices, names, details.

Not recommended for:

  • When you need the full content of a page (use scrape)
  • When you're not looking for specific structured data

Arguments:

  • urls: Array of URLs to extract information from
  • prompt: Custom prompt for the LLM extraction
  • systemPrompt: System prompt to guide the LLM
  • schema: JSON schema for structured data extraction
  • allowExternalLinks: Allow extraction from external links
  • enableWebSearch: Enable web search for additional context
  • includeSubdomains: Include subdomains in extraction

When using a self-hosted instance, the extraction will use your configured LLM. For the cloud API, it uses Firecrawl's managed LLM service.

Prompt Example:

"Extract the product name, price, and description from these product pages."

Usage Example:

{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/page1", "https://example.com/page2"],
    "prompt": "Extract product information including name, price, and description",
    "systemPrompt": "You are a helpful assistant that extracts product information",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false,
    "includeSubdomains": false
  }
}

Returns:

  • Extracted structured data as defined by your schema
{
  "content": [
    {
      "type": "text",
      "text": {
        "name": "Example Product",
        "price": 99.99,
        "description": "This is an example product description"
      }
    }
  ],
  "isError": false
}

9. Agent Tool (firecrawl_agent)

Autonomous web research agent. This is a separate AI agent layer that independently browses the internet, searches for information, navigates through pages, and extracts structured data based on your query.

How it works:

The agent performs web searches, follows links, reads pages, and gathers data autonomously. This runs asynchronously - it returns a job ID immediately, and you poll firecrawl_agent_status to check when complete and retrieve results.

Async workflow:

  1. Call firecrawl_agent with your prompt/schema → returns job ID
  2. Do other work while the agent researches (can take minutes for complex queries)
  3. Poll firecrawl_agent_status with the job ID to check progress
  4. When status is "completed", the response includes the extracted data

Best for:

  • Complex research tasks where you don't know the exact URLs
  • Multi-source data gathering
  • Finding information scattered across the web
  • Tasks where you can do other work while waiting for results

Not recommended for:

  • Simple single-page scraping where you know the URL (use scrape with JSON format - faster and cheaper)

Arguments:

  • prompt: Natural language description of the data you want (required, max 10,000 characters)
  • urls: Optional array of URLs to focus the agent on specific pages
  • schema: Optional JSON schema for structured output

Prompt Example:

"Find the founders of Firecrawl and their backgrounds"

Usage Example (start agent, then poll for results):

{
  "name": "firecrawl_agent",
  "arguments": {
    "prompt": "Find the top 5 AI startups founded in 2024 and their funding amounts",
    "schema": {
      "type": "object",
      "properties": {
        "startups": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "name": { "type": "string" },
              "funding": { "type": "string" },
              "founded": { "type": "string" }
            }
          }
        }
      }
    }
  }
}

Then poll with firecrawl_agent_status using the returned job ID.

Usage Example (with URLs - agent focuses on specific pages):

{
  "name": "firecrawl_agent",
  "arguments": {
    "urls": ["https://docs.firecrawl.dev", "https://firecrawl.dev/pricing"],
    "prompt": "Compare the features and pricing information from these pages"
  }
}

Returns:

  • Job ID for status checking. Use firecrawl_agent_status to poll for results.

10. Check Agent Status (firecrawl_agent_status)

Check the status of an agent job and retrieve results when complete. Use this to poll for results after starting an agent.

Polling pattern: Agent research can take minutes for complex queries. Poll this endpoint periodically (e.g., every 10-30 seconds) until status is "completed" or "failed".

{
  "name": "firecrawl_agent_status",
  "arguments": {
    "id": "550e8400-e29b-41d4-a716-446655440000"
  }
}

Possible statuses:

  • processing: Agent is still researching - check back later
  • completed: Research finished - response includes the extracted data
  • failed: An error occurred
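The polling pattern above can be sketched as a loop. Here checkStatus stands in for a real firecrawl_agent_status call and is injected so the sketch stays self-contained; the maxPolls guard and the pollAgent name are illustrative assumptions.

```javascript
// Sketch of the agent polling pattern described above. checkStatus is a
// stand-in for one firecrawl_agent_status round-trip; maxPolls prevents
// polling forever. Both names are illustrative.
function pollAgent(checkStatus, maxPolls = 10) {
  for (let i = 0; i < maxPolls; i++) {
    const result = checkStatus(); // one status check
    if (result.status === "completed") return result.data;
    if (result.status === "failed") throw new Error("agent job failed");
    // In real code, wait 10-30 seconds here before the next poll.
  }
  throw new Error("gave up after " + maxPolls + " polls");
}

// Simulated job: still processing twice, then done.
const responses = [
  { status: "processing" },
  { status: "processing" },
  { status: "completed", data: { founders: ["..."] } },
];
console.log(pollAgent(() => responses.shift()));
```

A real client would make this loop asynchronous and sleep between polls; the structure (poll until completed or failed, with an upper bound) is the point.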

11. Browser Create (firecrawl_browser_create) — Deprecated

Deprecated: Prefer firecrawl_scrape + firecrawl_interact instead. Interact lets you scrape a page and then click, fill forms, and navigate without managing sessions manually.

Create a cloud browser session for interactive automation.

Arguments:

  • ttl: Total session lifetime in seconds (30-3600, optional)
  • activityTtl: Idle timeout in seconds (10-3600, optional)
  • streamWebView: Whether to enable live view streaming (optional)
  • profile: Save and reuse browser state across sessions (optional)
    • name: Profile name (sessions with the same name share state)
    • saveChanges: Whether to save changes back to the profile (default: true)

Usage Example:

{
  "name": "firecrawl_browser_create",
  "arguments": {
    "ttl": 600,
    "profile": { "name": "my-profile", "saveChanges": true }
  }
}

Returns:

  • Session ID, CDP URL, and live view URL

12. Browser Execute (firecrawl_browser_execute) — Deprecated

Deprecated: Prefer firecrawl_scrape + firecrawl_interact instead.

Execute code in a browser session. Supports agent-browser commands (bash), Python, or JavaScript.

Recommended: Use bash with agent-browser commands (pre-installed in every sandbox):

{
  "name": "firecrawl_browser_execute",
  "arguments": {
    "sessionId": "session-id-here",
    "code": "agent-browser open https://example.com",
    "language": "bash"
  }
}

Common agent-browser commands:

| Command | Description |
| --- | --- |
| agent-browser open <url> | Navigate to URL |
| agent-browser snapshot | Accessibility tree with clickable refs |
| agent-browser click @e5 | Click element by ref from snapshot |
| agent-browser type @e3 "text" | Type into element |
| agent-browser get title | Get page title |
| agent-browser screenshot | Take screenshot |
| agent-browser --help | Full command reference |

For Playwright scripting, use Python:

{
  "name": "firecrawl_browser_execute",
  "arguments": {
    "sessionId": "session-id-here",
    "code": "await page.goto('https://example.com')\ntitle = await page.title()\nprint(title)",
    "language": "python"
  }
}

13. Browser List (firecrawl_browser_list) — Deprecated

Deprecated: Prefer firecrawl_scrape + firecrawl_interact instead.

List browser sessions, optionally filtered by status.

{
  "name": "firecrawl_browser_list",
  "arguments": {
    "status": "active"
  }
}

14. Browser Delete (firecrawl_browser_delete) — Deprecated

Deprecated: Prefer firecrawl_scrape + firecrawl_interact instead.

Delete a browser session and release its resources.

Release History
All releases below were marked with Low urgency. Several changelog entries are truncated in the source; truncations are shown with an ellipsis.

  • v3.2.1 (9/26/2025): Adjusted search to include query operators in the prompt.
  • v3.2.0 (9/16/2025): Fixed an issue where self-hosted instances were requiring an API_KEY even if a custom base URL was set. Full Changelog: https://github.com/firecrawl/firecrawl-mcp-server/compare/v3.0.0...v3.2.0
  • v3.0.0 (9/11/2025): (feat/versioning) Support for v1 and v2 by @nickscamara in https://github.com/firecrawl/firecrawl-mcp-server/pull/90; fix: wrap safeLogs outside of try-catch and name anonymous functions for better debugging by @mogery in https://github.com/firecrawl/firecrawl-mcp-server/pull/92; Fastmcp by @tomkosm in https://github.com/firecrawl/firecrawl-mcp-server/pull/97. New contributor: @mogery made their first contribution in https://github.com/firecrawl/firecrawl-mcp-server/p…
  • v2.0.0 (8/23/2025): Updated to the Firecrawl v2 API and added Streamable HTTP. Full Changelog: https://github.com/firecrawl/firecrawl-mcp-server/compare/v1.12.0...v2.0.0
  • v1.12.0 (7/3/2025): Fixed the path in the Dockerfile and in Smithery by @vrknetha in https://github.com/mendableai/firecrawl-mcp-server/pull/32; general improvements by @ftonato in https://github.com/mendableai/firecrawl-mcp-server/pull/33; Klavis AI Firecrawl hosting by @xiangkaiz in https://github.com/mendableai/firecrawl-mcp-server/pull/42; SSE + remote support by @tomkosm in https://github.com/mendableai/firecrawl-mcp-server/pull/45; fix: add missing server listen call in SSE mode…
  • v1.7.2 (3/27/2025): (feat/llms-txt) Generate LLMs.txt tool by @nickscamara in https://github.com/mendableai/firecrawl-mcp-server/pull/23; fixed stdio logging issue by @vrknetha in https://github.com/mendableai/firecrawl-mcp-server/pull/24. Full Changelog: https://github.com/mendableai/firecrawl-mcp-server/compare/v1.4.1...v1.7.2
  • v1.4.1 (3/2/2025): Added Deep Research tool. Full Changelog: https://github.com/mendableai/firecrawl-mcp-server/compare/v.1.3.3...v1.4.1
  • v.1.3.3 (2/21/2025): Introducing the Official Firecrawl MCP Server 📡. Give web scraping abilities to Cursor, Claude Desktop and more: env FIRECRAWL_API_KEY=YOUR_KEY npx -y firecrawl-mcp
  • v1.2.4 (2/11/2025): Added environment variable configuration support and enhanced documentation. New retry variables: FIRE_CRAWL_RETRY_MAX_ATTEMPTS (default: 3), FIRE_CRAWL_RETRY_INITIAL_DELAY (default: 1000ms), FIRE_CRAWL_RETRY_MAX_DELAY (default: 10000ms), FIRE_CRAWL_RETRY_BACKOFF_FACTOR (default: 2); credit monitoring: FIRE_CRAWL_CREDI…
  • v1.2.3 (2/10/2025): Removed redundant batch configuration to rely on the FireCrawl library's built-in functionality; simplified batch processing logic; optimized parallel processing and rate-limit handling; removed custom CONFIG.batch settings (maxParallelOperations, delayBetweenRequests…)
  • v1.2.2 (2/5/2025): Improved type safety and error handling: properly implemented ExtractParams and ExtractResponse interfaces, enhanced type guards in extract operations, refined error messages for configuration validation, fixed API response type casting…
  • v1.2.1 (1/6/2025): Binary configuration hotfix: fixed binary configuration in package.json to resolve npx execution issues; simplified package configuration for better CLI tool support; added proper file inclusions for npm package distribution; updated package scripts to ensure proper build during installation.
  • v1.2.0 (1/3/2025): Enhanced scraping and search: new search tool (fire_crawl_search) for web search with content extraction; automatic retries with exponential backoff for rate limits; credit usage monitoring for cloud API operations; queue system for batch operations with parallel processing; support for self-hosted FireCrawl instances; enhanced error handling for HTTP errors including 404s; improved URL validatio…
  • v1.0.2 (12/7/2024): Fixed JSON parsing error in MCP communication; removed a plain-text console.log that was breaking the protocol. All communication now properly follows the MCP JSON format; critical fix for Claude Desktop integration.
  • v1.0.1 (12/7/2024): Fixed binary path in package.json to correctly point to dist/src/index.js; fixed start script path to match the dist structure; added "FireCrawl MCP Server running on stdio" startup message. No changes to core functionality…
  • v1.0.0 (12/6/2024): Initial release: single URL and batch scraping, multiple output formats (markdown, HTML, screenshots), smart rate limiting and error handling, content filtering with include/exclude tags. Tools: fire_crawl_scrape (single URL scraping with customizable options, JavaScript-rendered content, mobile/desktop viewports) and fire_crawl_batch (batch processi…)
