# Firecrawl MCP Server

The Firecrawl MCP server provides powerful web scraping, content search, and website crawling capabilities. It supports mobile device emulation, ad blocking, and structured data extraction. With intelligent search capabilities and advanced crawling features, users can efficiently collect and analyze data from multiple sources.

Overview

What is mcp-server-firecrawl

mcp-server-firecrawl is a Model Context Protocol (MCP) server designed for web scraping, content searching, site crawling, and data extraction using the Firecrawl API.

Use cases

Use cases include extracting content from web pages, conducting site crawls for SEO audits, generating site maps for better navigation, and extracting structured data for analytics.

How to use

To use mcp-server-firecrawl, install it globally or locally via npm, set your Firecrawl API key as an environment variable, and run the server. It also integrates with MCP clients such as the Claude Desktop app and the Claude VSCode extension.

Key features

Key features include web scraping with customizable options, intelligent content search, advanced site crawling functionality, site mapping capabilities, and structured data extraction from multiple URLs.

Where to use

mcp-server-firecrawl can be used in various fields such as data analysis, market research, SEO optimization, and content management systems.

Features

  • Web Scraping: Extract content from any webpage with customizable options

    • Mobile device emulation
    • Ad and popup blocking
    • Content filtering
    • Structured data extraction
    • Multiple output formats
  • Content Search: Intelligent search capabilities

    • Multi-language support
    • Location-based results
    • Customizable result limits
    • Structured output formats
  • Site Crawling: Advanced web crawling functionality

    • Depth control
    • Path filtering
    • Rate limiting
    • Progress tracking
    • Sitemap integration
  • Site Mapping: Generate site structure maps

    • Subdomain support
    • Search filtering
    • Link analysis
    • Visual hierarchy
  • Data Extraction: Extract structured data from multiple URLs

    • Schema validation
    • Batch processing
    • Web search enrichment
    • Custom extraction prompts

Installation

# Global installation
npm install -g @modelcontextprotocol/mcp-server-firecrawl

# Local project installation
npm install @modelcontextprotocol/mcp-server-firecrawl

Quick Start

  1. Get your Firecrawl API key from the developer portal

  2. Set your API key:

    Unix/Linux/macOS (bash/zsh):

    export FIRECRAWL_API_KEY=your-api-key
    

    Windows (Command Prompt):

    set FIRECRAWL_API_KEY=your-api-key
    

    Windows (PowerShell):

    $env:FIRECRAWL_API_KEY = "your-api-key"
    

    Alternative: Using .env file (recommended for development):

    # Install dotenv
    npm install dotenv
    
    # Create .env file
    echo "FIRECRAWL_API_KEY=your-api-key" > .env
    

    Then in your code:

    import dotenv from 'dotenv';
    dotenv.config();
    
  3. Run the server:

    mcp-server-firecrawl
    
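Before starting the server, it helps to fail fast when the key is missing. The sketch below shows one way to do that; `requireApiKey` is a hypothetical helper for illustration, not part of the server itself.

```javascript
// Hypothetical helper: fail fast with a clear message when the
// FIRECRAWL_API_KEY environment variable is missing or blank.
function requireApiKey(env) {
  const key = env.FIRECRAWL_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error(
      "FIRECRAWL_API_KEY is not set. Export it in your shell or add it to a .env file."
    );
  }
  return key;
}

// Example: read the key from the current process environment.
// const apiKey = requireApiKey(process.env);
```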

Integration

Claude Desktop App

Add to your MCP settings:

{
  "mcpServers": {
    "firecrawl": {
      "command": "mcp-server-firecrawl",
      "env": {
        "FIRECRAWL_API_KEY": "your-api-key"
      }
    }
  }
}

Claude VSCode Extension

Add to your MCP configuration:

{
  "mcpServers": {
    "firecrawl": {
      "command": "mcp-server-firecrawl",
      "env": {
        "FIRECRAWL_API_KEY": "your-api-key"
      }
    }
  }
}

Usage Examples

Web Scraping

// Basic scraping
{
  name: "scrape_url",
  arguments: {
    url: "https://example.com",
    formats: ["markdown"],
    onlyMainContent: true
  }
}

// Advanced extraction
{
  name: "scrape_url",
  arguments: {
    url: "https://example.com/blog",
    jsonOptions: {
      prompt: "Extract article content",
      schema: {
        title: "string",
        content: "string"
      }
    },
    mobile: true,
    blockAds: true
  }
}

Site Crawling

// Basic crawling
{
  name: "crawl",
  arguments: {
    url: "https://example.com",
    maxDepth: 2,
    limit: 100
  }
}

// Advanced crawling
{
  name: "crawl",
  arguments: {
    url: "https://example.com",
    maxDepth: 3,
    includePaths: ["/blog", "/products"],
    excludePaths: ["/admin"],
    ignoreQueryParameters: true
  }
}
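The `includePaths` / `excludePaths` options above can be understood as a predicate applied to each discovered URL. The sketch below is a hypothetical approximation of that filtering; the crawler's actual matching rules (e.g. glob patterns) may differ.

```javascript
// Hypothetical sketch of includePaths / excludePaths filtering:
// exclusions win, and an empty include list means "allow everything".
function shouldCrawl(url, { includePaths = [], excludePaths = [] } = {}) {
  const path = new URL(url).pathname;
  if (excludePaths.some((p) => path.startsWith(p))) return false;
  if (includePaths.length === 0) return true;
  return includePaths.some((p) => path.startsWith(p));
}
```

With the advanced-crawl options above, `shouldCrawl("https://example.com/blog/post", ...)` would pass while `/admin/users` and `/about` would be skipped.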

Site Mapping

// Generate site map
{
  name: "map",
  arguments: {
    url: "https://example.com",
    includeSubdomains: true,
    limit: 1000
  }
}

Data Extraction

// Extract structured data
{
  name: "extract",
  arguments: {
    urls: ["https://example.com/product1", "https://example.com/product2"],
    prompt: "Extract product details",
    schema: {
      name: "string",
      price: "number",
      description: "string"
    }
  }
}
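The schema in the extract example maps field names to expected types. A minimal sketch of that kind of check is below; the server's real schema validation is likely richer (nested objects, arrays, optional fields), so treat this as illustrative only.

```javascript
// Hypothetical minimal validator: every schema field must exist on the
// record and have the declared typeof (e.g. "string", "number").
function matchesSchema(record, schema) {
  return Object.entries(schema).every(
    ([field, type]) => typeof record[field] === type
  );
}
```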

Configuration

See configuration guide for detailed setup options.

API Documentation

See API documentation for detailed endpoint specifications.

Development

# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

# Start in development mode
npm run dev

Examples

Check the examples directory for more usage examples.

Error Handling

The server implements robust error handling:

  • Rate limiting with exponential backoff
  • Automatic retries
  • Detailed error messages
  • Debug logging
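The retry behavior described above can be sketched as a generic wrapper. `withRetry` is a hypothetical helper; the server's actual retry counts, delays, and error classification may differ.

```javascript
// Hypothetical retry-with-exponential-backoff wrapper: retry a failing
// async call, doubling the delay after each attempt until retries run out.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of retries: surface the error
      const delay = baseDelayMs * 2 ** attempt; // 500, 1000, 2000, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

A caller would wrap a rate-limited request, e.g. `await withRetry(() => fetchPage(url))`, where `fetchPage` is whatever request function you use.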

Security

  • API key protection
  • Request validation
  • Domain allowlisting
  • Rate limiting
  • Safe error messages

Contributing

See CONTRIBUTING.md for contribution guidelines.

License

MIT License - see LICENSE for details.
