
Scrapling Fetch

@cyberchitta · 12 days ago
Access text content from bot-protected websites. Fetches HTML/markdown from sites with anti-automation measures using Scrapling.

Overview

What is Scrapling Fetch

Scrapling Fetch MCP is a specialized server designed to enable AI assistants to access text content from websites that employ bot detection mechanisms. It serves as a bridge between the content available in web browsers and what AI can retrieve, focusing primarily on documentation and reference materials.

Use cases

It is intended for low-volume retrieval of text and HTML content, particularly documentation, technical articles, and reference materials from websites with bot detection in place. It is not suitable for general data harvesting or high-volume scraping tasks.

How to use

To use Scrapling Fetch MCP, install the necessary dependencies and configure your Claude client's MCP server with the provided JSON configuration. The server offers two primary tools: s-fetch-page retrieves complete pages, and s-fetch-pattern extracts content matching specific regex patterns.

Key features

The tool includes functionalities for paginated page retrieval, content extraction with regex search patterns, and various protection levels (basic, stealth, max-stealth) to enhance success rates against bot detection. It adapts its method based on site complexity.

Where to use

This tool is best used for accessing documentation and reference material from websites known for implementing bot detection, particularly in situations where a human-like retrieval process is required without high-volume scraping needs.

Content

Scrapling Fetch MCP


An MCP server that helps AI assistants access text content from websites that implement bot detection, bridging the gap between what you can see in your browser and what the AI can access.

Intended Use

This tool is optimized for low-volume retrieval of documentation and reference materials (text/HTML only) from websites that implement bot detection. It has not been designed or tested for general-purpose site scraping or data harvesting.

Note: This project was developed in collaboration with Claude Sonnet 3.7, using LLM Context.

Installation

  1. Requirements:

    • Python 3.10+
    • uv package manager
  2. Install dependencies and the tool:

uv tool install scrapling
scrapling install
uv tool install scrapling-fetch-mcp

Setup with Claude

Add this configuration to your Claude client’s MCP server configuration:

{
  "mcpServers": {
    "Cyber-Chitta": {
      "command": "uvx",
      "args": [
        "scrapling-fetch-mcp"
      ]
    }
  }
}

Available Tools

This package provides two distinct tools:

  1. s-fetch-page: Retrieves complete web pages with pagination support
  2. s-fetch-pattern: Extracts content matching regex patterns with surrounding context

Example Usage

Fetching a Complete Page

Human: Please fetch and summarize the documentation at https://example.com/docs

Claude: I'll help you with that. Let me fetch the documentation.

<mcp:function_calls>
<mcp:invoke name="s-fetch-page">
<mcp:parameter name="url">https://example.com/docs</mcp:parameter>
<mcp:parameter name="mode">basic</mcp:parameter>
</mcp:invoke>
</mcp:function_calls>

Based on the documentation I retrieved, here's a summary...

Extracting Specific Content with Pattern Matching

Human: Please find all mentions of "API keys" on the documentation page.

Claude: I'll search for that specific information.

<mcp:function_calls>
<mcp:invoke name="s-fetch-pattern">
<mcp:parameter name="url">https://example.com/docs</mcp:parameter>
<mcp:parameter name="mode">basic</mcp:parameter>
<mcp:parameter name="search_pattern">API\s+keys?</mcp:parameter>
<mcp:parameter name="context_chars">150</mcp:parameter>
</mcp:invoke>
</mcp:function_calls>

I found several mentions of API keys in the documentation:
...
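To make the context-window behavior concrete, here is a minimal Python sketch of what pattern extraction with context_chars might look like. The extraction logic below is an illustrative assumption about the tool's behavior, not the server's actual implementation; only the parameter names (search_pattern, context_chars) and the '[Position: start-end]' prefix come from the tool's documentation.

```python
import re

def fetch_pattern_sketch(text: str, search_pattern: str, context_chars: int = 150):
    """Illustrative sketch: find regex matches and return each match with
    surrounding context, tagged with its position range. This is an
    assumption about how s-fetch-pattern behaves, not its actual code."""
    chunks = []
    for m in re.finditer(search_pattern, text):
        start = max(0, m.start() - context_chars)
        end = min(len(text), m.end() + context_chars)
        chunks.append((f"[Position: {start}-{end}]", text[start:end]))
    return chunks

doc = "To authenticate, create an API key in the dashboard. API keys expire after 90 days."
matches = fetch_pattern_sketch(doc, r"API\s+keys?", context_chars=20)
for pos, chunk in matches:
    print(pos, chunk)
```

Note how `keys?` matches both the singular and plural forms, so both sentences in the sample document produce a chunk.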

Functionality Options

  • Protection Levels:

    • basic: Fast retrieval (1-2 seconds) but lower success with heavily protected sites
    • stealth: Balanced protection (3-8 seconds) that works with most sites
    • max-stealth: Maximum protection (10+ seconds) for heavily protected sites
  • Content Targeting Options:

    • s-fetch-page: Retrieve entire pages with pagination support (using start_index and max_length)
    • s-fetch-pattern: Extract specific content using regular expressions (with search_pattern and context_chars)
      • Results include position information for follow-up queries with s-fetch-page

Tips for Best Results

  • Start with basic mode and only escalate to higher protection levels if needed
  • For large documents, use the pagination parameters with s-fetch-page
  • Use s-fetch-pattern when looking for specific information on large pages
  • The AI will automatically adjust its approach based on the site’s protection level
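The pagination tip above can be sketched as a client-side loop. The slicing logic and the returned field names here are assumptions for illustration; only the parameter names start_index and max_length come from the tool's documentation.

```python
def paginate_sketch(content: str, start_index: int = 0, max_length: int = 5000):
    """Illustrative sketch of s-fetch-page pagination: return one window of
    the document plus truncation status. An assumption about the server's
    slicing behavior, not its actual code."""
    window = content[start_index:start_index + max_length]
    truncated = start_index + max_length < len(content)
    return {
        "content": window,
        "truncated": truncated,
        "next_start_index": start_index + len(window) if truncated else None,
    }

doc = "x" * 12000  # stand-in for a large page
pages = []
start = 0
while start is not None:  # keep fetching windows until nothing is truncated
    page = paginate_sketch(doc, start_index=start, max_length=5000)
    pages.append(page["content"])
    start = page["next_start_index"]
```

A 12,000-character document at max_length=5000 yields three windows, and concatenating them reconstructs the full page.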

Limitations

  • Designed only for text content: Specifically for documentation, articles, and reference materials
  • Not designed for high-volume scraping or data harvesting
  • May not work with sites requiring authentication
  • Performance varies by site complexity

License

Apache 2

Tools

s-fetch-page
Fetches a complete web page with pagination support. Retrieves content from websites with bot-detection avoidance. For best performance, start with 'basic' mode (fastest), then only escalate to 'stealth' or 'max-stealth' modes if basic mode fails. Content is returned as 'METADATA: {json}\n\n[content]' where metadata includes length information and truncation status.
s-fetch-pattern
Extracts content matching regex patterns from web pages. Retrieves specific content from websites with bot-detection avoidance. For best performance, start with 'basic' mode (fastest), then only escalate to 'stealth' or 'max-stealth' modes if basic mode fails. Returns matched content as 'METADATA: {json}\n\n[content]' where metadata includes match statistics and truncation information. Each matched content chunk is delimited with '॥๛॥' and prefixed with '[Position: start-end]' indicating its byte position in the original document, allowing targeted follow-up requests with s-fetch-page using specific start_index values.
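The response envelope and chunk delimiters described above can be parsed client-side. The sketch below follows the documented 'METADATA: {json}\n\n[content]' format, the '॥๛॥' delimiter, and the '[Position: start-end]' prefix; the metadata field names in the sample (total_matches, truncated) are assumptions for illustration.

```python
import json
import re

def parse_fetch_result(raw: str):
    """Split the documented 'METADATA: {json}\\n\\n[content]' envelope
    into a metadata dict and the content body."""
    header, _, body = raw.partition("\n\n")
    if not header.startswith("METADATA: "):
        raise ValueError("missing METADATA header")
    return json.loads(header[len("METADATA: "):]), body

def parse_pattern_chunks(body: str):
    """Split s-fetch-pattern output on the documented '॥๛॥' delimiter and
    pull the start/end offsets from each '[Position: start-end]' prefix."""
    chunks = []
    for part in body.split("॥๛॥"):
        m = re.match(r"\[Position: (\d+)-(\d+)\]\s*(.*)", part.strip(), re.S)
        if m:
            chunks.append((int(m.group(1)), int(m.group(2)), m.group(3)))
    return chunks

raw = ('METADATA: {"total_matches": 2, "truncated": false}\n\n'
       "[Position: 27-54] ...API key...॥๛॥[Position: 53-80] ...API keys...")
meta, body = parse_fetch_result(raw)
chunks = parse_pattern_chunks(body)
```

The recovered start offsets can then be fed back to s-fetch-page as start_index values for targeted follow-up requests, as the tool description suggests.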
