Docs Crawler
What is Docs Crawler
Docs Crawler is a Model Context Protocol (MCP) server designed to crawl documentation websites, store the content locally, and enable search functionality through vector embeddings.
Use cases
Use cases include crawling technical documentation for software projects, creating searchable knowledge bases for organizations, and enabling quick access to information from extensive documentation repositories.
How to use
To use docs-crawler, you need to run the crawl-docs-website tool to crawl a documentation site, which will save the content in JSON format and store vector embeddings in a Qdrant database. Then, use the search-docs tool to perform searches on the crawled content using queries.
Key features
Key features include crawling documentation websites up to 2 levels deep, extracting and chunking text content, generating TF-IDF based embeddings, storing data in JSON files and a Qdrant database, and providing both vector and fallback text search capabilities.
Where to use
Docs Crawler can be used in various fields such as software documentation management, knowledge base creation, research, and any domain requiring efficient document retrieval and search functionalities.
Clients Supporting MCP
The following are the main client software that supports the Model Context Protocol. Click the link to visit the official website for more information.
Content
Docs Crawler MCP Server
This project implements a Model Context Protocol (MCP) server that allows you to crawl documentation websites, store the content locally, and search through it using vector embeddings.
Features
This server provides two main tools accessible via MCP:
- `crawl-docs-website`:
  - Crawls a given documentation website up to a depth of 2 levels using Breadth-First Search (BFS).
  - Extracts the main text content from each page.
  - Chunks the text into manageable pieces (approx. 6000 characters).
  - Generates simple TF-IDF based embeddings for each chunk.
  - Stores the raw text chunks as JSON files locally in `./data/<site_slug>` and `~/crawled-docs/<site_slug>`.
  - Stores the chunks and their corresponding vectors in a local Qdrant vector database collection named `<site_slug>`.
  - Includes an option (`forceRecrawl`) to clear existing data before crawling.
- `search-docs`:
  - Takes a base URL (corresponding to a previously crawled site) and one or more search queries.
  - Generates an embedding for each query.
  - Performs a vector similarity search against the relevant Qdrant collection.
  - Returns the top relevant text chunks (with source URL and score) for each query.
  - Includes a fallback to simple text search on the stored JSON files if vector search fails or yields no results.
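The chunking step above can be sketched in a few lines. This is an illustrative TypeScript fragment, not the server's actual code: the function name and the simple fixed-size splitting strategy are assumptions; only the ~6000-character chunk size comes from the description above.

```typescript
// Minimal sketch of fixed-size text chunking as described above.
// Assumption: chunks are cut at plain character boundaries; the real
// server may split more carefully (e.g. at sentence or paragraph breaks).
function chunkText(text: string, chunkSize: number = 6000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}
```

A 13,000-character page would yield three chunks under this scheme: two full 6000-character chunks and a 1000-character remainder.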
Requirements
- Node.js: Version 18 or later recommended.
- npm: Node Package Manager (usually comes with Node.js).
- Python & pip: Required for the `unstructured` library. Ensure Python and pip are installed and accessible in your system's PATH.
- Unstructured: A Python library used for document parsing. Install it via pip: `pip install unstructured`.
- Qdrant: A running instance of the Qdrant vector database. The server defaults to connecting to `http://localhost:6333`.
Note on PATH: The MCP server process needs to be able to find the unstructured command. If you encounter “‘unstructured’ is not recognized” errors, you may need to manually add the Python Scripts directory (where pip installs executables) to the PATH environment variable within the server’s configuration in your cline_mcp_settings.json or claude_desktop_config.json file, similar to this:
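The snippet the note refers to is missing from this page. A hedged sketch of what such an entry likely looks like follows; the top-level key names match the standard MCP settings schema, but the Python `Scripts` path is an illustrative assumption that depends on your Python version and install location:

```json
{
  "mcpServers": {
    "docs-crawler": {
      "command": "node",
      "args": ["<absolute_path_to_docs_tool>/build/index.js"],
      "env": {
        "PATH": "C:\\Users\\<YourUsername>\\AppData\\Local\\Programs\\Python\\Python311\\Scripts;C:\\Windows\\System32"
      }
    }
  }
}
```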
Setup
1. Clone the Repository:

   ```
   git clone <repository_url>
   cd docs-tool
   ```

   (Replace `<repository_url>` if applicable; otherwise assume you are already in the `docs-tool` directory.)

2. Install Dependencies:

   ```
   npm install
   ```

3. Run Qdrant:

   The easiest way to run Qdrant locally is using Docker:

   ```
   docker run -p 6333:6333 -p 6334:6334 \
     -v $(pwd)/qdrant_storage:/qdrant/storage:z \
     qdrant/qdrant
   ```

   This command starts Qdrant and maps its ports to your local machine. It also persists data in a `qdrant_storage` directory within your project folder.

4. Build the Project:

   Compile the TypeScript code into JavaScript:

   ```
   npm run build
   ```

   This will create a `build` directory containing the executable `index.js` file.
MCP Server Configuration
To use this server with an MCP client (like Cline), you need to add its configuration to your MCP settings file. This file is typically located at:
- Cline (VS Code Extension): `c:\Users\<YourUsername>\AppData\Roaming\Code\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json`
- Claude Desktop App (Windows): `c:\Users\<YourUsername>\AppData\Roaming\Claude\claude_desktop_config.json`
Add the following entry to the mcpServers object in the settings file. Make sure to replace <absolute_path_to_docs_tool> with the actual absolute path to this project’s directory on your system.
Example args path on Windows: C:/Users/yazan/Documents/Cline/MCP/docs-tool/build/index.js (Use forward slashes).
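The configuration entry itself is not shown on this page. A minimal sketch of what it likely looks like, based on the build output described above (the key names follow the standard MCP settings schema; treat this as an assumption, not the project's verbatim config):

```json
{
  "mcpServers": {
    "docs-crawler": {
      "command": "node",
      "args": ["<absolute_path_to_docs_tool>/build/index.js"]
    }
  }
}
```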
After saving the settings file, the MCP client should automatically detect and connect to the docs-crawler server.
Usage
Once the server is configured and running, you can use its tools through your MCP client.
Example 1: Crawl a Website
Use the 'crawl-docs-website' tool from the 'docs-crawler' server with baseUrl 'https://react.dev/learn'
(You can add forceRecrawl: true if needed)
Example 2: Search the Crawled Content
Use the 'search-docs' tool from the 'docs-crawler' server with baseUrl 'https://react.dev/learn' and queries ['what are react hooks?', 'explain useState']
The server will return the search results, including the relevant text chunks, their source URLs, and similarity scores.
Data Storage
- Raw Chunks: Stored as JSON files in `./data/<site_slug>` within the project directory, and also mirrored in `C:/Users/<YourUsername>/crawled-docs/<site_slug>`.
- Embeddings: Stored in the Qdrant vector database in a collection named `<site_slug>`.
Limitations
- Simple Embeddings: Uses a basic TF-IDF approach for embeddings, which might be less accurate than sophisticated models like Sentence Transformers.
- Crawl Depth: Limited to a depth of 2 from the base URL.
- Error Handling: Basic error handling is implemented, but complex crawl scenarios might require more robust handling.
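To make the "simple embeddings" limitation concrete, here is a toy bag-of-words comparison in TypeScript. This is illustrative only: it uses raw term frequency over a fixed vocabulary, whereas the server also applies IDF weighting, and the function names here are invented for the sketch.

```typescript
// Toy term-frequency vectors plus cosine similarity, to illustrate why
// TF-IDF-style embeddings are cruder than learned sentence embeddings:
// they only match shared words, never paraphrases or synonyms.
function termFreq(text: string, vocab: string[]): number[] {
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  return vocab.map((v) => words.filter((w) => w === v).length);
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  const denom = norm(a) * norm(b);
  return denom === 0 ? 0 : dot / denom;
}
```

Two sentences that share the same vocabulary words score as highly similar, while a paraphrase using different words scores near zero, which is the accuracy gap the limitation above refers to.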