docs-crawler-mcp
What is docs-crawler-mcp?
docs-crawler-mcp is an MCP server designed to crawl documentation websites, convert the content into Markdown format, and store embeddings for intelligent search capabilities using Qdrant.
Use cases
Use cases for docs-crawler-mcp include automating the extraction of documentation from websites, maintaining an updated knowledge base, and enhancing search capabilities within documentation systems.
How to use
To use docs-crawler-mcp, clone the repository, install the dependencies with `npm install`, and build the project with `npm run build`. Configure the server in your MCP setup and run the built server binary to begin crawling and querying documentation.
Key features
Key features include crawling documentation sites using Puppeteer, converting HTML to Markdown with jsdom and turndown, generating embeddings with Transformers, storing and searching embeddings in the Qdrant vector database, and exposing MCP resources for integration.
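The embedding search described above can be sketched in miniature. The snippet below ranks stored documents by cosine similarity to a query embedding; it is only an illustration with toy three-dimensional vectors, and the `Doc` type, `search` helper, and sample data are hypothetical (the actual server delegates storage and nearest-neighbor search to Qdrant):

```typescript
// Toy in-memory version of embedding search; real embeddings have
// hundreds of dimensions and live in Qdrant, not a local array.
type Doc = { id: string; text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored docs against a query embedding, highest similarity first.
function search(query: number[], docs: Doc[], topK = 3): Doc[] {
  return [...docs]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, topK);
}

const docs: Doc[] = [
  { id: "a", text: "install guide", embedding: [1, 0, 0] },
  { id: "b", text: "api reference", embedding: [0, 1, 0] },
];
console.log(search([0.9, 0.1, 0], docs, 1)[0].id); // "a"
```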
Where to use
docs-crawler-mcp can be used in various fields such as software development, technical documentation management, and knowledge management systems where up-to-date documentation search is essential.
Clients Supporting MCP
The following are the main client applications that support the Model Context Protocol. Visit each client's official website for more information.
Content
Quasar Crawler MCP Server
An MCP server that crawls documentation websites, converts them to Markdown, and stores embeddings in Qdrant for intelligent, up-to-date documentation search.
Features
- Crawl documentation sites using Puppeteer
- Convert HTML to Markdown with jsdom and turndown
- Generate embeddings with Transformers
- Store and search embeddings in Qdrant vector database
- Expose MCP resources and tools for integration with other systems
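Between the convert and embed steps, crawled Markdown typically has to be split into passages small enough to embed individually. A chunking step could look like the following; `chunkMarkdown` is a hypothetical helper written for illustration, not the server's actual code:

```typescript
// Hypothetical helper: split converted Markdown into heading-delimited
// chunks so each chunk can be embedded and stored as its own vector.
function chunkMarkdown(markdown: string): { heading: string; body: string }[] {
  const chunks: { heading: string; body: string }[] = [];
  let current = { heading: "", body: "" };
  for (const line of markdown.split("\n")) {
    if (/^#{1,6}\s/.test(line)) {
      // A new heading starts a new chunk; flush the previous one if non-empty.
      if (current.heading || current.body.trim()) chunks.push(current);
      current = { heading: line.replace(/^#{1,6}\s+/, ""), body: "" };
    } else {
      current.body += line + "\n";
    }
  }
  if (current.heading || current.body.trim()) chunks.push(current);
  return chunks;
}

const md = "# Install\nRun npm install.\n# Usage\nStart the server.\n";
console.log(chunkMarkdown(md).map(c => c.heading)); // ["Install", "Usage"]
```

Heading-based chunking keeps each vector aligned with one documentation topic, which tends to make similarity search results easier to attribute back to a source section.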
Installation
Clone the repository and install dependencies:
npm install
Build the project:
npm run build
Usage
Configure this server as an MCP server (e.g., in Claude Desktop configuration):
{
  "mcpServers": {
    "quasar-crawler": {
      "command": "/path/to/quasar-crawler/build/index.js"
    }
  }
}
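If the build output is not directly executable on your system (for example, when the shebang line is not honored), MCP client configurations also accept an explicit interpreter via `command` plus `args`. An equivalent configuration, using the same placeholder path, might look like:

```json
{
  "mcpServers": {
    "quasar-crawler": {
      "command": "node",
      "args": ["/path/to/quasar-crawler/build/index.js"]
    }
  }
}
```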
Run the built server binary, then use the provided MCP tools and resources to crawl and query documentation content.
Development
For active development with auto-rebuild:
npm run watch
Available scripts:
- `build` - Compile the TypeScript source
- `watch` - Rebuild on file changes
- `inspector` - Launch MCP Inspector for debugging
Debugging
Use the MCP Inspector to debug server communication:
npm run inspector
This will provide a URL to access debugging tools in your browser.