
MCP RAG Scanner

By @myonathan · 10 months ago · Apache-2.0

Overview

What is mcp-rag-scanner?

mcp-rag-scanner is an intelligent system that connects the Model Context Protocol (MCP) with Retrieval-Augmented Generation (RAG) systems. It scrapes, embeds, and stores content from web pages or files into a vector database for intelligent retrieval and AI-assisted search.

Use cases

Use cases include generating responses to user queries based on live data, enhancing customer support with accurate information retrieval, and building knowledge bases that are continuously updated from online resources.

How to use

To use mcp-rag-scanner, provide a list of URLs (HTML or PDF). The system will scrape the content, parse it, generate embeddings using the MCP Server, and store them in the Qdrant Vector Database. Users can then query the system for fact-based responses.
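
Submitting URLs might look like the following. This is an illustrative Python sketch, not the project's actual client code: the endpoint path `/api/scan` and the request shape are assumptions, since the Web API's routes are not documented here.

```python
import json
from urllib import request

# Hypothetical endpoint; the real route exposed by the
# .NET Core Web API may differ.
API_URL = "http://localhost:5000/api/scan"

# A list of HTML and PDF URLs to scrape, embed, and store.
payload = json.dumps({
    "urls": [
        "https://example.com/article.html",
        "https://example.com/whitepaper.pdf",
    ]
}).encode("utf-8")

req = request.Request(
    API_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req)  # uncomment once the API is running
```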

Key features

Key features include dynamic knowledge updates, MCP-based embedding generation, fact-grounded responses, scalability, and metadata preservation. The system allows for real-time updates and efficient communication with LLM servers.

Where to use

mcp-rag-scanner can be used in various fields such as research, education, customer support, and any domain requiring real-time access to factual information from multiple sources.

Content

Intelligent Fact-Grounded RAG System

This project is a future-ready intelligent system that dynamically builds and updates knowledge from live URLs (HTML & PDF) into a vector database, enabling fact-based, real-time responses powered by Retrieval-Augmented Generation (RAG).


🧠 Main Architecture Flow

.NET Core Web API 
    ↓
MCP Client 
    ↓
MCP Server (for embedding generation)
    ↓
Qdrant Vector Database (stores embeddings + metadata)
    ↓
RAG (Retrieval from Qdrant)
    ↓
LLM (Large Language Model generates user response)
    ↓
User
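
The flow above can be sketched as a chain of calls. This is an illustrative Python sketch with stub bodies; every function name here is an assumption, and the real project implements these stages in .NET Core.

```python
# Illustrative sketch of the architecture flow; all names are
# assumptions -- the real project implements this in .NET Core.

def scrape(url: str) -> str:
    """Scraper Service: download raw HTML/PDF content."""
    return f"raw content of {url}"            # stub

def embed(text: str) -> list[float]:
    """MCP Server: turn text into an embedding vector."""
    return [float(ord(c)) for c in text[:4]]  # stub vector

def store(vector: list[float], metadata: dict) -> dict:
    """Qdrant: persist the vector together with its metadata."""
    return {"vector": vector, "payload": metadata}  # stub point

def ingest(url: str) -> dict:
    """Web API -> MCP Client -> MCP Server -> Qdrant."""
    text = scrape(url)
    vector = embed(text)
    return store(vector, {"url": url})

point = ingest("https://example.com/article.html")
```

Each stage is swappable, which is what makes the pipeline modular: the embedder behind `embed` or the store behind `store` can change without touching the rest.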

🚀 Key Concepts

  • Dynamic Knowledge Updates
    Scrape new URLs anytime (HTML or PDF) → Parse → Embed → Save into Qdrant without system downtime.

  • MCP-Based Embedding Generation
    Use Model Context Protocol (MCP) clients to communicate with LLM servers for embedding documents efficiently.

  • Fact-Grounded Responses
    Instead of hallucinating answers, the system retrieves actual facts stored in vectors to generate responses.

  • Scalable and Future-Proof
    Modular components (Web API, MCP Client, Qdrant, RAG, LLM) allow swapping or upgrading technologies easily.

  • Metadata Preservation
    Each document vector stores not just embeddings but also critical metadata (e.g., URL, title, source type, scraped timestamp) for better retrieval and traceability.
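
A stored point with preserved metadata might look like the following sketch. The field names (`url`, `title`, `source_type`, `scraped_at`) are illustrative assumptions; the project's actual payload schema is not shown here.

```python
from datetime import datetime, timezone

# Illustrative Qdrant-style point: embedding plus traceability metadata.
# Field names are assumptions, not the project's actual schema.
point = {
    "id": 1,
    "vector": [0.12, -0.58, 0.33, 0.91],    # embedding from the MCP Server
    "payload": {                             # metadata preserved alongside it
        "url": "https://example.com/article.html",
        "title": "Example Article",
        "source_type": "html",               # or "pdf"
        "scraped_at": datetime.now(timezone.utc).isoformat(),
    },
}
```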


📚 How It Works

  1. User provides a list of URLs
    (Web pages or PDFs).

  2. Scraper Service
    Downloads and extracts the raw content.

  3. Document Parser Service
    Cleans the content depending on file type (HTML or PDF).

  4. Embedding Generation
    Content is sent to an MCP Server to generate numerical vector representations (embeddings).

  5. Vector Store Service
    Embeddings + metadata are stored into Qdrant Vector DB.

  6. User Query (RAG Flow)

    • User asks a question.
    • The system queries Qdrant to find the most relevant document chunks.
    • Retrieved chunks are passed into the LLM as context.
    • The LLM answers based on real retrieved information — not guesses.
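
The query flow in step 6 can be sketched with toy data. This is a minimal Python illustration using in-memory vectors and cosine similarity; the real system queries Qdrant and an MCP-connected LLM instead of these stubs.

```python
import math

# Minimal sketch of the RAG query flow with toy data; the real
# system queries Qdrant and an LLM instead of these stubs.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend vector store: (embedding, chunk text) pairs already ingested.
store = [
    ([1.0, 0.0], "Qdrant stores embeddings with metadata."),
    ([0.0, 1.0], "MCP clients talk to LLM servers."),
]

def retrieve(query_vec, k=1):
    """Rank stored chunks by similarity and return the top k."""
    ranked = sorted(store, key=lambda p: cosine(p[0], query_vec), reverse=True)
    return [text for _, text in ranked[:k]]

query_vec = [0.9, 0.1]          # stub embedding of the user's question
context = retrieve(query_vec)

# The retrieved chunks become the LLM's grounding context.
prompt = ("Answer using only this context:\n"
          + "\n".join(context)
          + "\n\nQ: What does Qdrant store?")
```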

🔮 Why This Matters

  • Traditional LLMs make up (hallucinate) information.
  • Our system retrieves real documents and augments LLMs, ensuring trustworthy, verifiable, and updatable answers.
  • This architecture represents the future of responsible AI: dynamic, modular, factual, and constantly learning.

🛠️ Technologies Used

  • .NET Core Web API
  • Model Context Protocol (MCP)
  • Qdrant Vector Database
  • Large Language Models (LLMs)
  • Scraper (HTML/PDF Parsing)
  • Newtonsoft.Json, HttpClient, MediatR, and more

📈 Future Enhancements (Vision)

  • Support scraping and embedding multi-language documents.
  • Enable real-time ingestion pipelines (streaming URLs).
  • Plug in different LLM providers via MCP.
  • Auto-refresh documents on a schedule to keep vectors up to date.
  • Build a user-friendly dashboard for managing the knowledge base.

📜 License - Apache License 2.0 (TL;DR)

This project follows the Apache License 2.0, which means:

  • You can use, modify, and distribute the code freely.
  • You must include the original license when distributing.
  • You must include the NOTICE file if one is provided.
  • You can use this in personal & commercial projects.
  • No warranties – use at your own risk! 🚀

How to Use:

  1. Fork or clone this repo
  2. Build your solution based on the architecture
  3. Keep the LICENSE file intact
  4. Add attribution like:

“Built with components from the Intelligent Fact-Grounded RAG System (Apache 2.0)”
