

By @AnuragB7, 9 months ago
License: MIT
MCP-RAG

Overview

What is MCP-RAG?

MCP-RAG is a system built with the Model Context Protocol (MCP) designed to handle large files (up to 200MB) using intelligent chunking strategies, multi-format document support, and enterprise-grade reliability.

Use cases

Use cases for MCP-RAG include processing legal documents for case analysis, extracting data from research papers, managing large datasets in Excel, and performing semantic searches across multiple documents for information retrieval.

How to use

To use MCP-RAG, integrate it into any application that needs to process large documents. Use its API to upload files, specify processing options, and retrieve results in various formats.
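
The upload-then-query flow described above can be sketched in a few lines. This is a hypothetical, in-memory stand-in (the class name `MiniRAG`, its methods, and the naive keyword matching are all illustrative assumptions, not the real MCP-RAG API, which uses a vector database and semantic search):

```python
# Hypothetical sketch of the upload -> process -> query flow.
# An in-memory list stands in for the vector database.

class MiniRAG:
    def __init__(self):
        self.chunks = []  # list of (doc_name, chunk_text) pairs

    def upload(self, doc_name, text, chunk_size=50):
        # Split the document into fixed-size character chunks.
        for i in range(0, len(text), chunk_size):
            self.chunks.append((doc_name, text[i:i + chunk_size]))

    def query(self, keyword):
        # Naive keyword retrieval standing in for semantic search.
        return [(d, c) for d, c in self.chunks if keyword.lower() in c.lower()]

rag = MiniRAG()
rag.upload("contract.pdf",
           "The lessee shall pay rent monthly. Termination requires notice.")
hits = rag.query("rent")  # one matching chunk, attributed to contract.pdf
```

The real system replaces the keyword filter with vector similarity and adds source attribution, but the upload/query shape is the same.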

Key features

Key features include multi-format document support (PDF, DOCX, Excel, CSV), adaptive chunking for large files, and advanced RAG capabilities such as semantic search and cross-document queries. It also integrates with the Model Context Protocol for standardized communication and offers enterprise-ready options such as custom LLM endpoints and a choice of vector databases.

Where to use

MCP-RAG can be used in various fields including data analysis, document management, legal tech, academic research, and any domain requiring efficient processing of large documents.

Content

📚 MCP-RAG

MCP-RAG is a system built with the Model Context Protocol (MCP) that handles large files (up to 200MB) using intelligent chunking strategies, multi-format document support, and enterprise-grade reliability.

Python 3.11+
License: MIT
MCP

🌟 Features

📄 Multi-Format Document Support

  • PDF: Intelligent page-by-page processing with table detection
  • DOCX: Paragraph and table extraction with formatting preservation
  • Excel: Sheet-aware processing with column context (.xlsx/.xls)
  • CSV: Smart row batching with header preservation
  • PPTX: PowerPoint presentation support
  • Images: JPEG, PNG, WebP, GIF, and other formats, with OCR text extraction
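
The CSV strategy above can be sketched concretely. This is a minimal reading of "smart row batching with header preservation" (the function name and batch size are illustrative assumptions): every batch repeats the header row so each chunk stays self-describing when embedded independently.

```python
import csv
import io

def batch_csv_rows(csv_text, batch_size=2):
    # Parse the CSV and separate the header from the data rows.
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    # Prepend the header to every batch so downstream chunks keep column context.
    return [[header] + body[i:i + batch_size]
            for i in range(0, len(body), batch_size)]

batches = batch_csv_rows("name,age\nAda,36\nAlan,41\nGrace,45\n")
# every batch begins with the header row ['name', 'age']
```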

🚀 Large File Processing

  • Adaptive chunking: Different strategies based on file size
  • Memory management: Streaming processing for 50MB+ files
  • Progress tracking: Real-time progress indicators
  • Timeout handling: Graceful handling of long-running operations
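
A minimal sketch of size-adaptive chunking combined with streaming reads. The thresholds and chunk sizes here are assumptions for illustration (the project's actual cutoffs are not documented in this page); the idea is that bigger files get coarser chunks so the total chunk count stays bounded, and 50MB+ files are read incrementally rather than loaded whole.

```python
def pick_chunk_size(file_size_bytes):
    # Assumed thresholds: finer chunks for small files, coarser for large ones.
    mb = file_size_bytes / (1024 * 1024)
    if mb < 10:
        return 1_000      # small files: fine-grained chunks
    elif mb < 50:
        return 4_000      # medium files: coarser chunks
    else:
        return 16_000     # 50MB+: large chunks, paired with streaming reads

def stream_chunks(path, chunk_size):
    # Generator-based streaming keeps memory usage flat for large files.
    with open(path, "r", encoding="utf-8") as f:
        while block := f.read(chunk_size):
            yield block
```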

🧠 Advanced RAG Capabilities

  • Semantic search: Vector similarity with confidence scores
  • Cross-document queries: Search across multiple documents simultaneously
  • Source attribution: Citations with similarity scores
  • Hybrid retrieval: Combine semantic and keyword search
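
The semantic-search bullet can be illustrated with a pure-Python sketch of cosine-similarity retrieval with confidence scores. Real embeddings come from a model; the tiny hand-made vectors below stand in, and the function names are illustrative, not MCP-RAG's API.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors, used as the confidence score.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, index, top_k=2):
    # index: list of (doc_id, vector); returns (doc_id, confidence) pairs,
    # highest-confidence first.
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

index = [("doc_a", [1.0, 0.0]), ("doc_b", [0.0, 1.0]), ("doc_c", [0.7, 0.7])]
results = search([1.0, 0.1], index)
# doc_a ranks first for a query vector close to [1, 0]
```

Cross-document queries fall out of the same mechanism: the index simply holds chunks from many documents, and each hit carries its `doc_id` for source attribution.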

🔌 Model Context Protocol (MCP) Integration

  • Universal tool interface: Standardized AI-to-tool communication
  • Auto-discovery: LangChain agents automatically find and use tools
  • Secure communication: Built-in permission controls
  • Extensible architecture: Easy to add new document processors
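
The auto-discovery idea can be sketched with a plain tool registry: decorated functions register themselves, and an agent enumerates them by name without hard-coded wiring. This is illustrative only; the real MCP SDK exposes tools over the protocol rather than a module-level dict, and the processor stubs below are invented names.

```python
TOOL_REGISTRY = {}

def mcp_tool(func):
    # Register the function under its own name for later discovery.
    TOOL_REGISTRY[func.__name__] = func
    return func

@mcp_tool
def extract_pdf_text(path: str) -> str:
    """Stub processor: a real one would parse the PDF at `path`."""
    return f"text from {path}"

@mcp_tool
def search_documents(query: str) -> list:
    """Stub semantic-search tool."""
    return [f"result for {query}"]

# An agent discovers the available tools at runtime:
discovered = sorted(TOOL_REGISTRY)
# → ['extract_pdf_text', 'search_documents']
```

Adding a new document processor is then just another decorated function, which is the extensibility point the bullet above describes.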

🏢 Enterprise Ready

  • Custom LLM endpoints: Support for any OpenAI-compatible API
  • Vector database options: ChromaDB (local) + Milvus (production)
  • Batch processing: Handles API rate limits and batch size constraints
  • Error recovery: Retry logic and graceful degradation
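
The retry behavior above follows a standard pattern; this sketch shows one version of it, where the attempt count and backoff delays are assumptions, not the project's actual settings.

```python
import time

def with_retries(func, max_attempts=3, base_delay=0.01):
    # Call func, retrying on failure with exponential backoff between attempts.
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts; graceful degradation is the caller's job
            time.sleep(base_delay * 2 ** (attempt - 1))

# Example: a call that fails twice (e.g. rate-limited), then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_retries(flaky)  # "ok" on the third attempt
```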

🏗️ Architecture

┌─────────────────┐      ┌──────────────────┐      ┌─────────────────┐
│    Streamlit    │      │    LangChain     │      │   MCP Server    │
│    Frontend     │◄────►│      Agent       │◄────►│     (Tools)     │
└─────────────────┘      └──────────────────┘      └─────────────────┘
                                  │
        ┌─────────────────────────┼───────────────────────┐
        ▼                         ▼                       ▼
┌────────────────┐       ┌─────────────────┐       ┌─────────────┐
│    Document    │       │ Vector Database │       │   LLM API   │
│   Processors   │       │   (ChromaDB)    │       │  Endpoint   │
└────────────────┘       └─────────────────┘       └─────────────┘

🚀 Quick Start

Prerequisites

  • Python 3.11+
  • OpenAI API key or compatible LLM endpoint
  • 8GB+ RAM (for large file processing)

Installation

# Clone the repository
git clone https://github.com/yourusername/rag-large-file-processor.git
cd rag-large-file-processor

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create .env file
cat > .env << EOF
OPENAI_API_KEY=your_openai_api_key_here
BASE_URL=https://api.openai.com/v1
MODEL_NAME=gpt-4o
VECTOR_DB_TYPE=chromadb
EOF

# Launch the Streamlit frontend
streamlit run streamlit_app.py
