MCP RAG Search Server
What is the MCP RAG Search Server?
The mcp-rag-search-server is a custom MCP server that integrates Retrieval-Augmented Generation (RAG) capabilities with multiple search providers, specifically Google’s Gemini 2.0 and Linkup.
Use cases
Use cases include academic research where local documents are queried, web applications needing advanced search functionalities, and AI-driven content generation platforms.
How to use
To use the mcp-rag-search-server, clone the repository, install the required dependencies, set up the necessary environment variables including API keys, add local documents to the data directory, and start the server with `python server.py`.
Key features
Key features include RAG workflow using local documents, advanced AI-powered search through Google’s Gemini 2.0, traditional web search via Linkup, and integration with FastMCP for efficient server implementation.
Where to use
The mcp-rag-search-server can be utilized in various fields such as information retrieval, AI research, content generation, and any application requiring enhanced search capabilities.
Content
MCP Server with RAG and Multi-Search
A custom MCP (Model Context Protocol) server that provides RAG (Retrieval-Augmented Generation) capabilities using LlamaIndex, plus multiple web search options via Google’s Gemini 2.0 API and Linkup.
Features
- RAG workflow using local documents
- Multiple web search capabilities:
  - Google’s Gemini 2.0 for advanced AI-powered search
  - Linkup for traditional web search
- Built with FastMCP
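As an illustration of the decorator-based tool registration that FastMCP-style servers use, here is a pure-Python stand-in registry (this mimics the pattern only; it is not the real FastMCP API, and the tool bodies are placeholders for the LlamaIndex and search logic described below):

```python
# Sketch of the tool-registration pattern used by FastMCP-style MCP servers.
# This is a stand-in registry, not the real FastMCP API.
from typing import Callable, Dict


class ToolRegistry:
    """Collects named tools the way an MCP server exposes them."""

    def __init__(self, name: str):
        self.name = name
        self.tools: Dict[str, Callable] = {}

    def tool(self) -> Callable:
        """Decorator that registers a function under its own name."""
        def decorator(fn: Callable) -> Callable:
            self.tools[fn.__name__] = fn
            return fn
        return decorator


mcp = ToolRegistry("rag-search-server")


@mcp.tool()
def rag(query: str) -> str:
    # Placeholder: the real implementation queries local documents
    # through LlamaIndex.
    return f"rag results for: {query}"


@mcp.tool()
def web_search(query: str) -> str:
    # Placeholder: the real implementation calls Gemini or Linkup.
    return f"web results for: {query}"
```

In the real server, FastMCP additionally handles the MCP wire protocol, so registered tools become callable by any MCP client.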
Setup
Prerequisites
- Python 3.8 or higher
- Ollama installed locally with DeepSeek models (or modify to use your preferred model)
- Gemini API key (get one at https://ai.google.dev/)
- Linkup API key (optional)
Installation
1. Clone this repository:

   ```bash
   git clone <repository-url>
   cd own-mcp-server
   ```

2. Install required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up environment variables (create a `.env` file):

   ```bash
   # Required API keys
   GEMINI_API_KEY=your_gemini_api_key_here
   LINKUP_API_KEY=your_linkup_api_key_here

   # Optional configurations
   OLLAMA_HOST=http://localhost:11434
   ```

4. Add documents to the `data` directory (it will be created automatically if it doesn’t exist).
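The project loads the `.env` file via python-dotenv. As a rough illustration of what that step produces, here is a simplified parser (not the python-dotenv API) applied to the sample configuration above:

```python
def load_env_text(text: str) -> dict:
    """Parse KEY=value lines from .env-style text, skipping blank lines
    and # comments (a simplified version of what python-dotenv does)."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env


sample = """
# Required API keys
GEMINI_API_KEY=your_gemini_api_key_here
LINKUP_API_KEY=your_linkup_api_key_here

# Optional configurations
OLLAMA_HOST=http://localhost:11434
"""
config = load_env_text(sample)
```

python-dotenv would additionally export these values into `os.environ`, which is where the server reads them from.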
Running the Server
Start the server with:

```bash
python server.py
```
Usage
The server provides the following tools:
- `web_search`: Uses the best available search method (Gemini 2.0 preferred, fallback to Linkup)
- `gemini_search`: Search using Google’s Gemini 2.0 AI
- `linkup_search`: Search using Linkup
- `rag`: Query your local documents using RAG
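The "best available" selection in `web_search` can be sketched as a simple preference check; the function name and return values here are illustrative, not taken from the repository:

```python
from typing import Optional


def choose_search_backend(
    gemini_key: Optional[str],
    linkup_key: Optional[str],
) -> str:
    """Pick the preferred backend: Gemini 2.0 when its API key is set,
    otherwise fall back to Linkup, otherwise report none configured."""
    if gemini_key:
        return "gemini"
    if linkup_key:
        return "linkup"
    return "none"
```

In the server, this decision would be made from the `GEMINI_API_KEY` and `LINKUP_API_KEY` environment variables before dispatching the query.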
Required Libraries
This project uses:
- llama-index - Core RAG functionality
- ollama - Local LLM integration
- Google Generative AI SDK - Gemini 2.0 integration
- Linkup SDK - Web search capabilities
- FastMCP - MCP server implementation
- Python-dotenv - Environment management
- nest-asyncio - Async support
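A `requirements.txt` matching this list might look as follows; the exact package names and version pins come from the repository, so treat these entries as assumptions:

```
llama-index
ollama
google-generativeai
linkup-sdk
fastmcp
python-dotenv
nest-asyncio
```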
Troubleshooting
If you encounter issues:
- Make sure Ollama is properly installed and running
- Pull the DeepSeek model: `ollama pull deepseek-r1:1.5b`
- If you encounter Python 3.13 compatibility issues, consider downgrading to Python 3.11 or 3.10
- Verify your API keys are correct and have the necessary permissions
- For Gemini 2.0 issues, make sure your API key has access to the latest models
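A quick preflight check of the API-key configuration can save a troubleshooting round-trip; this helper is a hypothetical addition (not part of the repository), using the key names from the Setup section:

```python
def preflight(env: dict) -> list:
    """Return human-readable configuration problems.
    An empty list means the required keys are present."""
    problems = []
    if not env.get("GEMINI_API_KEY"):
        problems.append("GEMINI_API_KEY is missing (required)")
    if not env.get("LINKUP_API_KEY"):
        problems.append("LINKUP_API_KEY is missing (optional fallback disabled)")
    return problems
```

Running it against `os.environ` before starting the server surfaces missing keys immediately instead of at the first failed search call.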