
mcpRAG

RAG using Ollama for embeddings, Gemini as the LLM, and an MCP server for agentic use

Overview

What is mcpRAG

mcpRAG is a Retrieval-Augmented Generation (RAG) system that uses Ollama for embeddings, Gemini as the Large Language Model (LLM), and an MCP server for agentic use.

Use cases

Use cases for mcpRAG include creating intelligent chatbots that provide accurate answers, enhancing search engines with contextual understanding, and developing applications that require dynamic content generation from large text datasets.

How to use

To use mcpRAG, prepare text documents; they are chunked into JSON records containing the file name, chunk id, and chunk text. The chunks are converted into embeddings with the nomic embedding model, indexed with FAISS, and queried to retrieve the relevant text, which the LLM then uses to generate responses.
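
For illustration only, a single chunk record might look like the following; the exact field names are an assumption, since the project only states that each record holds the file name, chunk id, and chunk text.

```python
# Hypothetical shape of one chunk record (field names assumed, not
# taken from the project's source):
chunk_record = {
    "file_name": "notes.txt",
    "chunk_id": 0,
    "chunk_text": "First few hundred characters of notes.txt ...",
}
```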

Key features

Key features of mcpRAG include the use of open-source embeddings, a vector database (FAISS), and the Gemini LLM for efficient information retrieval and response generation.

Where to use

mcpRAG can be used in various fields such as natural language processing, information retrieval, chatbots, and any application requiring intelligent text generation based on user queries.

Content

mcpRAG

RAG using open-source embeddings, an open-source vector database, and the Gemini LLM


In this project I have created a RAG system over txt documents:

Embeddings: nomic embeddings, run locally via Ollama
LLM: gemini-2.0-flash
Vector Database: FAISS

All the txt files are chunked into JSON records containing the file name, chunk id, and chunk text, and stored locally.
Each chunk is converted into an embedding, and the embeddings are collected in a list; a sketch of both steps follows.
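
A minimal sketch of the chunking and embedding steps, assuming the `ollama` Python client and a locally pulled `nomic-embed-text` model (function names, file names, and the 500-character chunk size are illustrative, not the project's actual code):

```python
import json
import ollama  # assumes a local Ollama server with `nomic-embed-text` pulled

def chunk_file(path, chunk_size=500):
    """Split one txt file into fixed-size character chunks."""
    text = open(path, encoding="utf-8").read()
    return [
        {"file_name": path, "chunk_id": i, "chunk_text": text[start:start + chunk_size]}
        for i, start in enumerate(range(0, len(text), chunk_size))
    ]

# Chunk a document and store the records locally as JSON.
chunks = chunk_file("notes.txt")
with open("chunks.json", "w", encoding="utf-8") as f:
    json.dump(chunks, f)

# Embed each chunk and collect the vectors in a list.
embeddings = [
    ollama.embeddings(model="nomic-embed-text", prompt=c["chunk_text"])["embedding"]
    for c in chunks
]
```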

This embedding list is indexed using FAISS and stored locally.
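
A sketch of the indexing step with FAISS; the flat L2 index type is an assumption, since the project does not say which index it uses:

```python
import numpy as np
import faiss

vectors = np.array(embeddings, dtype="float32")  # the list built above
index = faiss.IndexFlatL2(vectors.shape[1])      # exact L2 search over the vectors
index.add(vectors)

faiss.write_index(index, "chunks.index")         # persist the index locally
np.save("embeddings.npy", vectors)               # keep the embedding file as well
```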
When a query comes in, it is embedded using the same nomic embeddings; the FAISS index is searched with the query embedding, and the relevant indices (the locations of the matching chunks) are retrieved. These indices are then used to look up the actual text in the JSON file.
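
The retrieval step could look roughly like this; the top-k value, the example query, and the file names are assumptions:

```python
import json
import numpy as np
import faiss
import ollama

# Load the stored index and the chunk records.
index = faiss.read_index("chunks.index")
chunks = json.load(open("chunks.json", encoding="utf-8"))

# Embed the query with the same model used for the chunks.
query = "What does the document say about revenue?"
q_vec = ollama.embeddings(model="nomic-embed-text", prompt=query)["embedding"]
q_vec = np.array([q_vec], dtype="float32")

# Retrieve the positions of the nearest chunks, then map them back
# to the actual text stored in the JSON file.
_, indices = index.search(q_vec, 3)
context = "\n\n".join(chunks[i]["chunk_text"] for i in indices[0])
```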

This text is passed to the LLM together with the query to formulate the answer.
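
One way to make that call with the `google-generativeai` client; the prompt wording is an assumption, and only the `gemini-2.0-flash` model name comes from the project:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash")

# Combine the retrieved context with the user's query.
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {query}"
)
answer = model.generate_content(prompt).text
print(answer)
```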

Additional text can be appended to the existing index; queries are then run on the updated index by loading the stored index and embedding file.
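
Extending the stored index might look like the following sketch; it reuses `chunk_file` from the earlier sketch along with the same (assumed) file names:

```python
import json
import numpy as np
import faiss
import ollama

# Load the previously stored index and chunk metadata.
index = faiss.read_index("chunks.index")
chunks = json.load(open("chunks.json", encoding="utf-8"))

# Chunk and embed the new document.
new_chunks = chunk_file("new_notes.txt")
new_vecs = np.array(
    [ollama.embeddings(model="nomic-embed-text", prompt=c["chunk_text"])["embedding"]
     for c in new_chunks],
    dtype="float32",
)

# Append to the existing index and persist the updated artifacts.
index.add(new_vecs)
chunks.extend(new_chunks)
faiss.write_index(index, "chunks.index")
with open("chunks.json", "w", encoding="utf-8") as f:
    json.dump(chunks, f)
```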

Tools

No tools
