mcp-openai-gemini-llama-example
What is mcp-openai-gemini-llama-example?
mcp-openai-gemini-llama-example is a basic example project that demonstrates how to build an AI agent using the Model Context Protocol (MCP) with open LLMs like Meta Llama 3, OpenAI, or Google Gemini, along with a SQLite database. It serves as an educational tool rather than a production-ready framework.
Use cases
Use cases include building interactive AI agents that can query databases, providing responses based on user prompts, and demonstrating the capabilities of LLMs in real-time applications.
How to use
To use mcp-openai-gemini-llama-example, you need to have Docker installed and running, a Hugging Face account with an access token for the Llama 3 model, and a Google API key for the Gemini model. After cloning the repository and installing the required packages, you can run the agent in interactive mode to interact with the SQLite database.
Key features
Key features include connecting to an MCP server, loading and using tools and resources from the MCP server, converting tools into LLM-compatible function calls, and interacting with LLMs using the openai SDK or google-genai SDK.
Where to use
mcp-openai-gemini-llama-example can be used in educational settings for learning about AI agent development, database interactions, and the integration of different LLMs. It is suitable for developers and researchers interested in exploring AI technologies.
How to use Anthropic MCP Server with open LLMs, OpenAI or Google Gemini
This repository contains a basic example of how to build an AI agent using the Model Context Protocol (MCP) with an open LLM (Meta Llama 3), OpenAI or Google Gemini, and a SQLite database. It’s designed to be a simple, educational demonstration, not a production-ready framework.
OpenAI example: https://github.com/jalr4ever/Tiny-OAI-MCP-Agent
Setup
This code sets up a simple CLI agent that can interact with a SQLite database through an MCP server. It uses the official SQLite MCP server and demonstrates:
- Connecting to an MCP server
- Loading and using tools and resources from the MCP server
- Converting tools into LLM-compatible function calls
- Interacting with an LLM using the `openai` SDK or the `google-genai` SDK
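To illustrate the tool-conversion step, here is a minimal sketch (not the repository's actual code) of mapping an MCP tool definition to the OpenAI function-calling format. It assumes each MCP tool carries `name`, `description`, and a JSON-Schema `inputSchema`, as returned by an MCP server's `tools/list` call; the sample `read_query` tool is a hypothetical stand-in for what the SQLite MCP server might advertise.

```python
# Sketch: convert an MCP tool definition into the OpenAI `tools` entry format.
# Assumes the MCP tool shape: {"name": ..., "description": ..., "inputSchema": ...}.

def mcp_tool_to_openai(tool: dict) -> dict:
    """Map one MCP tool definition to an OpenAI function-calling tool entry."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP's inputSchema is already JSON Schema, which is what
            # OpenAI expects under `parameters`.
            "parameters": tool.get(
                "inputSchema", {"type": "object", "properties": {}}
            ),
        },
    }

# Example: a hypothetical read_query tool, roughly as the SQLite MCP
# server might advertise it.
example_tool = {
    "name": "read_query",
    "description": "Execute a SELECT query on the SQLite database",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}
openai_tool = mcp_tool_to_openai(example_tool)
```

The resulting dict can be passed in the `tools` list of an `openai` chat completions request, so the model can emit a function call that the agent then forwards back to the MCP server.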
How to use it
Prerequisites:
- Docker installed and running.
- A Hugging Face account and an access token (for using the Llama 3 model).
- A Google API key (for using the Gemini model).
Installation
-
Clone the repository:
git clone https://github.com/philschmid/mcp-openai-gemini-llama-example cd mcp-openai-gemini-llama-example
-
Install the required packages:
pip install -r requirements.txt
-
Log in to Hugging Face
huggingface-cli login --token YOUR_TOKEN
Examples
Llama 3
Run the following command:

```bash
python sqlite_llama_mcp_agent.py
```
The agent will start in interactive mode. You can type in prompts to interact with the database. Type “quit”, “exit” or “q” to stop the agent.
Example conversation:
```
Enter your prompt (or 'quit' to exit): what tables are available?
Response: The available tables are: albums, artists, customers, employees, genres, invoice_items, invoices, media_types, playlists, playlist_track, tracks
Enter your prompt (or 'quit' to exit): how many artists are there
Response: There are 275 artists in the database.
```
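The interactive loop described above can be sketched roughly as follows. This is a simplified stand-in for the repository's agent loop; `run_agent` is a hypothetical placeholder for the real MCP + LLM round trip, injected as a callable so the loop itself stays self-contained.

```python
# Sketch of the interactive prompt loop. `run_agent` is a hypothetical
# placeholder for the actual MCP tool-calling LLM round trip.

EXIT_WORDS = {"quit", "exit", "q"}

def should_exit(user_input: str) -> bool:
    """Return True when the user typed one of the exit commands."""
    return user_input.strip().lower() in EXIT_WORDS

def chat_loop(run_agent, read_input=input, write=print):
    """Read prompts until an exit word; answer each one via run_agent()."""
    while True:
        prompt = read_input("Enter your prompt (or 'quit' to exit): ")
        if should_exit(prompt):
            break
        write("Response:", run_agent(prompt))
```

Injecting `read_input` and `write` keeps the loop testable without a real terminal; in the actual script they would simply be `input` and `print`.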
Gemini
Run the following command:

```bash
GOOGLE_API_KEY=YOUR_API_KEY python sqlite_gemini_mcp_agent.py
```
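For the Gemini path, the MCP tool schema needs a similar conversion into the function-declaration dict shape that the `google-genai` SDK accepts. A hedged sketch follows; the field names mirror Gemini function declarations (`name`, `description`, `parameters`), and the strip list of JSON-Schema keys that Gemini's schema dialect rejects is an assumption, not the repository's code.

```python
# Sketch: map an MCP tool definition to a Gemini function declaration.
# Gemini's schema dialect rejects some JSON-Schema keys, so strip them
# recursively before passing the schema along.

# Assumption: a minimal strip list; a real implementation may need more keys.
UNSUPPORTED_KEYS = {"$schema", "additionalProperties"}

def clean_schema(schema):
    """Recursively drop JSON-Schema keys Gemini does not accept."""
    if isinstance(schema, dict):
        return {k: clean_schema(v) for k, v in schema.items()
                if k not in UNSUPPORTED_KEYS}
    if isinstance(schema, list):
        return [clean_schema(v) for v in schema]
    return schema

def mcp_tool_to_gemini(tool: dict) -> dict:
    """Map one MCP tool definition to a Gemini function-declaration dict."""
    return {
        "name": tool["name"],
        "description": tool.get("description", ""),
        "parameters": clean_schema(tool.get("inputSchema", {})),
    }
```

The resulting declarations would be supplied to the model through the SDK's tool configuration when generating a response.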
Future plans
I’m working on a toolkit to make implementing AI agents using MCP easier. Stay tuned for updates!