
Ejemplo Mcp Client Postgres

@3ottA · a year ago · MIT
Free · Community · AI Systems
A CLI chatbot integrating MCP for flexible tool support with LLMs.

Overview

What is Ejemplo Mcp Client Postgres?

ejemplo-mcp-client-postgres is a simple CLI chatbot that integrates the Model Context Protocol (MCP) with PostgreSQL as a backend. It demonstrates the flexibility of MCP by supporting multiple tools through MCP servers and is compatible with any LLM provider adhering to OpenAI API standards.

Use cases

Use cases include building chatbots for customer service, creating interactive educational assistants, and developing tools for data retrieval and interaction in applications that utilize PostgreSQL databases.

How to use

To use ejemplo-mcp-client-postgres, install the required dependencies with `pip install -r requirements.txt`, set up your environment variables in a `.env` file, configure your servers in `servers_config.json`, and run the client with `python main.py`. Interact with the assistant and type `quit` or `exit` to end the session.

Key features

Key features include automatic tool discovery from configured servers, dynamic inclusion of tools based on server capabilities, and compatibility with various LLM providers. It also supports environment variables for configuration.

Where to use

ejemplo-mcp-client-postgres can be used in various fields such as customer support, educational tools, and any application requiring conversational AI capabilities, particularly where PostgreSQL is used as a database.

Content

MCP Simple Chatbot

This example demonstrates how to integrate the Model Context Protocol (MCP) into a simple CLI chatbot. The implementation showcases MCP’s flexibility by supporting multiple tools through MCP servers and is compatible with any LLM provider that follows OpenAI API standards.

Requirements

  • Python 3.10
  • python-dotenv
  • requests
  • mcp
  • uvicorn

Installation

  1. Install the dependencies:

    pip install -r requirements.txt
    
  2. Set up environment variables:

    Create a .env file in the root directory and add your API key:

    LLM_API_KEY=your_api_key_here
    
  3. Configure servers:

    The servers_config.json file follows the same structure as Claude Desktop's configuration file, allowing for easy integration of multiple servers.
    Here’s an example:

    {
      "mcpServers": {
        "sqlite": {
          "command": "uvx",
          "args": [
            "mcp-server-sqlite",
            "--db-path",
            "./test.db"
          ]
        },
        "puppeteer": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-puppeteer"
          ]
        }
      }
    }

    Environment variables are supported as well. Pass them as you would with the Claude Desktop App.

    Example:

    {
      "mcpServers": {
        "postgres": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-postgres",
            "postgresql://postgres:postgres@localhost:5432/ssd"
          ]
        }
      }
    }
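At startup the client reads this file to know which server processes to launch. As a rough sketch of how such a configuration might be loaded, here is a stdlib-only version; the `load_server_config` helper and the `${VAR}` substitution syntax are illustrative assumptions, not guaranteed details of this example:

```python
import json
import os
import re

def load_server_config(path: str = "servers_config.json") -> dict:
    """Load MCP server definitions, expanding ${VAR} references
    from the environment (hypothetical helper, for illustration)."""
    with open(path) as f:
        raw = f.read()
    # Replace ${VAR} with its environment value; leave it untouched if unset.
    expanded = re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        raw,
    )
    return json.loads(expanded)["mcpServers"]
```

This keeps secrets such as database passwords out of the checked-in config file, mirroring how the Claude Desktop App lets environment variables flow into server arguments.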

Usage

  1. Run the client:

    python main.py
    
  2. Interact with the assistant:

    The assistant will automatically detect available tools and can respond to queries based on the tools provided by the configured servers.

  3. Exit the session:

    Type quit or exit to end the session.
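The session loop above can be sketched as follows. This is a minimal stand-in, not the example's actual code: `run_session` and `respond` are invented names, and `respond` abstracts away the whole LLM-plus-tools pipeline:

```python
def run_session(inputs, respond):
    """Process user messages until 'quit' or 'exit' is seen.
    `inputs` is any iterable of user strings (stdin in the real CLI);
    `respond` is a callable standing in for the LLM/tool pipeline.
    Returns the list of assistant replies produced before exit."""
    replies = []
    for user_input in inputs:
        if user_input.strip().lower() in ("quit", "exit"):
            break
        replies.append(respond(user_input))
    return replies
```

Separating the loop from I/O this way also makes the session logic easy to unit-test without a terminal.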

Architecture

  • Tool Discovery: Tools are automatically discovered from configured servers.
  • System Prompt: Tools are dynamically included in the system prompt, allowing the LLM to understand available capabilities.
  • Server Integration: Supports any MCP-compatible server, tested with various server implementations including Uvicorn and Node.js.
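The dynamic system prompt mentioned above might be assembled roughly like this; the prompt wording and the `(name, description)` tuple shape are assumptions made for illustration:

```python
def build_system_prompt(tools):
    """Render discovered tools into a system prompt so the LLM
    knows what it can call. `tools` is a list of (name, description)
    pairs; the exact wording here is illustrative."""
    lines = ["You are a helpful assistant with access to these tools:", ""]
    for name, description in tools:
        lines.append(f"- {name}: {description}")
    lines.append("")
    lines.append("Use a tool only when it is needed to answer the user.")
    return "\n".join(lines)
```

Because the prompt is rebuilt from whatever the servers report, adding a new MCP server automatically exposes its tools to the LLM with no client-side code changes.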

Class Structure

  • Configuration: Manages environment variables and server configurations
  • Server: Handles MCP server initialization, tool discovery, and execution
  • Tool: Represents individual tools with their properties and formatting
  • LLMClient: Manages communication with the LLM provider
  • ChatSession: Orchestrates the interaction between user, LLM, and tools
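To make the `Tool` class's role concrete, here is a sketch of what it might look like; the field names and `format_for_llm` method are assumptions based on the description above, not taken from the actual source:

```python
from dataclasses import dataclass, field

@dataclass
class Tool:
    """One MCP tool, able to format itself for the system prompt.
    Field names are assumed from the class description, not the real code."""
    name: str
    description: str
    input_schema: dict = field(default_factory=dict)

    def format_for_llm(self) -> str:
        # List the argument names from a JSON-Schema-style input schema.
        args = ", ".join(self.input_schema.get("properties", {}))
        return (
            f"Tool: {self.name}\n"
            f"Description: {self.description}\n"
            f"Arguments: {args or 'none'}"
        )
```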

Logic Flow

  1. Tool Integration:

    • Tools are dynamically discovered from MCP servers
    • Tool descriptions are automatically included in system prompt
    • Tool execution is handled through standardized MCP protocol
  2. Runtime Flow:

    • User input is received
    • Input is sent to LLM with context of available tools
    • LLM response is parsed:
      • If it’s a tool call → execute tool and return result
      • If it’s a direct response → return to user
    • Tool results are sent back to LLM for interpretation
    • Final response is presented to user
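The "LLM response is parsed" step can be sketched as below, assuming the LLM is instructed to emit tool calls as a JSON object with `tool` and `arguments` keys; that JSON convention is chosen here for illustration and is not necessarily the format this example uses:

```python
import json

def parse_llm_response(text: str):
    """Classify an LLM reply: ('tool', name, args) for a JSON tool
    call, ('direct', text) for plain prose. The JSON shape is an
    assumed convention, not the example's documented format."""
    try:
        payload = json.loads(text)
    except json.JSONDecodeError:
        return ("direct", text)
    if isinstance(payload, dict) and "tool" in payload:
        return ("tool", payload["tool"], payload.get("arguments", {}))
    return ("direct", text)
```

A direct response goes straight back to the user; a tool result is instead fed back to the LLM for interpretation before the final answer is presented.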
