MCP Explorer

mcp-client-postgres

@Notthingham-Miedoon 10 months ago
3 · MIT
Free · Community
AI Systems
A CLI chatbot integrating MCP for flexible tool support and LLM compatibility.

Overview

What is mcp-client-postgres?

mcp-client-postgres is a command-line interface (CLI) chatbot that integrates the Model Context Protocol (MCP) to provide flexible tool support and compatibility with various large language model (LLM) providers that adhere to OpenAI API standards.

Use cases

Use cases for mcp-client-postgres include creating interactive chatbots for customer service, developing tools for data querying and analysis, and building applications that require flexible integration with multiple APIs.

How to use

To use mcp-client-postgres, install the required dependencies with `pip install -r requirements.txt`, set your environment variables in a `.env` file, configure the servers in `servers_config.json`, and run the client with `python main.py`. Interact with the assistant, and type `quit` or `exit` to end the session.

Key features

Key features of mcp-client-postgres include automatic tool discovery from configured servers, dynamic inclusion of tools in responses, and compatibility with multiple LLM providers through the OpenAI API.

Where to use

mcp-client-postgres can be used in various fields such as software development, data analysis, and customer support, where integration with different tools and LLMs is beneficial.

Content

MCP Simple Chatbot

This example demonstrates how to integrate the Model Context Protocol (MCP) into a simple CLI chatbot. The implementation showcases MCP’s flexibility by supporting multiple tools through MCP servers and is compatible with any LLM provider that follows OpenAI API standards.

Requirements

  • Python 3.10
  • python-dotenv
  • requests
  • mcp
  • uvicorn

Installation

  1. Install the dependencies:

    pip install -r requirements.txt
    
  2. Set up environment variables:

    Create a .env file in the root directory and add your API key:

    LLM_API_KEY=your_api_key_here
    
  3. Configure servers:

    The `servers_config.json` file follows the same structure as Claude Desktop's configuration, allowing easy integration of multiple servers.
    Here's an example:

    {
      "mcpServers": {
        "sqlite": {
          "command": "uvx",
          "args": [
            "mcp-server-sqlite",
            "--db-path",
            "./test.db"
          ]
        },
        "puppeteer": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-puppeteer"
          ]
        }
      }
    }

    Environment variables are supported as well. Pass them as you would with the Claude Desktop App.

    Example:

    {
      "mcpServers": {
        "postgres": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-postgres",
            "postgresql://postgres:postgres@localhost:5432/ssd"
          ]
        }
      }
    }
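The configuration above is plain JSON, so loading it takes only a few lines. Here is a minimal sketch of how a client might read it and assemble each server's launch command (function names are illustrative, not the actual `main.py` API):

```python
import json


def load_server_config(path: str = "servers_config.json") -> dict:
    """Read the server definitions from the JSON config file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def parse_servers(config: dict) -> dict[str, list[str]]:
    """Map each server name to its full launch command (executable + args)."""
    return {
        name: [entry["command"], *entry["args"]]
        for name, entry in config["mcpServers"].items()
    }
```

For the sqlite entry shown above, `parse_servers` would produce the command `["uvx", "mcp-server-sqlite", "--db-path", "./test.db"]`, ready to hand to a subprocess-based MCP transport.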

Usage

  1. Run the client:

    python main.py
    
  2. Interact with the assistant:

    The assistant will automatically detect available tools and can respond to queries based on the tools provided by the configured servers.

  3. Exit the session:

    Type `quit` or `exit` to end the session.
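The exit handling described above amounts to a small input loop. A sketch, with illustrative function names rather than the actual `main.py` internals:

```python
def is_exit_command(user_input: str) -> bool:
    """Return True when the user asked to end the session."""
    return user_input.strip().lower() in {"quit", "exit"}


def chat_loop(get_input, respond) -> None:
    """Read user input until a quit/exit command, delegating replies to respond()."""
    while True:
        user_input = get_input("You: ")
        if is_exit_command(user_input):
            break
        print(respond(user_input))
```

Normalizing with `strip().lower()` means `QUIT` or ` exit ` also ends the session, which keeps the CLI forgiving about whitespace and case.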

Architecture

  • Tool Discovery: Tools are automatically discovered from configured servers.
  • System Prompt: Tools are dynamically included in the system prompt, allowing the LLM to understand available capabilities.
  • Server Integration: Supports any MCP-compatible server, tested with various server implementations including Uvicorn and Node.js.
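The system-prompt mechanism above can be sketched in a few lines: the descriptions of all discovered tools are joined into one block and embedded in the prompt. The prompt wording here is an assumption; the shipped prompt differs:

```python
def build_system_prompt(tool_descriptions: list[str]) -> str:
    """Embed the discovered tools' descriptions in the system prompt."""
    tools_block = "\n".join(tool_descriptions)
    return (
        "You are a helpful assistant with access to these tools:\n\n"
        f"{tools_block}\n\n"
        "Choose the appropriate tool based on the user's question."
    )
```

Because the prompt is rebuilt from whatever the servers report, adding a server to `servers_config.json` is enough to expose its tools to the LLM; no client code changes are needed.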

Class Structure

  • Configuration: Manages environment variables and server configurations
  • Server: Handles MCP server initialization, tool discovery, and execution
  • Tool: Represents individual tools with their properties and formatting
  • LLMClient: Manages communication with the LLM provider
  • ChatSession: Orchestrates the interaction between user, LLM, and tools
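Two of these responsibilities can be pictured as a skeleton like the following; this is a simplified sketch (the real classes carry more state and async plumbing), with `format_for_llm` as an illustrative name:

```python
from dataclasses import dataclass, field


@dataclass
class Tool:
    """One tool exposed by an MCP server."""
    name: str
    description: str
    input_schema: dict = field(default_factory=dict)

    def format_for_llm(self) -> str:
        """Render the tool as text the system prompt can include."""
        return f"Tool: {self.name}\nDescription: {self.description}"


@dataclass
class Configuration:
    """Environment variables plus the parsed server configuration."""
    api_key: str
    servers: dict = field(default_factory=dict)
```

Keeping formatting on `Tool` itself means the chat session only concatenates `format_for_llm()` outputs, without knowing each tool's schema details.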

Logic Flow

  1. Tool Integration:

    • Tools are dynamically discovered from MCP servers
    • Tool descriptions are automatically included in system prompt
    • Tool execution is handled through standardized MCP protocol
  2. Runtime Flow:

    • User input is received
    • Input is sent to LLM with context of available tools
    • LLM response is parsed:
      • If it’s a tool call → execute tool and return result
      • If it’s a direct response → return to user
    • Tool results are sent back to LLM for interpretation
    • Final response is presented to user
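The parsing step in the runtime flow can be sketched as follows, assuming (as many MCP chatbot examples do) that a tool call arrives as a JSON object with `"tool"` and `"arguments"` keys; the exact format `main.py` expects may differ:

```python
import json


def parse_llm_response(text: str) -> tuple:
    """Classify an LLM reply: ("tool", name, args) or ("direct", text)."""
    try:
        payload = json.loads(text)
    except json.JSONDecodeError:
        # Not JSON at all: treat it as a plain-text answer for the user.
        return ("direct", text)
    if isinstance(payload, dict) and "tool" in payload:
        return ("tool", payload["tool"], payload.get("arguments", {}))
    return ("direct", text)
```

A `("tool", ...)` result triggers execution via the matching MCP server, and the tool's output is fed back to the LLM; a `("direct", ...)` result is shown to the user as-is.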
