MCP Explorer

Agentic AI MCP

@Rommagcom · MIT License · Free / Community · AI Systems

Agentic AI + MCP + Ollama

Overview

What is agentic_ai_mcp?

agentic_ai_mcp is a chat assistant application that integrates MCP servers with a Large Language Model (LLM) and external tools, enabling users to interact with the LLM for direct answers or tool execution.

Use cases

Use cases include providing automated responses to user queries, executing specific tasks via external tools, and retrieving real-time information such as weather updates.

How to use

To use agentic_ai_mcp, clone the repository, install the required packages, and run the MCP server client. Users can then interact with the assistant by sending messages, which can trigger tool executions or fetch information from the LLM.

Key features

Key features include LLM integration using the pydantic-ai library, support for executing external tools based on user input, structured responses from the LLM, and management of multiple MCP servers for tool execution.

Where to use

agentic_ai_mcp can be used in various fields such as customer support, information retrieval, and any application requiring interactive chat capabilities with LLMs and external tools.

Content

LLM Chat Assistant

This project is a chat assistant application that integrates MCP client (MCP host) with an LLM (Large Language Model) and external tools (MCP Servers). It allows users to interact with the LLM, which can either provide direct answers or call external tools (MCP servers) to process user requests.

Supported modes

  • stdio Mode
  • http Mode

Features

  • LLM Integration: Communicates with an LLM using the pydantic-ai library.
  • Tool Execution: Supports external tools - MCP servers that can be executed based on user input.
  • Structured Responses: Handles structured responses from the LLM, including tool calls and direct answers.
  • Server Management: Manages multiple MCP servers for tool execution.
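The "structured responses" feature can be sketched as a small parser that distinguishes a tool call from a direct answer. The class and field names below are illustrative assumptions, not the project's actual API (which is built on pydantic-ai):

```python
import json
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LLMResponse:
    """Hypothetical structured reply: either a direct answer or a tool call."""
    answer: Optional[str] = None
    tool_name: Optional[str] = None
    tool_args: dict = field(default_factory=dict)

    @property
    def is_tool_call(self) -> bool:
        return self.tool_name is not None

def parse_response(raw: str) -> LLMResponse:
    """Parse a JSON reply from the LLM into a structured response."""
    data = json.loads(raw)
    if "tool" in data:
        # The LLM asked the host to execute a tool on an MCP server.
        return LLMResponse(tool_name=data["tool"], tool_args=data.get("args", {}))
    # Otherwise the LLM answered the user directly.
    return LLMResponse(answer=data.get("answer"))

call = parse_response('{"tool": "echo", "args": {"text": "hi"}}')
direct = parse_response('{"answer": "Hello!"}')
```

A host loop would route `is_tool_call` responses to the matching MCP server and print direct answers as-is.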

Requirements

  • Python 3.13 or higher
  • MCP server(s) configured for tool execution
  • A locally installed Ollama instance with the qwen3:0.6b model as the LLM; you can swap in another model if needed (see https://ollama.com for installation instructions)
  • A .env file containing LLM_API_KEY=your-api-key-here if an external LLM is used
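A minimal sketch of reading the API key from a `.env` file, assuming a plain `KEY=value` format (real projects typically use a library such as python-dotenv instead):

```python
import os

def load_env(path: str = ".env") -> None:
    """Load KEY=value pairs from a .env file into os.environ (sketch)."""
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    # setdefault: real environment variables win over .env
                    os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # no .env file: fall back to the local Ollama model

# load_env()
# api_key = os.environ.get("LLM_API_KEY")  # only needed for an external LLM
```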

Installation

The demo MCP servers in the mcp-servers folder are built with FastMCP; see https://github.com/modelcontextprotocol/python-sdk?tab=readme-ov-file#adding-mcp-to-your-python-project for how to add MCP to your Python project.

  1. Clone the repository and install the required packages:

    git clone https://github.com/Rommagcom/agentic_ai_mcp.git
    cd agentic_ai_mcp
    pip install -r requirements.txt

  2. Run the MCP host client:

    python mcp_host_client.py

  3. To test http Mode, start the demo HTTP server from the mcp-servers folder first:

    python mcp-servers/test_http_server.py

Interaction Example

  • You: echo test (after this request, the LLM determines which tool to call on the MCP server)

  • Assistant: The result of the echo test is a text containing the message "This is echo test test". (direct answer from the MCP server)

  • You: How is the weather in Phuket?

  • Assistant: The weather in Phuket is currently being retrieved via the API, with the mock response indicating it's a simulated result.

Add your own MCP server

1. Add a new .py file to the mcp-servers folder, following the examples echo.py, weather_server.py, or test_http_server.py.
2. Add a section to servers_config.json, where the key (e.g. 'echo') matches the tool name declared in the .py file via @mcp.tool(description="A simple echo tool", name="echo"), and "args" holds the server file path, e.g. "args": ["mcp-servers/echo.py"].

Example:

```json
{
	"mcpServers": {
		"echo": {
			"command": "python",
			"args": ["mcp-servers/echo.py"]
		},
		"weather_server": {
			"command": "python",
			"args": ["mcp-servers/weather_server.py"]
		}
	}
}
```

For the http use case, add the MCP server's http(s) URL to the config as "url": "http://127.0.0.1:9000/mcp", under a server name such as "test_http":

Example:

```json
{
	"mcpServers": {
		"test_http": {
			"url": "http://127.0.0.1:9000/mcp"
		}
	}
}
```
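Putting both variants together, a host could load servers_config.json and dispatch on whether an entry has a "command" (stdio Mode) or a "url" (http Mode). This loader is a sketch of that idea, not the project's actual code:

```python
import json

def classify_servers(config_text: str) -> dict:
    """Map each MCP server name in the config to its transport kind."""
    config = json.loads(config_text)
    transports = {}
    for name, entry in config["mcpServers"].items():
        if "command" in entry:
            transports[name] = "stdio"  # spawn entry["command"] with entry["args"]
        elif "url" in entry:
            transports[name] = "http"   # connect to entry["url"]
        else:
            raise ValueError(f"Server {name!r} needs a 'command' or a 'url'")
    return transports

sample = """
{
  "mcpServers": {
    "echo": {"command": "python", "args": ["mcp-servers/echo.py"]},
    "test_http": {"url": "http://127.0.0.1:9000/mcp"}
  }
}
"""
```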
