MCP Explorer

Think MCP Host

@thinkthinking · 10 months ago
2 · MIT
Free · Community
AI Systems
#mcp #mcp-client #mcp-host #mcp-server
Think MCP Host (AI·Zen·Love) - An intelligent agent application based on the Model Context Protocol (MCP) that enables seamless integration of MCP resources, tools, and prompts into conversations, powered by various LLMs, VLMs, and reasoning models.

Overview

What is Think MCP Host

Think MCP Host (AI·Zen·Love) is an intelligent agent application based on the Model Context Protocol (MCP) that facilitates seamless integration of various MCP resources, tools, and prompts into conversations, powered by multiple large language models (LLMs), vision language models (VLMs), and reasoning models.

Use cases

Use cases include automated customer service chatbots, content generation tools for writers, educational assistants for students, programming help for developers, and visual content analysis applications.

How to use

To use Think MCP Host, users can interact with the application through its rich terminal interface or chat interface. Users can input commands, utilize prompts, and integrate resources dynamically during conversations. The application supports various model types for different tasks, including text generation, image analysis, and logical reasoning.

Key features

Key features include complete MCP implementation, extensive model support (LLMs, VLMs, reasoning models), advanced conversation management (automatic history saving, export options), and a rich terminal interface with markdown rendering, syntax highlighting, and interactive command suggestions.

Where to use

Think MCP Host can be utilized in various fields such as customer support, content creation, education, software development, and any domain requiring intelligent conversational agents and resource integration.

Content

Think MCP Host (AI·Zen·Love)


Think MCP Host (AI·Zen·Love) is a Model Context Protocol (MCP) based intelligent agent application that supports various types of large language models, including standard conversational models (LLM), vision language models (VLM), and reasoning models.

Terminal Interface

Chat Interface

Features

  • Complete MCP (Model Context Protocol) Implementation

    • Full MCP architecture support (Host/Client/Server)
    • Comprehensive MCP resource types support
      • Resources: Dynamic integration of external content
      • Prompts: Template-based system prompts
      • Tools: AI-powered function calls
    • Dynamic MCP command insertion anywhere in conversations
      • Seamless integration of resources into context
      • On-demand prompt template usage
      • Direct tool execution within chat
    • Standalone MCP tool execution support
  • Extensive Model Support

    • LLM (Language Models)
      • Text conversations and content generation
      • Programming and code assistance
      • Document writing and analysis
    • VLM (Vision Language Models)
      • Image understanding and analysis
      • Visual content processing
    • Reasoning Models
      • Complex logical analysis
      • Professional domain reasoning
    • Multiple provider support (DeepSeek, OpenAI, OpenRouter, etc.)
  • Advanced Conversation Management

    • Automatic conversation history saving
    • Manual save options with countdown timer
    • Historical conversation loading
    • Multiple export format support
  • System Features

    • Rich Terminal Interface
      • Beautiful markdown rendering in terminal
      • Syntax highlighting for code blocks
      • Unicode and emoji support
      • Interactive command suggestions
    • Cross-Platform Support
      • Full functionality on Windows, macOS, and Ubuntu
      • Native installation support for each platform
      • Consistent user experience across systems
    • Command-line interface
    • Debug mode support
    • Flexible exit options with save/discard choices

Usage Guide

Running Mode Selection

The program supports two main running modes:

  1. Chat Mode (Default)

    • Used for natural language dialogue
    • Supports multiple LLM models
    • Can use MCP enhancement features
  2. Tool Mode

    • Used for using specific AI tools
    • Directly calls functions provided by MCP server

Detailed Usage Process

  1. Select Running Mode

    • After the program starts, you will be prompted to select a running mode
    • Enter 1 to select Chat mode
    • Enter 2 to select Tool mode
  2. Chat Mode Setup Process

    1. Select LLM Model

      • The system displays the list of available models
      • Enter the corresponding number to select a model
      • Supported models include those from DeepSeek, Silicon Flow, Volcano Engine, etc.
    2. Choose Start Method

      • Option 1: Set a system prompt, then start a new conversation
      • Option 2: Start a new conversation directly (default)
      • Option 3: Load a historical conversation
    3. System Prompt Setting (if Option 1 was selected)

      • Enter a custom system prompt
      • Supports the ->mcp command for inserting MCP resources
    4. Load Historical Conversation (if Option 3 was selected)

      • The system displays the list of saved conversations
      • Select a conversation record to load
  3. Tool Mode Setup Process

    1. Select MCP Client

      • The system displays the list of available MCP clients
      • Select the client to use
    2. Select Tool

      • The system displays the tools provided by the selected client
      • Select the specific tool to use
    3. Execute Tool

      • Provide the parameters the tool requires
      • View the tool's execution results
    4. Continue or Exit

      • Choose whether to continue with other tools
      • You can switch back to Chat mode at any time

Basic Chat Mode

  1. Start Conversation
    • Type text directly to converse
    • Press Ctrl+C to exit the program

MCP Enhanced Mode

During a conversation, you can use the ->mcp command to access MCP's enhancement features. The steps are as follows:

  1. Activate MCP Command

    • Enter ->mcp on its own line and press Enter during the conversation
    • The system will guide you through the subsequent selections
  2. Select MCP Client

    • The system displays the list of available MCP clients
    • Select the client to use
  3. Select MCP Feature Type

    The system prompts you to choose one of three types:

    1. Resources

      • Enter 1 to select
      • Used for selecting and referencing external resources (such as images or documents)
      • Returned format: ->mcp_resources[client_name]:resourceURI
    2. Prompts

      • Enter 2 to select
      • Used for selecting predefined prompt templates
      • Returned format: ->mcp_prompts[client_name]:prompt_name{parameters}
    3. Tools

      • Enter 3 to select
      • Used for selecting and invoking specific AI tools
      • Returned format: ->mcp_tools[client_name]:tool_name{parameters}
  4. Complete Selection

    • Once the selection is complete, the system inserts the corresponding MCP command into the conversation
    • You can continue editing the message, or send it directly
  5. Select Running Mode

alt text

  • After program startup, you will be prompted to select running mode
  • Enter 1 to select Chat mode
  • Enter 2 to select Tool mode
  1. Chat Mode Setup Process

    1. Select LLM Model
      alt text

      • System will display available model list
      • Enter corresponding number to select model
      • Supported models include DeepSeek, Silicon Flow, Volcano Engine, etc.
    2. Choose Start Method
      alt text

      • Option 1: Set system prompt, then start new conversation
      • Option 2: Directly start new conversation (default)
      • Option 3: Load historical conversation
    3. System Prompt Setting (if Option 1 was selected)
      alt text

      • Can input custom system prompt
      • Supports using ->mcp command to insert MCP resources
    4. Load Historical Conversation (if Option 3 was selected)
      alt text

      • System will display available historical conversation list
      • Select conversation record to load
  2. Tool Mode Setup Process

    1. Select MCP Client
      alt text

      • System will display available MCP client list
      • Select client to use
    2. Select Tool
      alt text

      • Displays tool list provided by selected client
      • Select specific tool to use
    3. Execute Tool
      alt text
      alt text

      • Provide necessary parameters according to tool requirements
      • View tool execution results
    4. Continue or Exit

      • Choose whether to continue using other tools
      • Can switch back to Chat mode at any time

Basic Chat Mode

  1. Start Conversation
    • Directly input text to converse
    • Use Ctrl+C to exit program

MCP Enhanced Mode

During conversation, you can use the ->mcp command to use MCP’s enhancement features. Steps are as follows:

  1. Activate MCP Command

    • Input ->mcp alone and press Enter in conversation
    • System will guide you through subsequent selections
  2. Select MCP Client
    alt text

    • System will display available MCP client list
    • Select client to use
  3. Select MCP Feature Type
    alt text
    System will prompt you to select one of three types:

    1. Resources
      alt text

      • Input 1 to select
      • Used for selecting and referencing external resources (like images, documents, etc.)
      • Returns format: ->mcp_resources[client_name]:resourceURI
    2. Prompts
      alt text

      • Input 2 to select
      • Used for selecting predefined prompt templates
      • Returns format: ->mcp_prompts[client_name]:prompt_name{parameters}
    3. Tools
      alt text

      • Input 3 to select
      • Used for selecting and using specific AI tools
      • Returns format: ->mcp_tools[client_name]:tool_name{parameters}
  4. Complete Selection
    alt text

    • After selection is complete, system will insert corresponding MCP command in conversation
    • You can continue editing message, or send directly

Installation and Running

Development Installation

For development, you can install the project directly from source:

# Clone the repository
git clone https://github.com/thinkthinking/think-mcp-host.git
cd think-mcp-host

# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate  # On Linux/macOS
# or
.venv\Scripts\Activate.ps1  # On Windows with PowerShell

# Install in development mode with pip
pip install -e .
# or with uv (recommended)
uv pip install -e .

Windows

  1. Installation methods
    • Download and double-click AI-Zen-Love.exe
    • Or install and run via command line:
# Install uv using pip
python -m pip install uv

# Clone the project and install
git clone https://github.com/thinkthinking/think-mcp-host.git
cd think-mcp-host
python -m venv .venv
.venv\Scripts\Activate.ps1
uv pip install -e .
  2. Configuration file locations
    • LLM configuration: C:\Users\your-username\.think-llm-client\config\servers_config.json
    • MCP configuration: C:\Users\your-username\.think-mcp-client\config\mcp_config.json
    • History records: C:\Users\your-username\.think-mcp-host\command_history\

macOS

  1. Installation methods
    • Download and double-click AI-Zen-Love.app
    • Or install and run via terminal:
# Install uv
python3 -m pip install uv

# Clone the project and install
git clone https://github.com/thinkthinking/think-mcp-host.git
cd think-mcp-host
python3 -m venv .venv
source .venv/bin/activate
uv pip install -e .
  2. Configuration file locations
    • LLM configuration: /Users/your-username/.think-llm-client/config/servers_config.json
    • MCP configuration: /Users/your-username/.think-mcp-client/config/mcp_config.json
    • History records: /Users/your-username/.think-mcp-host/command_history/
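The Windows and macOS locations above share the same layout under the user's home directory. As a hedged illustration (not part of the application's code), the same paths can be resolved cross-platform like this:

```python
from pathlib import Path

# Per-user configuration locations from the tables above. Path.home()
# resolves to C:\Users\<name> on Windows and /Users/<name> on macOS,
# so the same three expressions cover both platforms.
LLM_CONFIG = Path.home() / ".think-llm-client" / "config" / "servers_config.json"
MCP_CONFIG = Path.home() / ".think-mcp-client" / "config" / "mcp_config.json"
HISTORY_DIR = Path.home() / ".think-mcp-host" / "command_history"

for path in (LLM_CONFIG, MCP_CONFIG, HISTORY_DIR):
    print(path)
```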

Configuration Details

Model Configuration

The project supports three types of models:

  1. LLM (Language Models)

    • Used for: Text conversations, code writing, document generation
    • Examples: DeepSeek Chat, GPT-4
  2. VLM (Vision Language Models)

    • Used for: Image understanding and analysis
    • Examples: GPT-4-Vision, Qwen-VL-Plus
  3. Reasoning Models

    • Used for: Complex reasoning and professional analysis
    • Examples: DeepSeek Reasoner, DeepSeek-R1

LLM Configuration

The configuration file uses JSON format and needs to be configured according to different model types:

{
  "llm": {
    "providers": {
      "deepseek": {
        "api_key": "<DEEPSEEK_API_KEY>",
        "api_url": "https://api.deepseek.com",
        "model": {
          "deepseek-chat": {
            "max_completion_tokens": 8192
          }
        }
      }
    }
  },
  "vlm": {
    "providers": {
      "openai": {
        "api_key": "<OPENAI_API_KEY>",
        "api_url": "https://api.openai.com/v1",
        "model": {
          "gpt-4-vision": {
            "max_completion_tokens": 4096
          }
        }
      }
    }
  },
  "reasoning": {
    "providers": {
      "deepseek": {
        "api_key": "<DEEPSEEK_API_KEY>",
        "api_url": "https://api.deepseek.com",
        "model": {
          "deepseek-reasoner": {
            "max_completion_tokens": 8192,
            "temperature": 0.6
          }
        }
      }
    }
  }
}

Configuration explanation:

  1. Choose the configuration section according to the model type (llm/vlm/reasoning)
  2. Multiple providers can be configured under each type
  3. Each provider needs the following settings:
    • api_key: API key
    • api_url: API server address
    • model: Per-model settings
      • max_completion_tokens: Maximum output length
      • temperature: Sampling temperature (optional)
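To make the lookup order concrete, here is a small illustrative sketch (not the application's actual code) that resolves one model entry from a config shaped like the JSON above, merging the provider-level credentials with the per-model settings:

```python
import json

# A config fragment shaped like the servers_config.json example above.
config = json.loads("""
{
  "llm": {
    "providers": {
      "deepseek": {
        "api_key": "<DEEPSEEK_API_KEY>",
        "api_url": "https://api.deepseek.com",
        "model": {"deepseek-chat": {"max_completion_tokens": 8192}}
      }
    }
  }
}
""")

def resolve_model(config: dict, model_type: str, provider: str, name: str) -> dict:
    """Merge provider-level credentials with per-model settings."""
    entry = config[model_type]["providers"][provider]
    return {"api_key": entry["api_key"],
            "api_url": entry["api_url"],
            **entry["model"][name]}

settings = resolve_model(config, "llm", "deepseek", "deepseek-chat")
print(settings["api_url"], settings["max_completion_tokens"])
```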

MCP Server Configuration

MCP (Model Context Protocol) server configuration example:

{
  "mcpServers": {
    "think-mcp": {
      "command": "/opt/homebrew/bin/uv",
      "args": [
        "--directory",
        "/Users/thinkthinking/src_code/nas/think-mcp",
        "run",
        "think-mcp"
      ]
    }
  }
}
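The command and args fields together form the argv used to launch the server as a stdio subprocess. A hedged sketch of that assembly (the host's real startup logic, which also layers an MCP session over the process's pipes, may differ):

```python
import shlex

# Config fragment shaped like the mcp_config.json example above.
mcp_config = {
    "mcpServers": {
        "think-mcp": {
            "command": "/opt/homebrew/bin/uv",
            "args": ["--directory", "/Users/thinkthinking/src_code/nas/think-mcp",
                     "run", "think-mcp"],
        }
    }
}

def server_argv(config: dict, name: str) -> list[str]:
    """Build the argv list for one configured MCP server."""
    entry = config["mcpServers"][name]
    return [entry["command"], *entry["args"]]

argv = server_argv(mcp_config, "think-mcp")
print(shlex.join(argv))
# An MCP host would then start the server with something like:
# subprocess.Popen(argv, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
```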

MCP Commands

The following MCP command formats can be used in conversations:

  1. Interactive Selection
 ->mcp 

This will start an interactive selection interface, guiding you to choose:

  • MCP client
  • Operation type (Resources/Prompts/Tools)
  • Specific resource/prompt/tool
  • Related parameters (if needed)
  2. Direct Usage
# Use resources
->mcp_resources[client_name]:resource_uri

# Use prompts
->mcp_prompts[client_name]:prompt_name{param1:value1,param2:value2}

# Use tools
->mcp_tools[client_name]:tool_name{param1:value1,param2:value2}

Examples:

# Use prompts
->mcp_prompts[think-mcp]:agent-introduction{agent_name:AI Assistant,agent_description:A friendly AI assistant}

# Use tools
->mcp_tools[think-mcp]:analyze_content{text:This is a test text}
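The command syntax above is regular enough to parse mechanically. The following is an illustrative parser for the three formats — a sketch for understanding the syntax, not the application's internal implementation:

```python
import re

# One pattern covers all three command kinds shown above.
MCP_COMMAND = re.compile(
    r"->mcp_(?P<kind>resources|prompts|tools)"   # resources / prompts / tools
    r"\[(?P<client>[^\]]+)\]:"                   # [client_name]
    r"(?P<name>[^{\s]+)"                         # resource URI, prompt or tool name
    r"(?:\{(?P<params>[^}]*)\})?"                # optional {key:value,...}
)

def parse_mcp_commands(text: str) -> list[dict]:
    """Extract every MCP command embedded in a message."""
    commands = []
    for match in MCP_COMMAND.finditer(text):
        params = {}
        for pair in (match.group("params") or "").split(","):
            if pair:
                key, _, value = pair.partition(":")
                params[key.strip()] = value.strip()
        commands.append({"kind": match.group("kind"),
                         "client": match.group("client"),
                         "name": match.group("name"),
                         "params": params})
    return commands

cmd = parse_mcp_commands(
    "->mcp_tools[think-mcp]:analyze_content{text:This is a test text}"
)[0]
print(cmd["kind"], cmd["client"], cmd["name"], cmd["params"])
# → tools think-mcp analyze_content {'text': 'This is a test text'}
```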

Features

  • Support for multiple MCP commands in the same input
  • Commands can be edited and modified at any time
  • Parameters support flexible key-value pair format
  • Friendly error prompts

Releasing New Versions

To release a new version, follow these steps:

  1. Update the version number:

    • Update the version field in pyproject.toml
    • Follow Semantic Versioning
  2. Commit changes:

git add pyproject.toml
git commit -m "chore: bump version to x.x.x"
  3. Create version tag:
git tag vx.x.x
git push origin vx.x.x
