
Mcpui

@OmChillure · a year ago
MIT · Free · Community · AI Systems
MCP Web UI is a user-friendly interface for interacting with LLMs, enabling real-time chat and context management.

Overview

What is Mcpui

MCP Web UI is a web-based user interface that acts as a host within the Model Context Protocol (MCP) architecture, facilitating interactions with Large Language Models (LLMs) while managing context aggregation and coordination between clients and servers.

Use cases

Use cases include building chatbots, enhancing user interaction with AI, conducting research with multiple LLMs, and developing applications that require real-time language processing.

How to use

To use MCP Web UI, clone the repository, configure your environment by setting up the necessary API keys, and run the application either locally or via Docker. Detailed installation steps are provided in the README.

Key features

Key features include multi-provider LLM integration (Anthropic, OpenAI, Ollama, OpenRouter), an intuitive chat interface, real-time response streaming, dynamic configuration management, advanced context aggregation, persistent chat history, and flexible model selection.

Where to use

MCP Web UI can be used in various fields such as AI research, customer support, content generation, and any application requiring interaction with language models.

Content

MCP Web UI

MCP Web UI is a web-based user interface that serves as a Host within the Model Context Protocol (MCP) architecture. It provides a powerful and user-friendly interface for interacting with Large Language Models (LLMs) while managing context aggregation and coordination between clients and servers.

🌟 Overview

MCP Web UI is designed to simplify and enhance interactions with AI language models by providing:

  • A unified interface for multiple LLM providers
  • Real-time, streaming chat experiences
  • Flexible configuration and model management
  • Robust context handling using the MCP protocol

Demo Video

YouTube

🚀 Features

  • 🤖 Multi-Provider LLM Integration:
    • Anthropic (Claude models)
    • OpenAI (GPT models)
    • Ollama (local models)
    • OpenRouter (multiple providers)
  • 💬 Intuitive Chat Interface
  • 🔄 Real-time Response Streaming via Server-Sent Events (SSE)
  • 🔧 Dynamic Configuration Management
  • 📊 Advanced Context Aggregation
  • 💾 Persistent Chat History using BoltDB
  • 🎯 Flexible Model Selection

📋 Prerequisites

  • Go 1.23+
  • Docker (optional)
  • API keys for desired LLM providers

🛠 Installation

Quick Start

  1. Clone the repository:

    git clone https://github.com/MegaGrindStone/mcp-web-ui.git
    cd mcp-web-ui
    
  2. Configure your environment:

    mkdir -p $HOME/.config/mcpwebui
    cp config.example.yaml $HOME/.config/mcpwebui/config.yaml
    
  3. Set up API keys:

    export ANTHROPIC_API_KEY=your_anthropic_key
    export OPENAI_API_KEY=your_openai_key
    export OPENROUTER_API_KEY=your_openrouter_key
    

Running the Application

Local Development

go mod download
go run ./cmd/server/main.go

Docker Deployment

docker build -t mcp-web-ui .
docker run -p 8080:8080 \
  -v $HOME/.config/mcpwebui/config.yaml:/app/config.yaml \
  -e ANTHROPIC_API_KEY \
  -e OPENAI_API_KEY \
  -e OPENROUTER_API_KEY \
  mcp-web-ui

🔧 Configuration

The configuration file (config.yaml) provides comprehensive settings for customizing the MCP Web UI. Here’s a detailed breakdown:

Server Configuration

  • port: The port on which the server will run (default: 8080)
  • logLevel: Logging verbosity (options: debug, info, warn, error; default: info)
  • logMode: Log output format (options: json, text; default: text)
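For instance, the server section of config.yaml might look like this (key names are taken from the list above; the values are illustrative):

```yaml
port: 8080
logLevel: debug   # debug, info, warn, error
logMode: json     # json or text
```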

Prompt Configuration

  • systemPrompt: Default system prompt for the AI assistant
  • titleGeneratorPrompt: Prompt used to generate chat titles

LLM (Language Model) Configuration

The llm section supports multiple providers with provider-specific configurations:

Common LLM Parameters

  • provider: Choose from: ollama, anthropic, openai, openrouter
  • model: Specific model name (e.g., claude-3-5-sonnet-20241022)
  • parameters: Fine-tune model behavior:
    • temperature: Randomness of responses (0.0-1.0)
    • topP: Nucleus sampling threshold
    • topK: Number of highest probability tokens to keep
    • frequencyPenalty: Reduce repetition of token sequences
    • presencePenalty: Encourage discussing new topics
    • maxTokens: Maximum response length
    • stop: Sequences to stop generation
    • And more provider-specific parameters
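Putting the common parameters together, an llm block might look like the following (all values are illustrative, and not every provider honors every parameter):

```yaml
llm:
  provider: openai
  model: gpt-4o
  parameters:
    temperature: 0.7
    topP: 0.9
    topK: 40
    frequencyPenalty: 0.5
    presencePenalty: 0.5
    maxTokens: 1000
    stop:
      - "\n\nHuman:"
```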

Provider-Specific Configurations

  • Ollama:

    • No API key required; connects to a locally running Ollama instance

  • Anthropic:

    • apiKey: Anthropic API key (can use ANTHROPIC_API_KEY env variable)
    • maxTokens: Maximum token limit
  • OpenAI:

    • apiKey: OpenAI API key (can use OPENAI_API_KEY env variable)
  • OpenRouter:

    • apiKey: OpenRouter API key (can use OPENROUTER_API_KEY env variable)

Title Generator Configuration

The genTitleLLM section allows separate configuration for title generation, defaulting to the main LLM if not specified.

MCP Server Configurations

  • mcpSSEServers: Configure Server-Sent Events (SSE) servers

    • url: SSE server URL
    • maxPayloadSize: Maximum payload size
  • mcpStdIOServers: Configure Standard Input/Output servers

    • command: Command to run server
    • args: Arguments for the server command
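Based on the keys above, the MCP server sections might be sketched as follows (the server names, URL, command, and exact nesting are placeholders and may differ from the project's actual schema):

```yaml
mcpSSEServers:
  mySSEServer:
    url: https://example.com/sse
    maxPayloadSize: 1048576  # bytes
mcpStdIOServers:
  myStdIOServer:
    command: npx
    args:
      - -y
      - "@modelcontextprotocol/server-everything"
```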

Example Configuration Snippet

port: 8080
logLevel: info
systemPrompt: You are a helpful assistant.

llm:
  provider: anthropic
  model: claude-3-5-sonnet-20241022
  parameters:
    temperature: 0.7
    maxTokens: 1000

genTitleLLM:
  provider: openai
  model: gpt-3.5-turbo

🏗 Project Structure

  • cmd/: Application entry point
  • internal/handlers/: Web request handlers
  • internal/models/: Data models
  • internal/services/: LLM provider integrations
  • static/: Static assets (CSS)
  • templates/: HTML templates

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Commit your changes
  4. Push and create a Pull Request

📄 License

MIT License

Tools

No tools
