MCP Client Any LLM
What is MCP Client Any LLM?
mcp-client-any-llm is a modern web client built with Next.js that lets users interact with a variety of LLMs through the Model Context Protocol (MCP). It provides a user-friendly interface for chatting with different AI models while preserving conversation context.
Use cases
Use cases for mcp-client-any-llm include creating chatbots for customer service, developing interactive educational platforms, generating content for blogs or social media, and facilitating AI-driven brainstorming sessions.
How to use
To use mcp-client-any-llm, clone the repository, install the necessary dependencies, configure the environment variables for your preferred LLM provider, and start the development server. Finally, access the client through your web browser.
Key features
Key features include support for multiple LLM providers (like OpenAI and Google), a clean chat interface with markdown support, dark/light mode, markdown rendering with syntax highlighting, local conversation history, and real-time streaming responses.
Where to use
mcp-client-any-llm can be used in various fields such as customer support, content creation, educational tools, and any application that requires conversational AI interactions.
MCP Client for Any LLM
A modern web client built with Next.js that allows you to interact with a variety of LLMs using the Model Context Protocol (MCP). This client provides a clean and intuitive interface for chatting with different AI models while maintaining conversation context.
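To illustrate what "maintaining conversation context" involves, here is a minimal sketch of a local history store that accumulates turns and replays them as context with each request. The `ChatHistory` class and `Message` shape are illustrative assumptions, not the client's actual implementation.

```typescript
// Hypothetical shape for a locally stored conversation turn.
interface Message {
  role: "user" | "assistant";
  content: string;
}

// Illustrative in-memory history store; a real client might persist
// the serialized form (e.g. to localStorage in the browser).
class ChatHistory {
  private messages: Message[] = [];

  add(role: Message["role"], content: string): void {
    this.messages.push({ role, content });
  }

  // Prior turns to send along with the next request, so the model
  // sees the full conversation so far.
  toContext(): Message[] {
    return [...this.messages];
  }

  serialize(): string {
    return JSON.stringify(this.messages);
  }
}
```

Keeping history on the client rather than the server is what makes the "local conversation history" feature below possible without any account or database.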
Features
- 🤖 Support for multiple LLM providers (OpenAI, Google, etc.)
- 💬 Clean chat interface with markdown support
- 🌙 Dark/Light mode support
- 📝 Markdown rendering with syntax highlighting
- 💾 Local conversation history
- 🔄 Real-time streaming responses
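The real-time streaming feature works by reading the response body incrementally rather than waiting for the full completion. The following is a minimal sketch of how a streamed chat response can be consumed on the client; the helper name `readStream` is an assumption for illustration, not code from this repository.

```typescript
// Sketch: read a streamed response body chunk by chunk, decoding
// bytes to text as they arrive (Node 18+ / modern browsers).
async function readStream(stream: ReadableStream<Uint8Array>): Promise<string> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact
    if (value) text += decoder.decode(value, { stream: true });
  }
  return text + decoder.decode(); // flush any buffered bytes
}
```

In a real UI each decoded chunk would be appended to the rendered message as it arrives (e.g. `const res = await fetch("/api/chat", ...)` followed by reading `res.body`), rather than concatenated into one string.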
Prerequisites
Before you begin, ensure you have the following installed:
- Node.js (v18 or higher)
- pnpm (recommended) or npm
Quick Start
- Clone the repository:

```shell
git clone <your-repo-url>
cd mcp-client-any-llm
```

- Install dependencies:

```shell
pnpm install
```

- Configure environment variables:

Create a `.env.local` file in the root directory with the following variables:

```shell
# Required: OpenAI API Configuration
OPENAI_API_KEY=your_openai_api_key
OPENAI_API_BASE_URL=https://api.openai.com/v1 # Optional: Custom base URL if using a proxy
OPENAI_API_MODEL=gpt-3.5-turbo # Optional: Default model to use

# Optional: Google AI Configuration
GOOGLE_API_KEY=your_google_api_key
GOOGLE_API_MODEL=gemini-pro # Default Google AI model

# Optional: Azure OpenAI Configuration
AZURE_OPENAI_API_KEY=your_azure_openai_key
AZURE_OPENAI_ENDPOINT=your_azure_endpoint
AZURE_OPENAI_MODEL=your_azure_model_deployment_name

# Optional: Anthropic Configuration
ANTHROPIC_API_KEY=your_anthropic_key
ANTHROPIC_API_MODEL=claude-2 # Default Anthropic model
```

Note: Only the OpenAI configuration is required by default. Other providers are optional.

- Start the development server:

```shell
pnpm dev
```

- Open http://localhost:3000 in your browser to start chatting!
Environment Variables
Required Variables
- `OPENAI_API_KEY`: Your OpenAI API key

Optional Variables
- `OPENAI_API_BASE_URL`: Custom base URL for the OpenAI API (useful for proxies)
- `OPENAI_API_MODEL`: Default OpenAI model to use
- `GOOGLE_API_KEY`: Google AI API key
- `GOOGLE_API_MODEL`: Default Google AI model
- `AZURE_OPENAI_API_KEY`: Azure OpenAI API key
- `AZURE_OPENAI_ENDPOINT`: Azure OpenAI endpoint URL
- `AZURE_OPENAI_MODEL`: Azure OpenAI model deployment name
- `ANTHROPIC_API_KEY`: Anthropic API key
- `ANTHROPIC_API_MODEL`: Default Anthropic model
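Since only the OpenAI variables are required, a client like this typically decides at startup which providers are usable based on which keys are present. The sketch below shows one way that check could look; the `resolveProviders` helper is hypothetical and not part of this repository, though the variable names match the list above.

```typescript
type Provider = "openai" | "google" | "azure" | "anthropic";

// Hypothetical startup check: enable each provider only when its
// required environment variables are set.
function resolveProviders(env: Record<string, string | undefined>): Provider[] {
  const providers: Provider[] = [];
  if (env.OPENAI_API_KEY) providers.push("openai"); // required by default
  if (env.GOOGLE_API_KEY) providers.push("google");
  if (env.AZURE_OPENAI_API_KEY && env.AZURE_OPENAI_ENDPOINT) providers.push("azure");
  if (env.ANTHROPIC_API_KEY) providers.push("anthropic");
  if (providers.length === 0) {
    throw new Error("No provider configured: set at least OPENAI_API_KEY");
  }
  return providers;
}
```

In a Next.js app this would run against `process.env` on the server side, so API keys never reach the browser.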
Technology Stack
- Next.js 15 - React Framework
- Tailwind CSS - Styling
- Radix UI - UI Components
- @modelcontextprotocol/sdk - MCP SDK
- React Markdown - Markdown Rendering
Development
To run the development server:

```shell
pnpm dev
```

For a production build:

```shell
pnpm build
pnpm start
```
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
[Add your license information here]