MCP Explorer

UniChat AI

@FayazKon a year ago
3 · MIT
Free · Community
AI Systems
#ai-chat #chatgpt #claude #desktop-app #electron #llm #mcp #mcp-client #ollama #openai
A unified desktop interface for multiple AI models with MCP integration, organized project management, and secure API handling

Overview

What is UniChat AI

UniChat-AI is a cross-platform desktop application that provides a unified interface for interacting with multiple large language models (LLMs) through a single elegant chat interface. It integrates with the Model Context Protocol (MCP) to enhance AI capabilities with various tools and resources.

Use cases

Use cases for UniChat-AI include managing AI-assisted projects, conducting research with multiple AI models, providing customer support through AI chatbots, and creating content with the help of various language models.

How to use

To use UniChat-AI, download the appropriate version for your operating system from the Releases page. After installation, launch the application, add your API keys in the Settings panel, and create your first project to start interacting with different AI models.

Key features

Key features of UniChat-AI include multi-model support across AI providers, project-based organization of conversations, MCP integration for extended capabilities, mid-conversation model switching with preserved context, file attachments, secure API-key storage, light/dark themes, and local data ownership.

Where to use

UniChat-AI can be used in various fields such as software development, research, customer support, content creation, and any area where interaction with AI models is beneficial.

Content

UniChat AI

UniChat AI Logo

UniChat AI is a cross-platform desktop application that provides a unified interface for interacting with multiple large language models (LLMs) through a single elegant chat interface. With built-in support for Model Context Protocol (MCP), UniChat AI extends the capabilities of AI models with tools, resources, and custom integrations.

Features

  • 🤖 Multi-Model Support: Chat with models from OpenAI, Anthropic, Google Gemini, and local models via Ollama
  • 📂 Project Management: Organize conversations into projects with custom settings and instructions
  • 🔌 MCP Integration: Connect to any Model Context Protocol server to extend AI capabilities
  • 🔄 Continuous Context: Switch models mid-conversation while maintaining context
  • 📎 File Attachments: Attach and reference files in your conversations
  • 🔐 Secure API Management: Securely store your API keys in your system’s credential store
  • 🌓 Light/Dark Modes: Work comfortably day or night with theme support
  • 💾 Data Ownership: All your conversations are stored locally

Installation

Download pre-built binaries

Download the latest release for your platform from the Releases page.

Platform   Download
Windows    UniChat-AI-Windows.exe
macOS      UniChat-AI-macOS.dmg
Linux      UniChat-AI-Linux.AppImage

Build from source

# Clone the repository
git clone https://github.com/fayazk/unichat-ai.git
cd unichat-ai

# Install dependencies
npm install

# Run in development mode
npm run dev

# Build for production
npm run build

Getting Started

  1. Launch the application after installation
  2. Add your API keys in the Settings panel
  3. Create your first project by clicking the “+” button in the sidebar
  4. Start chatting with your preferred AI model
  5. Configure MCP servers (optional) to extend AI capabilities

MCP Integration

UniChat AI supports Model Context Protocol (MCP) servers for extending AI capabilities with tools and resources. To configure MCP servers:

  1. Navigate to Settings > MCP Servers
  2. Add a new server configuration:
{
  "name": "filesystem",
  "command": "npx",
  "args": [
    "-y",
    "@modelcontextprotocol/server-filesystem",
    "/path/to/directory"
  ]
}
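A stdio-based MCP server entry like the one above is essentially a child process the app spawns and talks to over stdin/stdout. The sketch below shows how such a config could be launched; it is illustrative only (the function names and the exact spawn options are assumptions, not UniChat AI's actual code):

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Shape of one server entry, matching the JSON above.
interface McpServerConfig {
  name: string;
  command: string;
  args: string[];
}

// Hypothetical launcher: stdio MCP servers are started as child processes
// and spoken to over stdin/stdout. UniChat AI's real launch code may differ.
function launchMcpServer(config: McpServerConfig): ChildProcess {
  return spawn(config.command, config.args, {
    stdio: ["pipe", "pipe", "inherit"], // pipe stdin/stdout for the MCP transport
  });
}

// Pure helper (useful for logging): the full command line for an entry.
function toCommandLine(config: McpServerConfig): string {
  return [config.command, ...config.args].join(" ");
}
```

For the filesystem example above, `toCommandLine` would produce `npx -y @modelcontextprotocol/server-filesystem /path/to/directory`.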

Learn more about available MCP servers in the MCP documentation.

API Keys

UniChat AI requires an API key for each hosted LLM provider you want to use (e.g., OpenAI, Anthropic, Google Gemini).

API keys are securely stored in your system’s credential store and never shared.

For Developers

Architecture

UniChat AI uses a modular architecture with these key components:

  1. UI Layer: Electron with React/Vue components
  2. Provider Layer: Adapters for different LLM APIs
  3. MCP Layer: Integration with MCP servers
  4. Storage Layer: Local database and file system interaction

Adding New LLM Providers

To add support for a new LLM provider:

  1. Create a new adapter in src/api/providers/
  2. Implement the ProviderInterface
  3. Register the provider in src/api/providerRegistry.ts
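As a sketch of step 2, an adapter might look like the following. The method shapes of `ProviderInterface` here are assumptions for illustration (the real definition lives in the source tree), and "Acme" is a fictional provider:

```typescript
// Hypothetical shape of ProviderInterface; the actual definition in
// src/api/providers/ may differ.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ProviderInterface {
  id: string;
  listModels(): Promise<string[]>;
  sendChat(model: string, messages: ChatMessage[]): Promise<string>;
}

// Example adapter for a fictional provider "Acme".
class AcmeProvider implements ProviderInterface {
  id = "acme";

  async listModels(): Promise<string[]> {
    // A real adapter would fetch this from the provider's API.
    return ["acme-small", "acme-large"];
  }

  async sendChat(model: string, messages: ChatMessage[]): Promise<string> {
    // A real adapter would call the provider's HTTP API here;
    // this stub just echoes the last user message.
    const last = messages[messages.length - 1];
    return `[${model}] echo: ${last.content}`;
  }
}
```

Once the adapter implements the interface, registering it (step 3) makes the new provider's models selectable alongside the built-in ones.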

See the Developer Guide for detailed instructions.

Troubleshooting

Common Issues

  • API Connection Issues: Verify your API keys and internet connection
  • MCP Server Not Connecting: Check your MCP server configuration and verify that the command and paths are correct
  • Missing Messages: If conversations disappear, check the application logs for database errors

For more help, see the Troubleshooting Guide or open an issue.

Contributing

Contributions are welcome! Please read our Contributing Guidelines before submitting a pull request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements


Made with ❤️ by Fayaz K

GitHub | Website
