MCPOmni Connect (mcp_connect)
What is mcp_connect
mcp_connect is a versatile command-line interface (CLI) client that connects to Model Context Protocol (MCP) servers over stdio and other transports, providing seamless integration with OpenAI and other LLM providers and dynamic tool management.
Use cases
Use cases include building AI applications that require real-time data processing, integrating various AI models for complex tasks, and managing multiple MCP servers for enhanced functionality.
How to use
To use mcp_connect, install it via PyPI, then run the CLI commands to connect to your desired MCP server. You can utilize various protocols and manage AI models through simple command-line inputs.
Key features
Key features include multi-protocol support (stdio transport, SSE, Docker integration), advanced AI model integration (OpenAI, OpenRouter, Groq, Gemini), intelligent context management, and dynamic tool selection.
Where to use
mcp_connect can be used in fields such as AI development, data science, and software engineering, where integration with multiple AI models and real-time communication is essential.
🚀 MCPOmni Connect - Universal Gateway to MCP Servers
MCPOmni Connect is a powerful, universal command-line interface (CLI) that serves as your gateway to the Model Context Protocol (MCP) ecosystem. It seamlessly integrates multiple MCP servers, AI models, and various transport protocols into a unified, intelligent interface.
✨ Key Features
🔌 Universal Connectivity
- Multi-Protocol Support
- Native support for stdio transport
- Server-Sent Events (SSE) for real-time communication
- Streamable HTTP for efficient data streaming
- Docker container integration
- NPX package execution
- Extensible transport layer for future protocols
- Authentication Support
- OAuth 2.0 authentication flow
- Bearer token authentication
- Custom header support
- Secure credential management
- ReAct Agentic Mode
- Autonomous task execution without human intervention
- Advanced reasoning and decision-making capabilities
- Seamless switching between chat and agentic modes
- Self-guided tool selection and execution
- Complex task decomposition and handling
- Orchestrator Agent Mode
- Advanced planning for complex multi-step tasks
- Intelligent task delegation across multiple MCP servers
- Dynamic agent coordination and communication
- Automated subtask management and execution
🧠 AI-Powered Intelligence
- Unified LLM Integration with LiteLLM
- Single unified interface for all AI providers
- Support for 100+ models across providers including:
- OpenAI (GPT-4, GPT-3.5, etc.)
- Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku, etc.)
- Google (Gemini Pro, Gemini Flash, etc.)
- Groq (Llama, Mixtral, Gemma, etc.)
- DeepSeek (DeepSeek-V3, DeepSeek-Coder, etc.)
- Azure OpenAI
- OpenRouter (access to 200+ models)
- Ollama (local models)
- Simplified configuration and reduced complexity
- Dynamic system prompts based on available capabilities
- Intelligent context management
- Automatic tool selection and chaining
- Universal model support through custom ReAct Agent
- Handles models without native function calling
- Dynamic function execution based on user requests
- Intelligent tool orchestration
🔒 Security & Privacy
- Explicit User Control
- All tool executions require explicit user approval in chat mode
- Clear explanation of tool actions before execution
- Transparent disclosure of data access and usage
- Data Protection
- Strict data access controls
- Server-specific data isolation
- No unauthorized data exposure
- Privacy-First Approach
- Minimal data collection
- User data remains on specified servers
- No cross-server data sharing without consent
- Secure Communication
- Encrypted transport protocols
- Secure API key management
- Environment variable protection
💾 Memory Management
- Redis-Powered Persistence
- Long-term conversation memory storage
- Session persistence across restarts
- Configurable memory retention
- Easy memory toggle with commands
- Chat History File Storage
- Save complete chat conversations to files
- Load previous conversations from saved files
- Continue conversations from where you left off
- Persistent chat history across sessions
- File-based backup and restoration of conversations
- Intelligent Context Management
- Automatic context pruning
- Relevant information retrieval
- Memory-aware responses
- Cross-session context maintenance
💬 Prompt Management
- Advanced Prompt Handling
- Dynamic prompt discovery across servers
- Flexible argument parsing (JSON and key-value formats)
- Cross-server prompt coordination
- Intelligent prompt validation
- Context-aware prompt execution
- Real-time prompt responses
- Support for complex nested arguments
- Automatic type conversion and validation
- Client-Side Sampling Support
- Dynamic sampling configuration from client
- Flexible LLM response generation
- Customizable sampling parameters
- Real-time sampling adjustments
🛠️ Tool Orchestration
- Dynamic Tool Discovery & Management
- Automatic tool capability detection
- Cross-server tool coordination
- Intelligent tool selection based on context
- Real-time tool availability updates
📦 Resource Management
- Universal Resource Access
- Cross-server resource discovery
- Unified resource addressing
- Automatic resource type detection
- Smart content summarization
🔄 Server Management
- Advanced Server Handling
- Multiple simultaneous server connections
- Automatic server health monitoring
- Graceful connection management
- Dynamic capability updates
- Flexible authentication methods
- Runtime server configuration updates
🏗️ Architecture
Core Components
MCPOmni Connect
├── Transport Layer
│   ├── Stdio Transport
│   ├── SSE Transport
│   └── Docker Integration
├── Session Management
│   ├── Multi-Server Orchestration
│   └── Connection Lifecycle Management
├── Tool Management
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   └── Tool Execution Engine
└── AI Integration
    ├── LLM Processing
    ├── Context Management
    └── Response Generation
🚀 Getting Started
Prerequisites
- Python 3.10+
- LLM API key
- UV package manager (recommended)
- Redis server (optional, for persistent memory)
Install using package manager
# with uv recommended
uv add mcpomni-connect
# using pip
pip install mcpomni-connect
Configuration
# Set up environment variables
echo "LLM_API_KEY=your_key_here" > .env
# Optional: Configure Redis connection
echo "REDIS_HOST=localhost" >> .env
echo "REDIS_PORT=6379" >> .env
echo "REDIS_DB=0" >> .env"
# Configure your servers in servers_config.json
Environment Variables
| Variable | Description | Example |
|---|---|---|
| LLM_API_KEY | Universal API key for LLM provider | sk-… (OpenAI), etc. |
| OPENAI_API_KEY | Specific OpenAI API key (optional) | sk-… |
| ANTHROPIC_API_KEY | Specific Anthropic API key (optional) | sk-ant-… |
| GROQ_API_KEY | Specific Groq API key (optional) | gsk_… |
| REDIS_HOST | Redis server hostname (optional) | localhost |
| REDIS_PORT | Redis server port (optional) | 6379 |
| REDIS_DB | Redis database number (optional) | 0 |
Note: With LiteLLM integration, you can either use LLM_API_KEY as a universal key or set provider-specific keys. LiteLLM will automatically route to the appropriate provider based on the model name.
Start CLI
# Start the CLI (ensure your API key is exported or set in .env)
mcpomni_connect
🧪 Testing
Running Tests
# Run all tests with verbose output
pytest tests/ -v
# Run specific test file
pytest tests/test_specific_file.py -v
# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing
Test Structure
tests/
├── unit/          # Unit tests for individual components
Development Quick Start
1. Installation

# Clone the repository
git clone https://github.com/Abiorh001/mcp_omni_connect.git
cd mcp_omni_connect

# Create and activate virtual environment
uv venv
source .venv/bin/activate

# Install dependencies
uv sync

2. Configuration

# Set up environment variables
echo "LLM_API_KEY=your_key_here" > .env

# Configure your servers in servers_config.json

3. Start Client

# Start the client
uv run run.py
# or
python run.py
🧑‍💻 Examples
Basic CLI Example
You can run the basic CLI example to interact with MCPOmni Connect directly from the terminal.
Using uv (recommended):
uv run examples/basic.py
Or using Python directly:
python examples/basic.py
FastAPI Server Example
You can also run MCPOmni Connect as a FastAPI server for web or API-based interaction.
Using uv:
uv run examples/fast_api_iml.py
Or using Python directly:
python examples/fast_api_iml.py
Web Client
A simple web client is provided in examples/index.html.
- Open it in your browser after starting the FastAPI server.
- It connects to http://localhost:8000 and provides a chat interface.
- The FastAPI server will start on http://localhost:8000 by default.
- You can interact with the API (see examples/index.html for a simple web client).
FastAPI API Endpoints
/chat/agent_chat (POST)
- Description: Send a chat query to the agent and receive a streamed response.
- Request:
{
  "query": "Your question here",
  "chat_id": "unique-chat-id"
}
- Response: Streamed JSON lines, each like:
{
  "message_id": "...",
  "usid": "...",
  "role": "assistant",
  "content": "Response text",
  "meta": [],
  "likeordislike": null,
  "time": "2024-06-10 12:34:56"
}
🛠️ Developer Integration
MCPOmni Connect is not just a CLI tool—it’s also a powerful Python library that you can use to build your own backend services, custom clients, or API servers.
Build Your Own MCP Client
You can import MCPOmni Connect in your Python project to:
- Connect to one or more MCP servers
- Choose between ReAct Agent mode (autonomous tool use) or Orchestrator Agent mode (multi-step, multi-server planning)
- Manage memory, context, and tool orchestration
- Expose your own API endpoints (e.g., with FastAPI, Flask, etc.)
Example: Custom Backend with FastAPI
See examples/fast_api_iml.py for a full-featured example.
Minimal Example:
from mcpomni_connect.client import Configuration, MCPClient
from mcpomni_connect.llm import LLMConnection
from mcpomni_connect.agents.react_agent import ReactAgent
from mcpomni_connect.agents.orchestrator import OrchestratorAgent
config = Configuration()
client = MCPClient(config)
llm_connection = LLMConnection(config)
# Choose agent mode
agent = ReactAgent(...) # or OrchestratorAgent(...)
# Use in your API endpoint
response = await agent.run(
query="Your user query",
sessions=client.sessions,
llm_connection=llm_connection,
# ...other arguments...
)
FastAPI Integration
You can easily expose your MCP client as an API using FastAPI.
See the FastAPI example for:
- Async server startup and shutdown
- Handling chat requests with different agent modes
- Streaming responses to clients
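As a rough illustration, a minimal FastAPI wrapper might look like the sketch below. It reuses the names from the minimal example above (Configuration, MCPClient, LLMConnection, ReactAgent); the endpoint shape and argument names are assumptions for illustration, and examples/fast_api_iml.py remains the authoritative reference.

from fastapi import FastAPI
from pydantic import BaseModel

from mcpomni_connect.client import Configuration, MCPClient
from mcpomni_connect.llm import LLMConnection
from mcpomni_connect.agents.react_agent import ReactAgent

app = FastAPI()
config = Configuration()
client = MCPClient(config)
llm_connection = LLMConnection(config)
agent = ReactAgent(...)  # configure as in the minimal example above

class ChatRequest(BaseModel):
    query: str
    chat_id: str

@app.post("/chat/agent_chat")
async def agent_chat(request: ChatRequest):
    # Delegate the query to the agent; argument names follow the
    # minimal example above and may differ in your version.
    response = await agent.run(
        query=request.query,
        sessions=client.sessions,
        llm_connection=llm_connection,
    )
    return {"role": "assistant", "content": response}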
Key Features for Developers:
- Full control over agent configuration and limits
- Support for both chat and autonomous agentic modes
- Easy integration with any Python web framework
Server Configuration Examples
Basic OpenAI Configuration
{
"AgentConfig": {
"tool_call_timeout": 30,
"max_steps": 15,
"request_limit": 1000,
"total_tokens_limit": 100000
},
"LLM": {
"provider": "openai",
"model": "gpt-4",
"temperature": 0.5,
"max_tokens": 5000,
"max_context_length": 30000,
"top_p": 0
},
"mcpServers": {
"ev_assistant": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://localhost:8000/mcp"
},
"sse-server": {
"transport_type": "sse",
"url": "http://localhost:3000/sse",
"headers": {
"Authorization": "Bearer token"
},
"timeout": 60,
"sse_read_timeout": 120
},
"streamable_http-server": {
"transport_type": "streamable_http",
"url": "http://localhost:3000/mcp",
"headers": {
"Authorization": "Bearer token"
},
"timeout": 60,
"sse_read_timeout": 120
}
}
}
Anthropic Claude Configuration
{
"LLM": {
"provider": "anthropic",
"model": "claude-3-5-sonnet-20241022",
"temperature": 0.7,
"max_tokens": 4000,
"max_context_length": 200000,
"top_p": 0.95
}
}
Groq Configuration
{
"LLM": {
"provider": "groq",
"model": "llama-3.1-8b-instant",
"temperature": 0.5,
"max_tokens": 2000,
"max_context_length": 8000,
"top_p": 0.9
}
}
Azure OpenAI Configuration
{
"LLM": {
"provider": "azureopenai",
"model": "gpt-4",
"temperature": 0.7,
"max_tokens": 2000,
"max_context_length": 100000,
"top_p": 0.95,
"azure_endpoint": "https://your-resource.openai.azure.com",
"azure_api_version": "2024-02-01",
"azure_deployment": "your-deployment-name"
}
}
Ollama Local Model Configuration
{
"LLM": {
"provider": "ollama",
"model": "llama3.1:8b",
"temperature": 0.5,
"max_tokens": 5000,
"max_context_length": 100000,
"top_p": 0.7,
"ollama_host": "http://localhost:11434"
}
}
OpenRouter Configuration
{
"LLM": {
"provider": "openrouter",
"model": "anthropic/claude-3.5-sonnet",
"temperature": 0.7,
"max_tokens": 4000,
"max_context_length": 200000,
"top_p": 0.95
}
}
🔐 Authentication Methods
MCPOmni Connect supports multiple authentication methods for secure server connections:
OAuth 2.0 Authentication
{
"server_name": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://your-server/mcp"
}
}
Bearer Token Authentication
{
"server_name": {
"transport_type": "streamable_http",
"headers": {
"Authorization": "Bearer your-token-here"
},
"url": "http://your-server/mcp"
}
}
Custom Headers
{
"server_name": {
"transport_type": "streamable_http",
"headers": {
"X-Custom-Header": "value",
"Authorization": "Custom-Auth-Scheme token"
},
"url": "http://your-server/mcp"
}
}
🔄 Dynamic Server Configuration
MCPOmni Connect supports dynamic server configuration through commands:
Add New Servers
# Add one or more servers from a configuration file
/add_servers:path/to/config.json
The configuration file can include multiple servers with different authentication methods:
{
"new-server": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://localhost:8000/mcp"
},
"another-server": {
"transport_type": "sse",
"headers": {
"Authorization": "Bearer token"
},
"url": "http://localhost:3000/sse"
}
}
Remove Servers
# Remove a server by its name
/remove_server:server_name
🎯 Usage
Interactive Commands
/tools - List all available tools across servers
/prompts - View available prompts
/prompt:<name>/<args> - Execute a prompt with arguments
/resources - List available resources
/resource:<uri> - Access and analyze a resource
/debug - Toggle debug mode
/refresh - Update server capabilities
/memory - Toggle Redis memory persistence (on/off)
/mode:auto - Switch to autonomous agentic mode
/mode:chat - Switch back to interactive chat mode
/add_servers:<config.json> - Add one or more servers from a configuration file
/remove_server:<server_name> - Remove a server by its name
Memory and Chat History
# Enable Redis memory persistence
/memory
# Check memory status
Memory persistence is now ENABLED using Redis
# Disable memory persistence
/memory
# Check memory status
Memory persistence is now DISABLED
Operation Modes
# Switch to autonomous mode
/mode:auto
# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.
# Switch back to chat mode
/mode:chat
# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.
Mode Differences
- Chat Mode (Default)
  - Requires explicit approval for tool execution
  - Interactive conversation style
  - Step-by-step task execution
  - Detailed explanations of actions
- Autonomous Mode
  - Independent task execution
  - Self-guided decision making
  - Automatic tool selection and chaining
  - Progress updates and final results
  - Complex task decomposition
  - Error handling and recovery
- Orchestrator Mode
  - Advanced planning for complex multi-step tasks
  - Strategic delegation across multiple MCP servers
  - Intelligent agent coordination and communication
  - Parallel task execution when possible
  - Dynamic resource allocation
  - Sophisticated workflow management
  - Real-time progress monitoring across agents
  - Adaptive task prioritization
Prompt Management
# List all available prompts
/prompts
# Basic prompt usage
/prompt:weather/location=tokyo
# Prompt with multiple arguments (argument names depend on the server's prompt definition)
/prompt:travel-planner/from=london/to=paris/date=2024-03-25
# JSON format for complex arguments
/prompt:analyze-data/{
"dataset": "sales_2024",
"metrics": ["revenue", "growth"],
"filters": {
"region": "europe",
"period": "q1"
}
}
# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
"price_range": {"min": 500, "max": 1000},
"features": ["5G", "wireless-charging"],
"markets": ["US", "EU", "Asia"]
}
Advanced Prompt Features
- Argument Validation: Automatic type checking and validation
- Default Values: Smart handling of optional arguments
- Context Awareness: Prompts can access previous conversation context
- Cross-Server Execution: Seamless execution across multiple MCP servers
- Error Handling: Graceful handling of invalid arguments with helpful messages
- Dynamic Help: Detailed usage information for each prompt
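To make the two argument formats concrete, here is an illustrative Python sketch of how a /prompt command string could be split into a prompt name and an argument dictionary. This is not MCPOmni Connect's actual parser, just a sketch of the key-value and JSON conventions shown above.

import json

def parse_prompt_command(command: str) -> tuple[str, dict]:
    """Parse '/prompt:<name>/<args>' into (name, args). Illustrative only."""
    body = command.removeprefix("/prompt:")
    name, _, raw_args = body.partition("/")
    if raw_args.startswith("{"):
        # JSON format: the whole remainder is one JSON object
        return name, json.loads(raw_args)
    args = {}
    for segment in raw_args.split("/"):
        if not segment:
            continue
        key, _, value = segment.partition("=")
        # Values that look like JSON objects are parsed as JSON
        args[key] = json.loads(value) if value.startswith("{") else value
    return name, args

name, args = parse_prompt_command("/prompt:travel-planner/from=london/to=paris/date=2024-03-25")
print(name, args)  # travel-planner {'from': 'london', 'to': 'paris', 'date': '2024-03-25'}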
AI-Powered Interactions
The client intelligently:
- Chains multiple tools together
- Provides context-aware responses
- Automatically selects appropriate tools
- Handles errors gracefully
- Maintains conversation context
Model Support with LiteLLM
- Unified Model Access
- Single interface for 100+ models across all major providers
- Automatic provider detection and routing
- Consistent API regardless of underlying provider
- Native function calling for compatible models
- ReAct Agent fallback for models without function calling
- Supported Providers
- OpenAI: GPT-4, GPT-3.5, and all model variants
- Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
- Google: Gemini Pro, Gemini Flash, PaLM models
- Groq: Ultra-fast inference for Llama, Mixtral, Gemma
- DeepSeek: DeepSeek-V3, DeepSeek-Coder, and specialized models
- Azure OpenAI: Enterprise-grade OpenAI models
- OpenRouter: Access to 200+ models from various providers
- Ollama: Local model execution with privacy
- Advanced Features
- Automatic model capability detection
- Dynamic tool execution based on model features
- Intelligent fallback mechanisms
- Provider-specific optimizations
Token & Usage Management
MCPOmni Connect now provides advanced controls and visibility over your API usage and resource limits.
View API Usage Stats
Use the /api_stats command to see your current usage:
/api_stats
This will display:
- Total tokens used
- Total requests made
- Total response tokens
- Number of requests
Set Usage Limits
You can set limits to automatically stop execution when thresholds are reached:
- Total Request Limit: Set the maximum number of requests allowed in a session.
- Total Token Usage Limit: Set the maximum number of tokens that can be used.
- Tool Call Timeout: Set the maximum time (in seconds) a tool call can take before being terminated.
- Max Steps: Set the maximum number of steps the agent can take before stopping.
You can configure these in your servers_config.json under the AgentConfig section:
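For example, the following AgentConfig block (the same fields shown in the Basic OpenAI Configuration above; the values here are illustrative) sets all four limits:

{
  "AgentConfig": {
    "tool_call_timeout": 30,
    "max_steps": 15,
    "request_limit": 1000,
    "total_tokens_limit": 100000
  }
}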
When any of these limits is reached, the agent will automatically stop running and notify you.
Example Commands
# Check your current API usage and limits
/api_stats
# Set a new request limit (example)
# (This can be done by editing servers_config.json or via future CLI commands)
🔧 Advanced Features
Tool Orchestration
# Example of automatic tool chaining (when the relevant tools are available on connected servers)
User: "Find charging stations near Silicon Valley and check their current status"
# Client automatically:
1. Uses Google Maps API to locate Silicon Valley
2. Searches for charging stations in the area
3. Checks station status through EV network API
4. Formats and presents results
Resource Analysis
# Automatic resource processing
User: "Analyze the contents of /path/to/document.pdf"
# Client automatically:
1. Identifies resource type
2. Extracts content
3. Processes through LLM
4. Provides intelligent summary
Demo
🔍 Troubleshooting
Common Issues and Solutions
1. Connection Issues
   Error: Could not connect to MCP server
   - Check if the server is running
   - Verify server configuration in servers_config.json
   - Ensure network connectivity
   - Check server logs for errors
2. API Key Issues
   Error: Invalid API key
   - Verify API key is correctly set in .env
   - Check if API key has required permissions
   - Ensure API key is for the correct environment (production/development)
3. Redis Connection
   Error: Could not connect to Redis
   - Verify Redis server is running
   - Check Redis connection settings in .env
   - Ensure Redis password is correct (if configured)
4. Tool Execution Failures
   Error: Tool execution failed
   - Check tool availability on connected servers
   - Verify tool permissions
   - Review tool arguments for correctness
Debug Mode
Enable debug mode for detailed logging:
/debug
For additional support, please:
- Check the Issues page
- Review closed issues for similar problems
- Open a new issue with detailed information if needed
🤝 Contributing
We welcome contributions! See our Contributing Guide for details.
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
📬 Contact & Support
- Author: Abiola Adeshina
- Email: [email protected]
- GitHub Issues: Report a bug
Built with ❤️ by the MCPOmni Connect Team