MCP Server Python
What is MCP Server Python?
MCP-Server-Python is an AI-powered server designed to provide intelligent, context-aware conversational capabilities. It integrates multiple large language model (LLM) providers such as OpenAI, Anthropic, and Google Gemini, and is built on FastAPI, with Pyppeteer providing web browsing functionality.
Use cases
Use cases include developing chatbots for customer service, creating interactive educational platforms, providing AI-assisted research tools, and enhancing user engagement through personalized conversational experiences.
How to use
To use MCP-Server-Python, clone the repository, set up a virtual environment, install the necessary dependencies, and configure your OpenAI API key. The server can be accessed via its RESTful API for integration with any frontend.
Key features
Key features include a role-based AI advisor system, semantic memory management, real-time streaming responses, integrated web browsing capabilities, dynamic context switching, enhanced markdown formatting, multi-modal context support, advanced role search, and multiple LLM provider support.
Where to use
MCP-Server-Python can be used in various fields such as customer support, virtual assistants, educational tools, and any application requiring intelligent conversational interfaces.
MCP (Model Context Protocol) Server: Intelligent Conversational Platform
Overview
MCP (Model Context Protocol) is a sophisticated AI-powered server designed to provide intelligent, context-aware conversational capabilities. This standalone server leverages multiple LLM providers (OpenAI, Anthropic, and Google Gemini), FastAPI, and Pyppeteer for web browsing capabilities to deliver nuanced, contextually relevant responses across various business domains.
Note: This repository contains only the MCP server implementation. While frontend examples are provided in the documentation for illustrative purposes, the actual frontend implementation is not included in this repository. The MCP server is designed to be integrated with any frontend through its RESTful API.
Key Features
- 🤖 Role-based AI advisor system with customizable instructions and tones
- 🧠 Semantic memory management with vector similarity search
- 🌊 Real-time streaming responses for improved user experience
- 🌐 Integrated web browsing capabilities for AI-assisted research
- 🔄 Dynamic context switching based on conversation triggers
- 📝 Enhanced markdown formatting for professional-looking content
- 🖼️ Multi-modal context support for processing images and other media
- 🔍 Advanced role search and filtering by keywords, domains, and tone
- 🔗 Advanced memory features with tagging, sharing, and inheritance
- 🔄 Multiple LLM provider support (OpenAI, Anthropic, Google Gemini)
Technology Stack
- Backend: Python with asyncio
- Web Framework: FastAPI
- AI Models:
- OpenAI GPT-4o-mini
- Anthropic Claude models
- Google Gemini models
- Browser Automation: Pyppeteer (Python port of Puppeteer)
- API Documentation: Swagger UI via FastAPI
Setup and Installation
Prerequisites
- Python 3.11+
- OpenAI API key
- Git (for cloning the repository)
Installation Steps
- Clone the repository
- Create a virtual environment:
python -m venv venv
- Activate the virtual environment:
- Windows:
venv\Scripts\activate
- macOS/Linux:
source venv/bin/activate
- Install dependencies:
pip install -r requirements.txt
- Configure environment variables (see below)
- Run the server:
python -m app.main
- Access the API documentation at http://localhost:8000/docs
Configuration
Create a .env file based on .env.example with the following variables:
# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL=gpt-4o-mini
OPENAI_VISION_MODEL=gpt-4o

# Anthropic Configuration (Optional)
ANTHROPIC_API_KEY=your_anthropic_api_key
ANTHROPIC_MODEL=claude-3-haiku-20240307

# Google Gemini Configuration (Optional)
GEMINI_API_KEY=your_gemini_api_key
GEMINI_MODEL=gemini-1.5-pro

# Default Provider Configuration
DEFAULT_PROVIDER=openai # Options: openai, anthropic, gemini
EMBEDDING_MODEL=text-embedding-ada-002

# Server Settings
PORT=8000
DEBUG=False
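Loading these variables inside the server might look like the following sketch. It uses only the standard library; the function name `load_settings` and the defaults shown are illustrative assumptions, not the repository's actual code.

```python
import os

def load_settings() -> dict:
    """Read MCP server settings from environment variables.

    Variable names mirror the .env example above; the fallback
    defaults here are assumptions for illustration.
    """
    return {
        "openai_api_key": os.getenv("OPENAI_API_KEY", ""),
        "openai_model": os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
        "default_provider": os.getenv("DEFAULT_PROVIDER", "openai"),
        "port": int(os.getenv("PORT", "8000")),
        "debug": os.getenv("DEBUG", "False").lower() == "true",
    }

# Environment variables always win over the built-in defaults:
os.environ["DEFAULT_PROVIDER"] = "anthropic"
settings = load_settings()
print(settings["default_provider"])  # anthropic
```

In the real project these values would typically be loaded from the .env file first (for example with python-dotenv) before `os.getenv` is consulted.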
Recent Improvements
Contextual Analysis for Specialized Domains
- Implemented domain-specific analysis capabilities for various business areas
- Created specialized analysis templates for finance, marketing, operations, sales, and more
- Added automatic extraction of domain-specific terminology and patterns
- Integrated domain-specific metrics and frameworks into AI responses
- Enhanced system prompts with domain-specific guidance
- Created API endpoints for domain analysis and template management
- Improved relevance and specificity of AI responses for specialized domains
Advanced Memory Features
- Implemented memory tagging and categorization system for better organization
- Created hierarchical memory access control with role-based permissions
- Designed role-based memory inheritance mechanism for knowledge sharing
- Added configurable memory sharing permissions between roles
- Developed semantic search for cross-role memory retrieval
- Implemented memory embedding and similarity scoring for relevance ranking
- Added API endpoints for memory sharing and inheritance management
- Created comprehensive documentation for advanced memory features
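The combination of embedding similarity, role-based permissions, and memory sharing might fit together as in this sketch. The memory record shape and function names are assumptions for illustration; the toy two-dimensional embeddings stand in for real embedding vectors.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_memories(query_emb, memories, role, top_k=2):
    """Return the most similar memories the given role is allowed to see.

    Each memory dict carries an embedding, tags, and the set of roles it
    is shared with -- a simplified stand-in for the server's records.
    """
    visible = [m for m in memories if role in m["shared_with"]]
    visible.sort(key=lambda m: cosine_similarity(query_emb, m["embedding"]),
                 reverse=True)
    return visible[:top_k]

memories = [
    {"text": "Q3 revenue grew 12%", "embedding": [1.0, 0.0],
     "tags": ["finance"], "shared_with": {"cfo", "analyst"}},
    {"text": "New logo approved", "embedding": [0.0, 1.0],
     "tags": ["brand"], "shared_with": {"cmo"}},
]
top = rank_memories([0.9, 0.1], memories, role="analyst", top_k=1)
print(top[0]["text"])  # Q3 revenue grew 12%
```

Filtering by permission before ranking keeps cross-role retrieval cheap and ensures a role never sees memories it has not been granted.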
Role Search and Filtering
- Implemented advanced search capabilities for finding roles by keywords
- Added domain-based filtering to find roles with specific expertise
- Created tone-based filtering for communication style preferences
- Added API endpoints for role search with combined filtering options
- Implemented domain discovery endpoint to retrieve all unique domains
- Created comprehensive test suite for search and filtering functionality
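Combined keyword, domain, and tone filtering of the kind described above can be sketched like this. The role records and function names are hypothetical; the real server exposes this behavior through API endpoints rather than local functions.

```python
# Hypothetical in-memory role records for illustration.
ROLES = [
    {"id": "cfo-advisor", "domains": ["finance"], "tone": "formal",
     "description": "Financial planning and analysis advisor"},
    {"id": "growth-coach", "domains": ["marketing", "sales"], "tone": "casual",
     "description": "Growth and customer acquisition coach"},
]

def search_roles(keyword=None, domain=None, tone=None):
    """Apply keyword, domain, and tone filters in combination."""
    results = ROLES
    if keyword:
        kw = keyword.lower()
        results = [r for r in results
                   if kw in r["id"] or kw in r["description"].lower()]
    if domain:
        results = [r for r in results if domain in r["domains"]]
    if tone:
        results = [r for r in results if r["tone"] == tone]
    return results

def all_domains():
    """Domain discovery: every unique domain across roles, sorted."""
    return sorted({d for r in ROLES for d in r["domains"]})

print([r["id"] for r in search_roles(domain="finance")])  # ['cfo-advisor']
print(all_domains())  # ['finance', 'marketing', 'sales']
```

Because each filter narrows the previous result set, any combination of the three parameters composes naturally, which mirrors how combined query parameters behave on a search endpoint.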
Multi-Modal Context Support
- Added support for processing images alongside text queries
- Implemented dedicated multi-modal processing service
- Created API endpoints for multi-modal content processing
- Added file upload capabilities for media content
- Integrated with vision-capable models across providers
- Added streaming support for multi-modal responses
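A provider-agnostic multi-modal message of the kind these endpoints would accept might be assembled as below. The content-part shape loosely follows the OpenAI-style convention of typed parts; the exact field names are assumptions, and each provider adapter would translate them to its own wire format.

```python
import base64

def build_multimodal_message(text: str, image_bytes: bytes,
                             mime: str = "image/png") -> dict:
    """Build a provider-agnostic message mixing text and an image.

    Field names ("type", "mime_type", "data") are illustrative;
    provider adapters convert this to their native format.
    """
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image", "mime_type": mime, "data": encoded},
        ],
    }

msg = build_multimodal_message("What is in this chart?", b"\x89PNG...")
print(msg["content"][0]["type"])  # text
```

Base64-encoding the image keeps the whole message JSON-serializable, which is what makes a single internal representation workable across providers.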
Multiple LLM Provider Support
- Implemented support for multiple LLM providers (OpenAI, Anthropic, Google Gemini)
- Created a modular provider architecture with a common interface
- Added provider-specific optimizations for each LLM service
- Implemented provider selection for all API endpoints
- Created dedicated provider routes for direct provider access
- Added fallback mechanisms for multi-modal content when the primary provider lacks capabilities
- Implemented provider discovery endpoint to list available providers
- Updated configuration to support provider-specific settings
Test Script Improvements
- Updated all test scripts to use the API prefix consistently across endpoints
- Enhanced test structure with proper skipping of unimplemented endpoints
- Improved test reliability by using consistent client fixtures
- Added comprehensive test coverage for role management, domain analysis, and memory features
- Implemented proper mocking for various services to isolate tests
- Fixed import errors and KeyError issues in test files
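The modular provider architecture with a common interface, provider discovery, and multi-modal fallback described under Multiple LLM Provider Support might be sketched like this. Class and function names are hypothetical, and the `complete` bodies are placeholders for real API calls.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Common interface every provider adapter implements."""
    name: str
    supports_vision: bool = False

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider(LLMProvider):
    name, supports_vision = "openai", True
    def complete(self, prompt):  # placeholder for a real OpenAI API call
        return f"[openai] {prompt}"

class GeminiProvider(LLMProvider):
    name = "gemini"  # vision support omitted here purely for illustration
    def complete(self, prompt):  # placeholder for a real Gemini API call
        return f"[gemini] {prompt}"

PROVIDERS = {p.name: p for p in (OpenAIProvider(), GeminiProvider())}

def list_providers():
    """Provider discovery: names of all registered providers."""
    return sorted(PROVIDERS)

def pick_provider(name, need_vision=False):
    """Select a provider, falling back when it lacks vision capabilities."""
    provider = PROVIDERS[name]
    if need_vision and not provider.supports_vision:
        provider = next(p for p in PROVIDERS.values() if p.supports_vision)
    return provider

print(pick_provider("gemini", need_vision=True).name)  # openai
```

Keeping the registry keyed by name is what makes both the discovery endpoint and per-request provider selection trivial to implement.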
Enhanced Formatting
- Added explicit formatting instructions to all prompts
- Standardized markdown formatting across different content types
- Improved client-side rendering of formatted content
- Enhanced CSS styling for better readability
Streaming Functionality
- Implemented real-time streaming of AI responses
- Added visual indicators for streaming state
- Fixed string literal issues in SSE handling
- Improved UI to show different loading states
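On the client side, consuming the SSE stream amounts to filtering `data:` lines and stopping at a terminal sentinel. This sketch assumes a `data: {...}` JSON payload per event with a `token` field and a `[DONE]` sentinel, which is a common convention but not necessarily this server's exact wire format.

```python
import json

def parse_sse(stream_lines):
    """Yield token payloads from Server-Sent Events lines.

    Assumes `data: {"token": ...}` events ending with `data: [DONE]`
    (an illustrative convention, not a confirmed wire format).
    """
    for line in stream_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines, comments, event names
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        yield json.loads(payload)["token"]

raw = [
    'data: {"token": "Hel"}',
    '',
    'data: {"token": "lo"}',
    'data: [DONE]',
]
print("".join(parse_sse(raw)))  # Hello
```

Handling the `[DONE]` sentinel explicitly avoids the string-literal pitfalls mentioned above, since the sentinel is never passed to the JSON parser.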
Comprehensive Test Suite
- Created feature-specific test scripts for all major components
- Implemented tests for context switching functionality
- Added tests for memory features including tagging and retrieval
- Created tests for web browsing capabilities with mocked responses
- Implemented tests for multiple LLM provider integration
- Added tests for domain analysis capabilities
- Created tests for role editing and management features
- Implemented tests for role search and filtering
- Added tests for multimodal content processing
- Created comprehensive test documentation with usage instructions
Development
- Use requirements.txt for server dependency management
- Update todo.txt with new features and improvements after each iteration
- Update README.md after each new feature implementation
Documentation
Detailed documentation is available in the docs directory:
- ROUTES.md: API endpoints and their functionality
- MODELS.md: Data structures and schemas
- ARCHITECTURE.md: System design and component interactions
- SERVICES.md: Core business logic implementation
- SERVER_CAPABILITIES.md: Features and capabilities
- multimodal_context_support.md: Multi-modal processing capabilities
- role_search_filtering.md: Role search and filtering functionality
- role_editing.md: Role management and editing features
- CONTEXT_SWITCHING.md: Dynamic context switching between roles
- advanced_memory_features.md: Advanced memory features including tagging, sharing, and inheritance
Contributing
Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.
License
[Specify License]
Contact
[Your Contact Information]