MCP-Health
What is MCP-Health
MCP-Health is a sophisticated Model Context Protocol (MCP) server that integrates Google’s Gemini LLM with advanced medical AI capabilities, designed for healthcare applications. It provides intelligent medical analysis, diagnosis assistance, and health insights using state-of-the-art machine learning models.
Use cases
Use cases for MCP-Health include providing diagnosis assistance to healthcare professionals, monitoring patient health metrics, offering personalized health recommendations, and facilitating secure communication between patients and doctors.
How to use
To use MCP-Health, set up a virtual environment, install the required dependencies, configure the environment variables, initialize the database, and then start the server. Detailed steps include creating a virtual environment, installing dependencies via pip, and running specific commands to initialize and launch the server.
Key features
Key features of MCP-Health include advanced medical analysis (symptom analysis, medical image processing, evidence-based treatment recommendations), AI integration (Google Gemini LLM, medical-specific NLP models, computer vision), healthcare management (patient and doctor portals, appointment scheduling), and health insights (personalized recommendations, risk factor identification, vital signs monitoring).
Where to use
MCP-Health can be used in various healthcare settings, including hospitals, clinics, telemedicine platforms, and health management systems, where intelligent medical analysis and patient management are required.
Healthcare MCP Server
A sophisticated Model Context Protocol (MCP) server that integrates Google’s Gemini LLM with advanced medical AI capabilities for healthcare applications. This system provides intelligent medical analysis, diagnosis assistance, and health insights using state-of-the-art machine learning models.
Features
- 🏥 Advanced Medical Analysis
  - Symptom analysis with multi-model approach
  - Medical image processing and classification
  - Evidence-based treatment recommendations
  - Health metrics monitoring and analysis
- 🤖 AI Integration
  - Google Gemini LLM integration
  - Medical-specific NLP models
  - Computer vision for medical imaging
  - Specialized medical embeddings
- 👨‍⚕️ Healthcare Management
  - Patient and doctor portals
  - Appointment scheduling
  - Medical history tracking
  - Secure authentication system
- 📊 Health Insights
  - Personalized health recommendations
  - Risk factor identification
  - Vital signs monitoring
  - Trend analysis and reporting
Prerequisites
- Python 3.8+
- pip package manager
- Google Gemini API key
- Virtual environment (recommended)
- Storage requirements:
  - At least 20GB of free disk space for AI models
  - Models are cached locally at C:\Users\<username>\.cache\huggingface\hub
  - The first run downloads the required models (15-30 minutes)
Setup
- Create and activate a virtual environment:
python -m venv venv
# On Windows:
.\venv\Scripts\activate
# On Unix/macOS:
source venv/bin/activate
- Install dependencies:
pip install -r requirements.txt
- Configure the environment:
  - Create a .env file in the project root
  - Add the required environment variables:
GOOGLE_API_KEY=your-gemini-api-key
SECRET_KEY=your-flask-secret-key
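The .env file is a plain KEY=value list. In practice the python-dotenv package is the usual way to load it; as an illustration of what that loading amounts to, here is a minimal hand-rolled sketch (not the project's actual code):

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: one KEY=value pair per line, '#' starts a comment."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Never overwrite variables already set in the real environment.
            os.environ.setdefault(key.strip(), value.strip())
```

Variables exported in the shell take precedence over the file, which is the conventional behavior for .env loaders.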
Running the Server
- Initialize the database:
python server.py --init-db
- Start the server:
python launcher.py
The server will be available at http://localhost:5000
Command-Line Usage
The server provides several command-line options for managing AI models and running the system:
Basic Usage
- Start the server (with automatic model download):
python launcher.py
- Start without downloading missing models:
python launcher.py --download-models=false
Model Management
- View model information and storage usage:
python launcher.py --model-info
Shows:
- Total storage used
- Downloaded models
- Missing required models
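Conceptually, reporting total storage used amounts to walking the Hugging Face cache directory and summing file sizes. A sketch of that idea (the cache path matches the prerequisites above; this is illustrative, not the launcher's actual implementation):

```python
from pathlib import Path

# Default Hugging Face hub cache location (see prerequisites above).
HF_CACHE = Path.home() / ".cache" / "huggingface" / "hub"

def cache_size_bytes(root: Path = HF_CACHE) -> int:
    """Total bytes used by all files under the model cache directory."""
    if not root.exists():
        return 0
    return sum(p.stat().st_size for p in root.rglob("*") if p.is_file())

def human_size(n: float) -> str:
    """Render a byte count as a readable string, e.g. '2.0 KB'."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if n < 1024:
            return f"{n:.1f} {unit}"
        n /= 1024
    return f"{n:.1f} PB"
```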
- Clear model cache:
# Clear a specific model cache:
python launcher.py --manage-models --clear-cache medical_nlp
python launcher.py --manage-models --clear-cache vision
python launcher.py --manage-models --clear-cache medical_bert
# Clear all model caches:
python launcher.py --manage-models --clear-cache
- Force download missing models:
python launcher.py --download-models
Advanced Options
- --manage-models: Enter model management mode
- --clear-cache MODEL_ID: Clear the cache for a specific model
- --model-info: Show model storage information
- --download-models: Force download of missing models
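The flags above could be wired up with argparse roughly as follows. This is a hypothetical sketch matching the documented behavior (note --clear-cache and --download-models accept an optional value, so both the bare and the valued forms shown in the examples parse); the real launcher may differ:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical parser mirroring the documented launcher flags."""
    parser = argparse.ArgumentParser(prog="launcher.py")
    parser.add_argument("--manage-models", action="store_true",
                        help="Enter model management mode")
    parser.add_argument("--clear-cache", nargs="?", const="all",
                        metavar="MODEL_ID",
                        help="Clear cache for a specific model, or all if omitted")
    parser.add_argument("--model-info", action="store_true",
                        help="Show model storage information")
    parser.add_argument("--download-models", nargs="?", const="true",
                        default="true",
                        help="Download missing models (true/false)")
    return parser
```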
Examples
- Check system status without starting the server:
python launcher.py --model-info
- Clear all caches and start fresh:
python launcher.py --manage-models --clear-cache
python launcher.py --download-models
- Minimal startup (cloud-only features):
python launcher.py --download-models=false
Documentation
Comprehensive documentation is available in the /docs directory:
1. Technical Guide
- System architecture and components
- Development setup and guidelines
- Deployment instructions
- Security considerations
- Troubleshooting guide
- Performance monitoring
2. API Reference
- Complete API endpoints documentation
- Authentication methods
- Request/response formats
- Error handling
- Rate limiting
- API versioning
3. User Guide
- Getting started guide
- Patient portal instructions
- Doctor portal guide
- Common features walkthrough
- Security best practices
- Emergency procedures
For inline documentation of the codebase:
- Check the docstrings in server.py for detailed function and class documentation
- Review the configuration options in config/mcp_config.py
- Examine the medical resources in the resources/ directory
Docker Deployment
Prerequisites
- Docker and Docker Compose installed on your system
- Google Gemini API key
Using Docker Compose (Recommended)
- Create a .env file with your credentials:
GOOGLE_API_KEY=your-gemini-api-key
SECRET_KEY=your-flask-secret-key
- Start the services:
docker-compose up -d
- View logs:
docker-compose logs -f
- Stop the services:
docker-compose down
Alternative: Using Docker CLI
- Build the image:
docker build -t healthcare-mcp .
Running the Container
- Create a .env file with your credentials:
GOOGLE_API_KEY=your-gemini-api-key
SECRET_KEY=your-flask-secret-key
- Run the container:
docker run -d \
  --name healthcare-mcp \
  -p 5000:5000 \
  --env-file .env \
  -v healthcare_models:/root/.cache/huggingface/hub \
  -v healthcare_data:/app/instance \
  healthcare-mcp
This command:
- Maps port 5000 to your host
- Loads environment variables from .env
- Creates persistent volumes for AI models and database
- Runs the container in detached mode
- View logs:
docker logs -f healthcare-mcp
Storage Volumes
- healthcare_models: Stores downloaded AI models (~20GB)
- healthcare_data: Stores the SQLite database and application data
Maintenance
- Restart container:
docker restart healthcare-mcp
- Stop container:
docker stop healthcare-mcp
- Remove container:
docker rm healthcare-mcp
- Remove volumes:
docker volume rm healthcare_models healthcare_data
Project Structure
.
├── launcher.py              # Application entry point
├── server.py                # Main MCP server implementation
├── requirements.txt         # Project dependencies
│
├── config/
│   └── mcp_config.py        # Configuration settings
│
├── resources/
│   ├── medical_ontology.json   # Medical knowledge base
│   └── prompt_templates.json   # LLM prompt templates
│
├── templates/               # Web interface templates
│   ├── base.html
│   ├── index.html
│   ├── login.html
│   ├── register.html
│   ├── doctor_dashboard.html
│   └── patient_dashboard.html
│
└── tools/                   # Specialized medical AI tools
    ├── medical_nlp.py       # Natural language processing
    ├── medical_imaging.py   # Image analysis
    └── health_insights.py   # Health metrics analysis
Available Tools
1. Medical Analysis Tools
Symptom Analysis
analyze_symptoms(symptoms: List[str], ctx: Context, patient_data: Optional[dict] = None) -> Dict
- Analyzes symptoms using medical NLP models
- Provides potential diagnoses with confidence levels
- Assesses urgency and recommends immediate actions
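The response schema is not specified here; as an illustration of the kind of dictionary a caller might get back, here is a stub mirroring the documented signature (minus ctx). All field names are assumptions, not the server's actual schema:

```python
from typing import Dict, List, Optional

def analyze_symptoms_stub(symptoms: List[str],
                          patient_data: Optional[dict] = None) -> Dict:
    """Illustrative stand-in showing a plausible response shape:
    diagnoses with confidence levels, an urgency assessment, and
    recommended immediate actions."""
    return {
        "symptoms": symptoms,
        "potential_diagnoses": [
            {"condition": "example condition", "confidence": 0.0},
        ],
        "urgency": "routine",  # e.g. routine / urgent / emergency
        "recommended_actions": ["consult a healthcare professional"],
    }
```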
Medical Image Analysis
medical_image_analysis(image_path: str, ctx: Context) -> Dict
- Processes medical images using computer vision
- Supports various medical imaging formats
- Provides detailed analysis and classifications
Treatment Suggestions
get_treatment_suggestions(condition: str, patient_history: str, ctx: Context) -> Dict
- Generates evidence-based treatment plans
- Considers patient history and context
- Provides alternative treatment options
2. Health Insight Tools
Health Metrics Analysis
generate_health_insights(patient_data: Dict, ctx: Context) -> Dict
- Analyzes patient health metrics
- Identifies risk factors
- Provides personalized recommendations
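As an example of the kind of rule such a tool applies when identifying risk factors, here is a toy blood-pressure classifier following the widely published AHA thresholds. It is purely illustrative (not the server's logic, and not medical advice):

```python
def classify_blood_pressure(systolic: int, diastolic: int) -> str:
    """Toy risk-factor rule: map a BP reading (mmHg) to an AHA-style category."""
    if systolic >= 140 or diastolic >= 90:
        return "hypertension stage 2"
    if systolic >= 130 or diastolic >= 80:
        return "hypertension stage 1"
    if systolic >= 120:
        return "elevated"
    return "normal"
```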
3. LLM Integration
Text Generation
generate_text(prompt: str, ctx: Context) -> str
- Integrates with Gemini LLM
- Handles medical context appropriately
- Implements safety filters
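The "safety filters" step can be pictured as a pre-check on prompts before they are forwarded to the LLM. A toy keyword deny-list conveys the idea (purely illustrative; in practice the system would also rely on Gemini's built-in safety settings):

```python
# Hypothetical deny-list of sensitive terms; invented for illustration.
BLOCKED_TERMS = {"ssn", "credit card"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing any term from the deny-list."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```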
Security Features
- Secure authentication system
- Role-based access control
- Password hashing
- Session management
- Environment variable protection
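Password hashing along these lines can be done with the standard library alone. A sketch using salted PBKDF2-HMAC-SHA256 (the project, being Flask-based, may instead use Werkzeug's generate_password_hash; this shows the underlying idea):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 310_000) -> str:
    """Derive a salted PBKDF2-HMAC-SHA256 hash, encoded as 'salt$digest'."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"{salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str, *, iterations: int = 310_000) -> bool:
    """Re-derive the hash and compare in constant time."""
    salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    bytes.fromhex(salt_hex), iterations)
    return hmac.compare_digest(candidate.hex(), digest_hex)
```

Storing the salt alongside the digest lets each password get a unique salt, so identical passwords never produce identical records.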
Medical Data Resources
The system uses comprehensive medical resources:
- Medical Ontology
  - Symptom categories
  - Disease classifications
  - Treatment protocols
  - Risk assessment guidelines
- Prompt Templates
  - Diagnosis templates
  - Treatment planning
  - Patient education
  - Health monitoring
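prompt_templates.json presumably maps template names to prompt strings with placeholders. A sketch of how such a file might be consumed at runtime (the template text and keys below are invented for illustration, standing in for the real file's contents):

```python
import json

# Invented stand-in for the contents of resources/prompt_templates.json.
TEMPLATES_JSON = """
{
  "diagnosis": "Patient reports: {symptoms}. List plausible differential diagnoses.",
  "patient_education": "Explain {condition} to a patient in plain language."
}
"""

def render_prompt(name: str, **fields: str) -> str:
    """Look up a template by name and fill in its placeholders."""
    templates = json.loads(TEMPLATES_JSON)
    return templates[name].format(**fields)
```

Keeping prompts in a resource file rather than in code lets them be tuned without touching the server logic.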
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
License
MIT License - See LICENSE file for details
Support
For support and questions, please open an issue in the repository.
Acknowledgments
- Google Gemini AI
- Medical NLP community
- Healthcare AI researchers
- Open-source medical datasets
Version History
- v1.0.0 - Initial release
- v1.1.0 - Added medical imaging support
- v1.2.0 - Enhanced health insights
- v1.3.0 - Improved NLP capabilities