GCP MCP Server
What is GCP MCP Server?
gcp-mcp-server is a comprehensive Model Context Protocol (MCP) server that integrates Google Cloud Platform (GCP) APIs with generative AI applications, letting AI models work with cloud services through a single interface.
Use cases
Use cases include deploying AI models using Vertex AI, managing cloud resources through Cloud Functions, automating data processing with BigQuery, and implementing serverless applications with Cloud Run.
How to use
To use gcp-mcp-server, install it via pip with the command pip install gcp-mcp-server, or clone the repository and install from source. Configure your GCP credentials in a .env file or set environment variables as specified in the README.
Key features
Key features include comprehensive GCP service coverage (Compute Engine, Cloud Storage, Cloud Functions, BigQuery, etc.), multiple authentication methods (Service Account JSON keys, OAuth 2.0), enterprise features (multi-project support, cross-region operations), and detailed logging and monitoring capabilities.
Where to use
gcp-mcp-server can be used in various fields such as cloud computing, data analytics, machine learning, and application development, particularly where integration of GCP services with AI applications is required.
GCP MCP Server
A comprehensive Model Context Protocol (MCP) server for integrating Google Cloud Platform (GCP) APIs with GenAI applications.
Features
- Comprehensive GCP Service Coverage:
- Compute Engine: VM instances, instance groups, autoscaling
- Cloud Storage: Buckets, objects, signed URLs, lifecycle policies
- Cloud Functions: Deploy, invoke, manage serverless functions
- BigQuery: Datasets, tables, queries, data warehouse operations
- Cloud SQL: Database instances, backups, replicas
- Kubernetes Engine (GKE): Cluster management, workload deployment
- Cloud Run: Serverless container deployment
- Pub/Sub: Topics, subscriptions, message publishing
- Cloud IAM: Service accounts, roles, policies
- Cloud Build: CI/CD pipelines, build triggers
- Vertex AI: ML model deployment and management
- Cloud Logging & Monitoring: Metrics, alerts, log analysis
- Authentication Methods:
- Service Account JSON keys
- Application Default Credentials (ADC; see the sketch after this list)
- OAuth 2.0 for user authentication
- Workload Identity Federation
- Impersonation support
- Enterprise Features:
- Multi-project support
- Cross-region operations
- VPC-SC (Service Controls) support
- Cost management and billing alerts
- Compliance and security scanning
- Resource labeling and organization
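As one concrete illustration, Application Default Credentials (listed above) can be resolved with the standard google-auth library before the server ever runs; a minimal sketch, independent of gcp-mcp-server itself:
import google.auth

# Resolves credentials from GOOGLE_APPLICATION_CREDENTIALS, from
# `gcloud auth application-default login`, or from the metadata server.
credentials, project_id = google.auth.default()
print(f"Authenticated for project: {project_id}")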
Installation
pip install gcp-mcp-server
Or install from source:
git clone https://github.com/asklokesh/gcp-mcp-server.git
cd gcp-mcp-server
pip install -e .
Configuration
Create a .env file or set environment variables:
# GCP Credentials
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
GCP_PROJECT_ID=your-project-id
GCP_REGION=us-central1
GCP_ZONE=us-central1-a

# OR use Application Default Credentials
GOOGLE_CLOUD_PROJECT=your-project-id

# Optional Settings
GCP_QUOTA_PROJECT_ID=quota-project-id
GCP_IMPERSONATE_SERVICE_ACCOUNT=service-account@project.iam.gserviceaccount.com
GCP_TIMEOUT=30
GCP_MAX_RETRIES=3

# Multi-Project Support
GCP_PROD_PROJECT_ID=prod-project-id
GCP_PROD_CREDENTIALS=/path/to/prod-key.json
GCP_DEV_PROJECT_ID=dev-project-id
GCP_DEV_CREDENTIALS=/path/to/dev-key.json
Quick Start
Basic Usage
from gcp_mcp import GCPMCPServer
# Initialize the server
server = GCPMCPServer()
# Start the server
server.start()
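If your settings live in a .env file as shown under Configuration, one way to load them before starting the server is python-dotenv (an assumption here; the package may also read .env on its own):
from dotenv import load_dotenv
from gcp_mcp import GCPMCPServer

# Populate os.environ from .env (GOOGLE_APPLICATION_CREDENTIALS, GCP_PROJECT_ID, ...)
load_dotenv()

server = GCPMCPServer()
server.start()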
Claude Desktop Configuration
Add to your Claude Desktop config:
{
"mcpServers": {
"gcp": {
"command": "python",
"args": [
"-m",
"gcp_mcp.server"
],
"env": {
"GOOGLE_APPLICATION_CREDENTIALS": "/path/to/service-account-key.json",
"GCP_PROJECT_ID": "your-project-id",
"GCP_REGION": "us-central1"
}
}
}
}
Available Tools
Compute Engine Operations
List Instances
{
"tool": "gcp_compute_list_instances",
"arguments": {
"project": "my-project",
"zone": "us-central1-a",
"filter": "status=RUNNING"
}
}
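The same call can also be issued programmatically through the server's execute_tool method (shown under Error Handling below); the result layout here is an assumption for illustration:
result = server.execute_tool("gcp_compute_list_instances", {
    "project": "my-project",
    "zone": "us-central1-a",
    "filter": "status=RUNNING"
})
# Response shape is assumed; adjust to the actual schema
for instance in result.get("instances", []):
    print(instance["name"])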
Create Instance
{
"tool": "gcp_compute_create_instance",
"arguments": {
"project": "my-project",
"zone": "us-central1-a",
"name": "my-instance",
"machine_type": "e2-medium",
"source_image": "projects/debian-cloud/global/images/family/debian-11",
"disk_size_gb": 20,
"network_tags": ["web-server"],
"metadata": {
"startup-script": "#!/bin/bash\napt-get update\napt-get install -y nginx"
}
}
}
Cloud Storage Operations
Create Bucket
{
"tool": "gcp_storage_create_bucket",
"arguments": {
"project": "my-project",
"bucket_name": "my-unique-bucket",
"location": "US",
"storage_class": "STANDARD",
"lifecycle_rules": [
{
"action": {"type": "Delete"},
"condition": {"age": 365}
}
]
}
}
Upload Object
{
"tool": "gcp_storage_upload_object",
"arguments": {
"bucket": "my-bucket",
"object_name": "data/file.txt",
"content": "File content here",
"content_type": "text/plain",
"metadata": {"key": "value"}
}
}
BigQuery Operations
Create Dataset
{
"tool": "gcp_bigquery_create_dataset",
"arguments": {
"project": "my-project",
"dataset_id": "my_dataset",
"location": "US",
"description": "My dataset description"
}
}
Execute Query
{
"tool": "gcp_bigquery_query",
"arguments": {
"project": "my-project",
"query": "SELECT * FROM `project.dataset.table` WHERE date = CURRENT_DATE()",
"use_legacy_sql": false,
"maximum_bytes_billed": 1000000000
}
}
Cloud Functions Operations
Deploy Function
{
"tool": "gcp_functions_deploy",
"arguments": {
"project": "my-project",
"region": "us-central1",
"name": "my-function",
"runtime": "python39",
"entry_point": "main",
"source_code": "def main(request):\n return 'Hello World!'",
"trigger_type": "http",
"memory_mb": 256
}
}
GKE Operations
Create Cluster
{
"tool": "gcp_gke_create_cluster",
"arguments": {
"project": "my-project",
"zone": "us-central1-a",
"cluster_name": "my-cluster",
"initial_node_count": 3,
"machine_type": "e2-standard-4",
"enable_autopilot": false,
"enable_autoscaling": true,
"min_nodes": 1,
"max_nodes": 10
}
}
Cloud Run Operations
Deploy Service
{
"tool": "gcp_cloudrun_deploy",
"arguments": {
"project": "my-project",
"region": "us-central1",
"service_name": "my-service",
"image": "gcr.io/my-project/my-image:latest",
"memory": "512Mi",
"cpu": "1",
"max_instances": 100,
"allow_unauthenticated": true
}
}
Vertex AI Operations
Deploy Model
{
"tool": "gcp_vertexai_deploy_model",
"arguments": {
"project": "my-project",
"region": "us-central1",
"model_name": "my-model",
"endpoint_name": "my-endpoint",
"machine_type": "n1-standard-4",
"min_replica_count": 1,
"max_replica_count": 3
}
}
Advanced Configuration
Multi-Project Support
from gcp_mcp import GCPMCPServer, ProjectConfig
# Configure multiple projects
projects = {
"production": ProjectConfig(
project_id="prod-project-id",
credentials_path="/path/to/prod-key.json",
default_region="us-central1"
),
"development": ProjectConfig(
project_id="dev-project-id",
credentials_path="/path/to/dev-key.json",
default_region="us-east1"
),
"data-warehouse": ProjectConfig(
project_id="dw-project-id",
credentials_path="/path/to/dw-key.json",
default_region="us-central1"
)
}
server = GCPMCPServer(projects=projects, default_project="production")
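Since every tool accepts a project argument (as in the examples above), individual calls can plausibly target any configured project while "production" stays the default:
# Run a query against the data-warehouse project explicitly
result = server.execute_tool("gcp_bigquery_query", {
    "project": "dw-project-id",
    "query": "SELECT 1",
    "use_legacy_sql": False
})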
Service Account Impersonation
from gcp_mcp import GCPMCPServer, ImpersonationConfig
impersonation_config = ImpersonationConfig(
target_service_account="service-account@project.iam.gserviceaccount.com",
lifetime_seconds=3600,
delegates=[],
target_scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
server = GCPMCPServer(impersonation_config=impersonation_config)
Cost Management
from gcp_mcp import GCPMCPServer, CostConfig
cost_config = CostConfig(
enable_cost_tracking=True,
budget_alert_threshold=1000.0, # USD
cost_allocation_labels=["team", "environment", "project"],
bigquery_billing_export_dataset="billing_export"
)
server = GCPMCPServer(cost_config=cost_config)
Integration Examples
See the examples/ directory for complete integration examples:
- basic_operations.py - Common GCP operations
- multi_project.py - Managing multiple GCP projects
- data_pipeline.py - Building data pipelines with BigQuery and Dataflow
- ml_deployment.py - Deploying ML models with Vertex AI
- infrastructure_as_code.py - Managing infrastructure programmatically
- cost_optimization.py - Cost analysis and optimization
Security Best Practices
- Use service accounts with minimal permissions (see the sketch after this list)
- Enable audit logging for all API calls
- Implement VPC Service Controls for data exfiltration prevention
- Use Workload Identity for GKE workloads
- Rotate service account keys regularly
- Enable organization policies for security constraints
- Use Cloud KMS for encryption key management
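As a concrete example of least-privilege credentials, a service account key can be loaded with read-only scopes using google-auth; wiring these credentials into the server depends on its API and is not shown:
from google.oauth2 import service_account

# Load a key restricted to read-only access across GCP services
credentials = service_account.Credentials.from_service_account_file(
    "/path/to/service-account-key.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform.read-only"],
)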
Error Handling
The server provides detailed error information:
# GCPError is assumed to be importable from the gcp_mcp package root,
# alongside GCPMCPServer; adjust the import to the actual export path.
from gcp_mcp import GCPError

try:
    result = server.execute_tool("gcp_compute_create_instance", {
        "name": "my-instance",
        "zone": "invalid-zone"
    })
except GCPError as e:
    print(f"GCP error: {e.error_code} - {e.message}")
    if e.error_code == "ZONE_NOT_FOUND":
        print(f"Valid zones: {e.valid_zones}")
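On top of this, a simple client-side retry with exponential backoff can wrap execute_tool; this sketch is independent of the server's own GCP_MAX_RETRIES setting:
import time

def execute_with_retry(server, tool, arguments, attempts=3, base_delay=1.0):
    # Retry transient failures with exponential backoff (1s, 2s, 4s, ...)
    for attempt in range(attempts):
        try:
            return server.execute_tool(tool, arguments)
        except GCPError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)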
Performance Optimization
- Use batch operations where available
- Enable request caching for read operations (see the sketch after this list)
- Implement regional failover for high availability
- Use Cloud CDN for static content
- Optimize BigQuery queries with partitioning and clustering
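For the caching item above, a minimal in-process read cache over execute_tool might look like the following; it keys on the tool name plus JSON-encoded arguments and should only be used for read-only tools (list/get calls):
import json

_cache = {}

def cached_execute(server, tool, arguments):
    # JSON-encode arguments so dict payloads become hashable cache keys
    key = (tool, json.dumps(arguments, sort_keys=True))
    if key not in _cache:
        _cache[key] = server.execute_tool(tool, arguments)
    return _cache[key]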
Contributing
Contributions are welcome! Please read our contributing guidelines and submit pull requests.
License
MIT License - see LICENSE file for details