MXCP
Overview
What is MXCP
MXCP is an enterprise-grade data-to-AI infrastructure designed for production environments, focusing on security, governance, and scalability.
Use cases
Use cases for MXCP include transforming data workflows, serving validated data APIs, and integrating with tools like dbt for efficient data management.
How to use
To use MXCP, install it via pip, create a project directory, initialize it, and then serve your data. You can connect it to Claude Desktop for querying.
Key features
Key features of MXCP include enterprise security with policy enforcement, a developer-friendly experience, dbt-native local data caching, production readiness with type safety and drift detection, and comprehensive data governance.
Where to use
MXCP is suitable for various fields that require secure data handling and AI integration, such as finance, healthcare, and data analytics.
Content
MXCP: Enterprise-Grade Data-to-AI Infrastructure
The MCP server built for production: Transform your data into AI-ready interfaces with enterprise security, audit trails, and policy enforcement
What Makes MXCP Different?
While other MCP servers focus on simple data access, MXCP is built for production environments where security, governance, and scalability matter:
- Enterprise Security: Policy enforcement, audit logging, OAuth authentication
- Developer Experience: Go from SQL to AI interface in under 60 seconds
- dbt Native: Cache data locally with dbt, serve instantly via MCP
- Production Ready: Type safety, drift detection, comprehensive validation
- Data Governance: Track every query, enforce access controls, mask sensitive data
60-Second Quickstart
Experience the power of MXCP in under a minute:
# 1. Install and create project (15 seconds)
pip install mxcp
mkdir my-data-api && cd my-data-api
mxcp init --bootstrap
# 2. Start serving your data (5 seconds)
mxcp serve
# 3. Connect to Claude Desktop (40 seconds)
# Add this to your Claude config:
{
  "mcpServers": {
    "my-data": {
      "command": "mxcp",
      "args": ["serve", "--transport", "stdio"],
      "cwd": "/path/to/my-data-api"
    }
  }
}
Result: You now have a type-safe, validated data API that Claude can use to query your data with full audit trails and policy enforcement.
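A bootstrapped project already contains endpoint definitions you can adapt. As a rough illustration only (the file name, tool name, and query below are hypothetical; the full definition format is shown under Key Features further down), a minimal tool definition looks like this:
# tools/hello.yml -- hypothetical minimal endpoint, for illustration only
mxcp: "1.0.0"
tool:
  name: hello
  description: "Return a greeting for the given name"
  parameters:
    - name: name
      type: string
      description: "Who to greet"
  return:
    type: string
  source:
    code: |
      -- DuckDB SQL; $name is bound from the tool parameter
      SELECT 'Hello, ' || $name || '!' AS greeting
Run mxcp validate to check types, SQL, and references before serving.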
Real-World Example: dbt + Data Caching
See how MXCP transforms data workflows with our COVID-19 example:
# Clone and run the COVID example
git clone https://github.com/raw-labs/mxcp.git
cd mxcp/examples/covid_owid
# Cache data locally with dbt (this is the magic!)
dbt run # Transforms and caches OWID data locally
# Serve cached data via MCP
mxcp serve
What just happened?
- dbt models fetch and transform COVID data from Our World in Data into DuckDB tables
- DuckDB stores the transformed data locally for lightning-fast queries
- MCP endpoints query the DuckDB tables directly (no dbt syntax needed)
- Audit logs track every query for compliance
- Policies can enforce who sees what data
Ask Claude: "Show me COVID vaccination rates in Germany vs France" and it queries the covid_data table instantly, with full audit trails.
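Behind that question is an ordinary MCP endpoint over the cached table. The sketch below conveys the shape of such an endpoint; the tool, parameter, and column names are assumptions for illustration, not the definitions shipped in the example:
# tools/vaccination_rates.yml -- illustrative sketch, names are assumed
mxcp: "1.0.0"
tool:
  name: vaccination_rates
  description: "Compare vaccination rates between two countries"
  parameters:
    - name: country_a
      type: string
      description: "First country to compare"
    - name: country_b
      type: string
      description: "Second country to compare"
  return:
    type: array
  source:
    code: |
      -- Queries the DuckDB table built by dbt; column names are assumed
      SELECT location, date, people_vaccinated_per_hundred
      FROM covid_data
      WHERE location IN ($country_a, $country_b)
      ORDER BY date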
Enterprise Features That Set Us Apart
Policy Enforcement
# Control who can access what data
policies:
  input:
    - condition: "!('hr.read' in user.permissions)"
      action: deny
      reason: "Missing HR read permission"
  output:
    - condition: "user.role != 'admin'"
      action: filter_fields
      fields: ["salary", "ssn"]
Audit Logging
# Track every query with enterprise-grade logging
mxcp log --since 1h --status error
mxcp log --tool employee_data --export-duckdb audit.db
Authentication & Authorization
- OAuth integration (GitHub, Atlassian, custom)
- Role-based access control
- Fine-grained permissions
- Session management
Architecture: Built for Production
┌──────────────────┐      ┌──────────────────┐      ┌──────────────────┐
│    LLM Client    │      │       MXCP       │      │   Data Sources   │
│  (Claude, etc)   │─────►│    (Security,    │─────►│   (DB, APIs,     │
│                  │      │  Audit, Policies)│      │   Files, dbt)    │
└──────────────────┘      └──────────────────┘      └──────────────────┘
                                    │
                                    ▼
                            ┌──────────────┐
                            │  Audit Logs  │
                            │  (JSONL/DB)  │
                            └──────────────┘
Unlike simple data connectors, MXCP provides:
- Security layer between LLMs and your data
- Audit trail for every query and result
- Policy engine for fine-grained access control
- Type system for LLM safety and validation
- Development workflow with testing and drift detection
Quick Start
# Install globally
pip install mxcp
# Install with Vault support (optional)
pip install "mxcp[vault]"
# Or develop locally
git clone https://github.com/raw-labs/mxcp.git && cd mxcp
python -m venv .venv && source .venv/bin/activate
pip install -e .
Try the included examples:
# Simple data queries
cd examples/earthquakes && mxcp serve
# Enterprise features (policies, audit, dbt)
cd examples/covid_owid && dbt run && mxcp serve
Key Features
1. Declarative Interface Definition
# tools/analyze_sales.yml
mxcp: "1.0.0"
tool:
  name: analyze_sales
  description: "Analyze sales data with automatic caching"
  parameters:
    - name: region
      type: string
      description: "Sales region to analyze"
  return:
    type: object
    properties:
      total_sales: { type: number }
      top_products: { type: array }
  source:
    code: |
      -- This queries the table created by dbt
      SELECT
        SUM(amount) as total_sales,
        array_agg(product) as top_products
      FROM sales_summary -- Table created by dbt model
      WHERE region = $region
2. dbt Integration
-- models/sales_summary.sql (dbt model)
{{ config(materialized='table') }}
SELECT
region,
product,
SUM(amount) as amount,
created_at::date as sale_date
FROM {{ source('raw', 'sales_data') }}
WHERE created_at >= current_date - interval '90 days'
GROUP BY region, product, sale_date
Why this matters: dbt creates optimized tables in DuckDB, and MXCP endpoints query them directly. The result is a clean separation of concerns, with caching, transformations, and governance built in.
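The source('raw', 'sales_data') reference in the model resolves against a dbt sources file. A minimal sketch, with the file path chosen by dbt convention and the names matching the model above:
# models/sources.yml -- declares the raw table the dbt model reads from
version: 2
sources:
  - name: raw
    tables:
      - name: sales_data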
3. Production-Ready Security
- Authentication: OAuth, API keys, session management
- Authorization: Role-based access, permission checking
- Audit: Every query logged with user context
- Policies: Dynamic data filtering and access control
- Drift Detection: Monitor schema changes across environments
Core Concepts
Tools, Resources, Prompts
Define your AI interface using MCP (Model Context Protocol) specs:
- Tools - Functions that process data and return results
- Resources - Data sources and caches
- Prompts - Templates for LLM interactions (sketched below)
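As a rough sketch of the prompt side (the field names below are assumptions modeled on the tool definition shown earlier, not the documented schema; check the docs for the exact format):
# prompts/summarize_sales.yml -- hypothetical prompt definition, schema assumed
mxcp: "1.0.0"
prompt:
  name: summarize_sales
  description: "Ask the LLM to summarize sales for a region"
  parameters:
    - name: region
      type: string
      description: "Region to summarize"
  messages:
    - role: user
      prompt: "Summarize recent sales performance for the {{ region }} region."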
Project Structure
your-project/
├── mxcp-site.yml     # Project configuration
├── tools/            # Tool definitions
├── resources/        # Data sources
├── prompts/          # LLM templates
└── models/           # dbt transformations & caches
CLI Commands
mxcp serve # Start production MCP server
mxcp init # Initialize new project
mxcp list # List all endpoints
mxcp validate # Check types, SQL, and references
mxcp test # Run endpoint tests
mxcp dbt run # Run dbt transformations
mxcp log # Query audit logs
mxcp drift-check # Check for schema changes
LLM Integration
MXCP implements the Model Context Protocol (MCP), making it compatible with:
- Claude Desktop - Native MCP support
- OpenAI-compatible tools - Via MCP adapters
- Custom integrations - Using the MCP specification
For specific setup instructions, see:
- Earthquakes Example - Complete Claude Desktop setup
- COVID + dbt Example - Advanced dbt integration
- Integration Guide - All client integrations
Documentation
Get Started:
- Quickstart - Advanced features and patterns
- Configuration - Project setup and profiles
- CLI Reference - All commands and options
Production Features:
- Authentication - OAuth and security setup
- Policy Enforcement - Access control and data filtering
- Audit Logging - Enterprise-grade execution tracking
- Drift Detection - Schema monitoring
Advanced:
- Type System - Data types and validation
- Plugins - Custom Python functions in DuckDB
- Integrations - Data sources and external tools
Contributing
We welcome contributions! See our development guide to get started.
Enterprise Support
MXCP is developed by RAW Labs for production data-to-AI workflows. For enterprise support, custom integrations, or consulting:
- Contact: [email protected]
- Website: www.raw-labs.com
Built for the modern data stack: Combines dbt's modeling power, DuckDB's performance, and enterprise-grade security into a single AI-ready platform.