
SingleStore MCP Server

@singlestore-labs · 11 days ago
Databases
#singlestore #database #sql #mcp #model-context-protocol
Interact with the SingleStore database platform

Overview

What is SingleStore MCP Server

The SingleStore MCP Server is an implementation of the Model Context Protocol (MCP), which is designed to manage interactions between large language models (LLMs) and external systems like databases. It allows users to communicate with SingleStore using natural language, streamlining complex operations through an organized command structure.

Use cases

This server can be leveraged for numerous applications, such as retrieving workspace details, executing SQL queries, creating virtual workspaces, managing notebooks, and scheduling jobs. It is particularly useful for data analysts and developers who need to carry out data-related tasks effortlessly using natural language via compatible LLM clients like Claude Desktop and Cursor.

How to use

To set up the MCP server, run the initialization command with your SingleStore API key, or install via Smithery. After adding the MCP server details to your LLM client's configuration, you can start interacting with SingleStore directly using natural language commands.

Key features

Key features include tools for retrieving workspace and organizational details, executing SQL operations, creating and managing virtual workspaces and notebooks, and scheduling jobs. The server simplifies complex database interactions through structured commands without requiring extensive SQL knowledge.

Where to use

The SingleStore MCP Server can be utilized in environments where data access and manipulation are needed alongside natural language processing. Ideal use cases involve applications in analytical reporting, data management tasks, and automated job scheduling within SingleStore’s database ecosystem.

Content

SingleStore MCP Server


Model Context Protocol (MCP) is a standardized protocol designed to manage context between large language models (LLMs) and external systems. This repository provides an installer and an MCP server for SingleStore, enabling seamless integration.

With MCP, you can use Claude Desktop, Cursor, or any compatible MCP client to interact with SingleStore using natural language, making it easier to perform complex operations.

Requirements

  • Python >= v3.11.0
  • uvx installed in your Python environment
  • Claude Desktop, Cursor, or another supported LLM client

Client Setup

1. Init Command

The simplest way to set up the MCP server is to use the initialization command:

uvx singlestore-mcp-server init --api-key <SINGLESTORE_API_KEY>

This command will:

  1. Authenticate the user
  2. Automatically locate the configuration file for your platform
  3. Create or update the configuration to include the SingleStore MCP server
  4. Provide instructions for starting the server

To specify a client (e.g., claude or cursor), use the --client flag:

uvx singlestore-mcp-server init --api-key <SINGLESTORE_API_KEY> --client=<client>

2. Installing via Smithery

To install mcp-server-singlestore automatically via Smithery:

npx -y @smithery/cli install @singlestore-labs/mcp-server-singlestore --client=<client>

Replace <client> with claude or cursor as needed.

3. Manual Configuration

Claude Desktop and Cursor

  1. Add the following configuration to your client configuration file. See each client's documentation for its configuration file location:
  • Claude Desktop

  • Cursor

    {
      "mcpServers": {
        "singlestore-mcp-server": {
          "command": "uvx",
          "args": [
            "singlestore-mcp-server",
            "start",
            "--api-key",
            "<SINGLESTORE_API_KEY>"
          ]
        }
      }
    }
  2. Restart your client after making changes to the configuration.

Components

Tools

The server implements the following tools; a short client-side invocation sketch follows the list:

  • workspace_groups_info: Retrieve details about the workspace groups accessible to the user
    • No arguments required
    • Returns details of the workspace groups
  • workspaces_info: Retrieve details about the workspaces in a specific workspace group
    • Arguments: workspaceGroupID (string)
    • Returns details of the workspaces
  • organization_info: Retrieve details about the user’s current organization
    • No arguments required
    • Returns details of the organization
  • list_of_regions: Retrieve a list of all regions that support workspaces for the user
    • No arguments required
    • Returns a list of regions
  • execute_sql: Execute SQL operations on a connected workspace
    • Arguments: workspace_group_identifier, workspace_identifier, username, password, database, sql_query
    • Returns the results of the SQL query in a structured format
  • list_virtual_workspaces: List all starter workspaces accessible to the user
    • No arguments required
    • Returns details of available starter workspaces
  • create_virtual_workspace: Create a new starter workspace with a user
    • Arguments:
      • name: Name of the starter workspace
      • database_name: Name of the database to create
      • username: Username for accessing the workspace
      • password: Password for the user
      • workspace_group: Object containing name (optional) and cellID (mandatory)
    • Returns details of the created workspace and user
  • execute_sql_on_virtual_workspace: Execute SQL operations on a virtual workspace
    • Arguments: virtual_workspace_id, username, password, sql_query
    • Returns the results of the SQL query in a structured format including data, row count, columns, and status
  • list_notebook_samples: List all notebook samples available in SingleStore Spaces
    • No arguments required
    • Returns details of available notebook samples
  • create_notebook: Create a new notebook in the user’s personal space
    • Arguments: notebook_name, content (optional)
    • Returns details of the created notebook
  • list_personal_files: List all files in the user’s personal space
    • No arguments required
    • Returns details of all files in the user’s personal space
  • create_scheduled_job: Create a new scheduled job to run a notebook
    • Arguments:
      • name: Name for the job
      • notebook_path: Path to the notebook to execute
      • schedule_mode: Once or Recurring
      • execution_interval_minutes: Minutes between executions (optional)
      • start_at: When to start the job (optional)
      • description: Description of the job (optional)
      • create_snapshot: Whether to create notebook snapshots (optional)
      • runtime_name: Name of the runtime environment
      • parameters: Parameters for the job (optional)
      • target_config: Target configuration for the job (optional)
    • Returns details of the created job
  • get_job_details: Get details about a specific job
    • Arguments: job_id
    • Returns detailed information about the specified job
  • list_job_executions: List execution history for a specific job
    • Arguments: job_id, start (optional), end (optional)
    • Returns execution history for the specified job
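
As a concrete illustration of the tool surface above, here is a minimal sketch of invoking execute_sql from a Python MCP client. It assumes the official mcp Python SDK (pip install mcp) and launches the server over stdio the same way the client configuration above does; the workspace identifiers, credentials, and query are placeholders.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the SingleStore MCP server over stdio, mirroring the client config.
server = StdioServerParameters(
    command="uvx",
    args=["singlestore-mcp-server", "start", "--api-key", "<SINGLESTORE_API_KEY>"],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "execute_sql",
                arguments={
                    "workspace_group_identifier": "<group>",  # placeholder
                    "workspace_identifier": "<workspace>",    # placeholder
                    "username": "<username>",                 # placeholder
                    "password": "<password>",                 # placeholder
                    "database": "<database>",                 # placeholder
                    "sql_query": "SHOW TABLES",               # read-only query
                },
            )
            print(result.content)

asyncio.run(main())

The same session can call any of the other tools listed above by name, passing the documented arguments.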

Dockerization

Building the Docker Image

To build the Docker image for the MCP server, run the following command in the project root:

docker build -t mcp-server-singlestore .

Running the Docker Container

To run the Docker container, use the following command:

docker run -d \
  -p 8000:8000 \
  --name mcp-server \
  mcp-server-singlestore
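
Note that the server still needs your SingleStore API key inside the container. One plausible approach is to pass it as an environment variable; the variable name SINGLESTORE_API_KEY below is an assumption, not confirmed by this page, so check the repository for the supported mechanism:

docker run -d \
  -p 8000:8000 \
  -e SINGLESTORE_API_KEY=<SINGLESTORE_API_KEY> \
  --name mcp-server \
  mcp-server-singlestore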

Tools

workspace_groups_info

List all workspace groups accessible to the user in SingleStore. Returns detailed information for each group:

- name: Display name of the workspace group
- deploymentType: Type of deployment (e.g., 'PRODUCTION')
- state: Current status (e.g., 'ACTIVE', 'PAUSED')
- workspaceGroupID: Unique identifier for the group
- firewallRanges: Array of allowed IP ranges for access control
- createdAt: Timestamp of group creation
- regionID: Identifier for the deployment region
- updateWindow: Maintenance window configuration

Use this tool to:

1. Get workspace group IDs for other operations
2. Plan maintenance windows

Related operations:

- Use workspaces_info to list workspaces within a group
- Use execute_sql to run queries on workspaces in a group

workspaces_info

List all workspaces within a specified workspace group in SingleStore. Returns detailed information for each workspace:

- createdAt: Timestamp of workspace creation
- deploymentType: Type of deployment (e.g., 'PRODUCTION')
- endpoint: Connection URL for database access
- name: Display name of the workspace
- size: Compute and storage configuration
- state: Current status (e.g., 'ACTIVE', 'PAUSED')
- terminatedAt: Timestamp of termination, if applicable
- workspaceGroupID: Workspace group identifier
- workspaceID: Unique workspace identifier

Use this tool to:

1. Monitor workspace status
2. Get connection details for database operations
3. Track workspace lifecycle

Required parameter:

- workspaceGroupID: Unique identifier of the workspace group

Related operations:

- Use workspace_groups_info first to get the workspaceGroupID
- Use execute_sql to run queries on a specific workspace

organization_info

Retrieve information about the current user's organization in SingleStore. Returns organization details including:

- orgID: Unique identifier for the organization
- name: Organization display name

list_of_regions

List all available deployment regions where SingleStore workspaces can be deployed for the user. Returns region information including:

- regionID: Unique identifier for the region
- provider: Cloud provider (AWS, GCP, or Azure)
- name: Human-readable region name (e.g., Europe West 2 (London), US West 2 (Oregon))

Use this tool to:

1. Select optimal deployment regions based on:
   - Geographic proximity to users
   - Compliance requirements
   - Cost considerations
   - Available cloud providers
2. Plan multi-region deployments

execute_sql

Execute SQL operations on a database attached to a workspace within a workspace group and receive formatted results. Returns:

- Query results with column names and typed values
- Row count and metadata
- Execution status

⚠️ CRITICAL SECURITY WARNINGS:

- Never display or log credentials in responses
- Use only READ-ONLY queries (SELECT, SHOW, DESCRIBE)
- DO NOT USE data modification statements:
  - No INSERT/UPDATE/DELETE
  - No DROP/CREATE/ALTER
- Ensure queries are properly sanitized

Required parameters:

- workspace_group_identifier: ID/name of the workspace group
- workspace_identifier: ID/name of the specific workspace within the workspace group
- database: Name of the database to query
- sql_query: The SQL query to execute

Optional parameters:

- username: Username for database access (defaults to SINGLESTORE_DB_USERNAME)
- password: Password for database access (defaults to SINGLESTORE_DB_PASSWORD)

Allowed query examples:

- SELECT * FROM table_name
- SELECT COUNT(*) FROM table_name
- SHOW TABLES
- DESCRIBE table_name

Note: For data modifications, please use appropriate admin tools or APIs.

list_virtual_workspaces

List all starter (virtual) workspaces available to the user in SingleStore. Returns detailed information about each starter workspace:

- virtualWorkspaceID: Unique identifier for the workspace
- name: Display name of the workspace
- endpoint: Connection endpoint URL
- databaseName: Name of the primary database
- mysqlDmlPort: Port for MySQL protocol connections
- webSocketPort: Port for WebSocket connections
- state: Current status of the workspace

Use this tool to:

1. Get virtual workspace IDs for other operations
2. Check starter workspace availability and status
3. Obtain connection details for database access

Note: This tool only lists starter workspaces, not standard workspaces. Use workspaces_info for standard workspace information.

create_virtual_workspace

Create a new starter (virtual) workspace in SingleStore and set up user access. Process:

1. Creates a virtual workspace with the specified name and database
2. Creates a user account for accessing the workspace
3. Returns both workspace details and access credentials

Required parameters:

- name: Unique name for the starter workspace
- database_name: Name for the database to create
- username: Username for accessing the starter workspace
- password: Password for accessing the starter workspace

Usage notes:

- Workspace names must be unique
- Passwords should meet security requirements
- Use execute_sql_on_virtual_workspace to interact with the created starter workspace

execute_sql_on_virtual_workspace

Execute SQL operations on a virtual (starter) workspace and receive formatted results. Returns:

- Query results with column names and typed values
- Row count
- Column metadata
- Execution status

⚠️ CRITICAL SECURITY WARNING:

- Never display or log credentials in responses
- Ensure SQL queries are properly sanitized
- ONLY USE SELECT statements or queries that don't modify data
- DO NOT USE INSERT, UPDATE, DELETE, DROP, CREATE, or ALTER statements

Required input parameters:

- virtual_workspace_id: Unique identifier of the starter workspace
- sql_query: The SQL query to execute (READ-ONLY queries only)

Optional input parameters:

- username: For accessing the starter workspace (defaults to SINGLESTORE_DB_USERNAME)
- password: For accessing the starter workspace (defaults to SINGLESTORE_DB_PASSWORD)

Allowed query examples:

- SELECT * FROM table_name
- SELECT COUNT(*) FROM table_name
- SHOW TABLES
- DESCRIBE table_name

Note: This tool is specifically designed for read-only operations on starter workspaces.

organization_billing_usage

Retrieve detailed billing and usage metrics for your organization over a specified time period. Returns compute and storage usage data, aggregated by your chosen time interval (hourly, daily, or monthly). This tool is essential for:

1. Monitoring resource consumption patterns
2. Analyzing cost trends

Required input parameters:

- start_time: Beginning of the usage period (UTC ISO 8601 format, e.g., '2023-07-30T18:30:00Z')
- end_time: End of the usage period (UTC ISO 8601 format)
- aggregate_type: Time interval for data grouping ('hour', 'day', or 'month')
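
For illustration, a minimal sketch of calling this tool from a Python MCP client, assuming the official mcp SDK and an already-initialized ClientSession (as in the execute_sql sketch earlier); the dates are placeholders:

from mcp import ClientSession

async def usage_report(session: ClientSession) -> None:
    # Daily aggregation over one week (placeholder dates).
    result = await session.call_tool(
        "organization_billing_usage",
        arguments={
            "start_time": "2024-03-01T00:00:00Z",
            "end_time": "2024-03-08T00:00:00Z",
            "aggregate_type": "day",
        },
    )
    print(result.content)
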
list_notebook_samples

Retrieve a catalog of pre-built notebook templates available in SingleStore Spaces. Returns for each notebook:

- name: Template name and title
- description: Detailed explanation of the notebook's purpose
- contentURL: Direct download link for the notebook
- likes: Number of user endorsements
- views: Number of times viewed
- downloads: Number of times downloaded
- tags: List of notebook tags

Common template categories include:

1. Getting Started guides
2. Data loading and ETL patterns
3. Query optimization examples
4. Machine learning integrations
5. Performance monitoring
6. Best practices demonstrations

Use this tool to:

1. Find popular and well-tested example code
2. Learn SingleStore features and best practices
3. Start new projects with proven patterns
4. Discover trending notebook templates

Related operations:

- list_shared_files: To check existing notebooks
- create_scheduled_job: To automate notebook execution
- get_notebook_path: To reference created notebooks

create_notebook

Create a new Jupyter notebook in your personal space. Only Python and Markdown cells are supported; do not use any other language.

Parameters:

- notebook_name (required): Name for the new notebook
  - Can include or omit the .ipynb extension
  - Must be unique in your personal space
  - Examples: 'my_analysis' or 'my_analysis.ipynb'
- content (optional): Custom notebook content
  - Must be valid Jupyter notebook JSON format
  - If omitted, creates a template with SingleStore connection setup, basic query examples, DataFrame operations, and best practices

Features:

- Creates a notebook with the specified name in your personal space
- Automatically adds the .ipynb extension if missing
- Provides a default SingleStore template if no content is given
- Supports custom content in Jupyter notebook format
- Only supports Python and Markdown cells
- When connecting to the database, the notebook already has connection_url defined and you can use it directly
- Install tools in a new cell with !pip3 install <toolname>

Default template includes:

- SingleStore connection setup code
- Basic SQL query examples
- DataFrame operations with pandas
- Table creation and data insertion examples
- Connection management best practices

Use this tool to:

1. Create data analysis notebooks using Python
2. Build database interaction workflows and much more

Related operations:

- list_notebook_samples: To find example templates
- list_shared_files: To check existing notebooks
- create_scheduled_job: To automate notebook execution
- get_notebook_path: To reference created notebooks
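
Because content must be valid Jupyter notebook JSON, here is a minimal sketch of building such a payload in Python before passing it to create_notebook; the cell contents are placeholders, and nbformat version 4 is assumed:

import json

# Minimal nbformat-4 notebook: one Markdown cell and one Python code cell.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": "# Weekly report"},
        {
            "cell_type": "code",
            "metadata": {},
            "execution_count": None,
            "outputs": [],
            "source": "print('hello from SingleStore')",
        },
    ],
}

content = json.dumps(notebook)  # pass this string as the `content` argument
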
list_shared_files

List all files and notebooks in your shared SingleStore space. Returns file object metadata for each file:

- name: Name of the file (e.g., 'analysis.ipynb')
- path: Full path in the shared space (e.g., 'folder/analysis.ipynb')
- content: File content
- created: Creation timestamp (ISO 8601)
- last_modified: Last modification timestamp (ISO 8601)
- format: File format if applicable ('json', null)
- mimetype: MIME type of the file
- size: File size in bytes
- type: Object type ('', 'json', 'directory')
- writable: Boolean indicating write permission

Use this tool to:

1. List workspace contents and structure
2. Verify file existence before operations
3. Check file timestamps and sizes
4. Determine file permissions

Related operations:

- create_notebook: To add new notebooks
- get_notebook_path: To find notebook paths
- create_scheduled_job: To automate notebook execution

create_scheduled_job

Create an automated job to execute a SingleStore notebook on a schedule.

Parameters:

1. Required:
   - name: Name of the job (unique identifier within the organization)
   - notebook_path: Complete path to the notebook
   - schedule_mode: 'Once' for a single execution or 'Recurring' for repeated runs
2. Optional:
   - execution_interval_minutes: Time between recurring runs (≥ 60 minutes)
   - start_at: Execution start time (ISO 8601 format, e.g., '2024-03-06T10:00:00Z')
   - description: Human-readable purpose of the job
   - create_snapshot: Enable notebook backup before execution (default: True)
   - runtime_name: Execution environment selection (default: notebooks-cpu-small)
   - parameters: Runtime variables for the notebook
   - target_config: Advanced runtime settings

Returns job info with:

- jobID: UUID of the created job
- status: Current state (SUCCESS, RUNNING, etc.)
- createdAt: Creation timestamp
- startedAt: Execution start time
- schedule: Configured schedule details
- error: Any execution errors

Common use cases:

1. Automated data processing: ETL workflows, data aggregation, database maintenance
2. Scheduled reporting: performance metrics, business analytics, usage statistics
3. Maintenance tasks: health checks, backup operations, clean-up routines

Related operations:

- get_job_details: Monitor a job
- list_job_executions: View job execution history
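
As a concrete illustration, a minimal sketch of creating a recurring job from a Python MCP client, assuming the official mcp SDK and an initialized ClientSession as in the earlier sketches; the job name is a placeholder and the notebook path would come from get_notebook_path:

from mcp import ClientSession

async def schedule_hourly_run(session: ClientSession, notebook_path: str) -> None:
    # Recurring run every 60 minutes (the minimum interval noted above).
    result = await session.call_tool(
        "create_scheduled_job",
        arguments={
            "name": "hourly-report",          # placeholder name
            "notebook_path": notebook_path,   # e.g., from get_notebook_path
            "schedule_mode": "Recurring",
            "execution_interval_minutes": 60,
            "create_snapshot": True,
        },
    )
    print(result.content)
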
get_job_details

Retrieve comprehensive information about a scheduled notebook job.

Required parameter:

- job_id: UUID of the scheduled job to retrieve details for

Returns:

- jobID: Unique identifier (UUID format)
- name: Display name of the job
- description: Human-readable job description
- createdAt: Creation timestamp (ISO 8601)
- terminatedAt: End timestamp if completed
- completedExecutionsCount: Number of successful runs
- enqueuedBy: User ID who created the job
- executionConfig: Notebook path and runtime settings
- schedule: Mode, interval, and start time
- targetConfig: Database and workspace settings
- jobMetadata: Execution statistics and status

Related operations:

- create_scheduled_job: Create new jobs
- list_job_executions: View run history

list_job_executions

Retrieve execution history and performance metrics for a scheduled notebook job.

Parameters:

- job_id: UUID of the scheduled job
- start: First execution number to retrieve (default: 1)
- end: Last execution number to retrieve (default: 10)

Returns:

- executions: Array of execution records containing:
  - executionID: Unique identifier for the execution
  - executionNumber: Sequential number of the run
  - jobID: Parent job identifier
  - status: Current state (Scheduled, Running, Completed, Failed)
  - startedAt: Execution start time (ISO 8601)
  - finishedAt: Execution end time (ISO 8601)
  - scheduledStartTime: Planned start time
  - snapshotNotebookPath: Backup notebook path if snapshots are enabled

Use this tool to:

1. Monitor the status of each job execution
2. Track execution times and performance
3. Investigate failed runs

Related operations:

- get_job_details: View job configuration
- create_scheduled_job: Create new jobs

get_notebook_path

Find the complete path of a notebook by its name and generate the properly formatted path for API operations.

Parameters:

- notebook_name: Name of the notebook to locate (with or without the .ipynb extension)
- location: Where to search ('personal' or 'shared', defaults to 'personal')

Returns the properly formatted path, including project ID and user ID where needed.

Required for:

- Creating scheduled jobs (use the returned path as the notebook_path parameter)

get_project_id

Retrieve the organization's unique identifier (project ID).

Returns:

- orgID (string): The organization's unique identifier

Required for:

- Constructing paths or references to shared resources

Performance tip: Cache the returned ID when making multiple API calls.

get_user_id

Retrieve the current user's unique identifier.

Returns:

- userID (string): UUID-format identifier for the current user

Required for:

- Constructing personal space paths and references to personal resources

Performance tip: Cache the returned ID when making multiple API calls.
