MCP Explorer

MCP Runner

@Streamline-TSon · 9 months ago · MIT License
Free · Community · AI Systems
Runs and manages MCP servers

Overview

What is MCP Runner

mcp-runner is a Rust library for running and managing Model Context Protocol (MCP) servers locally, providing a complete solution for starting, configuring, and interacting with them.

Use cases

Use cases for mcp-runner include setting up local development environments for testing MCP servers, integrating MCP servers into applications for model management, and creating tools that leverage the capabilities of MCP servers.

How to use

To use mcp-runner, add it as a dependency in your Cargo.toml file. You can then create an instance of McpRunner, start the servers, and interact with them using JSON-RPC for communication.

Key features

Key features of mcp-runner include the ability to start and manage multiple MCP server processes, configure them through a unified interface, communicate using JSON-RPC, list and call tools exposed by the servers, access resources, and proxy Server-Sent Events (SSE).

Where to use

mcp-runner can be used in various fields such as software development, data processing, and any application that requires interaction with MCP servers for model management and context handling.

Content

MCP Runner

A Rust library for running and interacting with Model Context Protocol (MCP) servers locally.

Crates.io
Documentation

Overview

MCP Runner provides a complete solution for managing Model Context Protocol servers in Rust applications. It enables:

  • Starting and managing MCP server processes
  • Configuring multiple servers through a unified interface
  • Communicating with MCP servers using JSON-RPC
  • Listing and calling tools exposed by MCP servers
  • Accessing resources provided by MCP servers
  • Proxying Server-Sent Events (SSE) to enable clients to connect to MCP servers

Installation

Add this to your Cargo.toml:

[dependencies]
mcp-runner = "0.3.1"

Quick Start

Here’s a simple example of using MCP Runner to start a server and call a tool:

use mcp_runner::{McpRunner, error::Result};
use serde::{Deserialize, Serialize};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<()> {
    // Create runner from config file
    let mut runner = McpRunner::from_config_file("config.json")?;

    // Start all servers and the SSE proxy if configured
    let (server_ids, proxy_started) = runner.start_all_with_proxy().await;
    let server_ids = server_ids?;

    if proxy_started {
        println!("SSE proxy started successfully");
    }

    // Get client for interacting with a specific server
    let server_id = runner.get_server_id("fetch")?;
    let client = runner.get_client(server_id)?;

    // Initialize the client
    client.initialize().await?;

    // List available tools
    let tools = client.list_tools().await?;
    println!("Available tools:");
    for tool in tools {
        println!("  - {}: {}", tool.name, tool.description);
    }

    // Call the fetch tool with structured input
    let fetch_result = client.call_tool("fetch", &json!({
        "url": "https://modelcontextprotocol.io"
    })).await?;
    println!("Fetch result: {}", fetch_result);

    // Stop the server when done
    runner.stop_server(server_id).await?;

    Ok(())
}

Observability

This library uses the tracing crate for logging and diagnostics. To enable logging, ensure you have a tracing_subscriber configured in your application and set the RUST_LOG environment variable. For example:

# Show info level logs for all crates
RUST_LOG=info cargo run --example simple_client

# Show trace level logs specifically for mcp_runner
RUST_LOG=mcp_runner=trace cargo run --example simple_client

Configuration

MCP Runner uses JSON configuration to define MCP servers and optional SSE proxy settings.

{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": [
        "mcp-server-fetch"
      ]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/files"
      ]
    }
  },
  "sseProxy": {
    "address": "127.0.0.1",
    "port": 3000,
    "allowedServers": [
      "fetch",
      "filesystem"
    ],
    "authenticate": {
      "bearer": {
        "token": "your-secure-token"
      }
    }
  }
}

You can load configurations in three different ways:

1. Load from a file

use mcp_runner::McpRunner;

let runner = McpRunner::from_config_file("config.json")?;

2. Load from a JSON string

use mcp_runner::McpRunner;

let config_json = r#"{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}"#;
let runner = McpRunner::from_config_str(config_json)?;

3. Create programmatically

use mcp_runner::{McpRunner, config::{Config, ServerConfig}};
use std::collections::HashMap;

let mut servers = HashMap::new();

let server_config = ServerConfig {
    command: "uvx".to_string(),
    args: vec!["mcp-server-fetch".to_string()],
    env: HashMap::new(),
};

servers.insert("fetch".to_string(), server_config);
let config = Config { mcp_servers: servers };

// Initialize the runner
let runner = McpRunner::new(config);

Error Handling

MCP Runner uses a custom error type that covers:

  • Configuration errors
  • Server lifecycle errors
  • Communication errors
  • Serialization errors

use mcp_runner::error::Error;

match result {
    Ok(value) => println!("Success: {:?}", value),
    Err(Error::ServerNotFound(name)) => println!("Server not found: {}", name),
    Err(Error::Communication(msg)) => println!("Communication error: {}", msg),
    Err(e) => println!("Other error: {}", e),
}

Core Components

McpRunner

The main entry point for managing MCP servers:

let mut runner = McpRunner::from_config_file("config.json")?;
let server_ids = runner.start_all_servers().await?;

McpClient

For interacting with MCP servers:

let client = runner.get_client(server_id)?;
client.initialize().await?;

// Call tools
let result = client.call_tool("fetch", &json!({
    "url": "https://example.com",
})).await?;

SSE Proxy

The SSE (Server-Sent Events) proxy allows clients to connect to MCP servers through HTTP and receive real-time updates using the Server-Sent Events protocol. Built on Actix Web, it provides a unified JSON-RPC over HTTP interface with high performance, reliability, and maintainability.

Features

  • Unified JSON-RPC API: Single endpoint for all MCP server interactions via JSON-RPC
  • Authentication: Optional Bearer token authentication for secure access
  • Server Access Control: Restrict which servers can be accessed through the proxy
  • Event Streaming: Real-time updates from MCP servers to clients via SSE
  • Cross-Origin Support: Built-in CORS support for web browser clients
  • JSON-RPC Compatibility: Full support for JSON-RPC 2.0 messages in both directions
  • Efficient Event Broadcasting: Uses Tokio broadcast channels for efficient event distribution

Starting the Proxy

You can start the SSE proxy automatically when starting your servers:

// Start all servers and the proxy if configured
let (server_ids, proxy_started) = runner.start_all_with_proxy().await;
let server_ids = server_ids?;

if proxy_started {
    println!("SSE proxy started successfully");
}

Or manually start it after configuring your servers:

if runner.is_sse_proxy_configured() {
    runner.start_sse_proxy().await?;
    println!("SSE proxy started manually");
}

Proxy Configuration

Configure the SSE proxy via the sseProxy section of your configuration file, as shown in the Configuration example above.
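
For reference, here is the sseProxy fragment from the full configuration example earlier: address and port control where the proxy listens, allowedServers restricts which servers it will forward to, and authenticate.bearer.token enables Bearer-token authentication.

```json
"sseProxy": {
  "address": "127.0.0.1",
  "port": 3000,
  "allowedServers": ["fetch", "filesystem"],
  "authenticate": {
    "bearer": {
      "token": "your-secure-token"
    }
  }
}
```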

Proxy API Endpoints

The SSE proxy exposes the following HTTP endpoints:

  • /sse (GET): SSE event stream endpoint for receiving real-time updates (sends endpoint and message events)
  • /sse/messages (POST): JSON-RPC endpoint for sending requests to MCP servers (supports initialize, tools/list, tools/call, ping)

Examples

Check the examples/ directory for more usage examples:

  • simple_client.rs: Basic usage of the client API

    # Run with info level logging
    RUST_LOG=info cargo run --example simple_client
    
  • sse_proxy.rs: Example of using the SSE proxy to expose MCP servers to web clients

    # Run with info level logging
    RUST_LOG=info cargo run --example sse_proxy
    

    This example uses the config in examples/sse_config.json to start servers and an SSE proxy,
    allowing web clients to connect and interact with MCP servers through HTTP and SSE.

    JavaScript client example:

    // Connect to the event stream
    const eventSource = new EventSource('http://localhost:3000/sse');
    
    // First you'll receive the endpoint information
    eventSource.addEventListener('endpoint', (event) => {
      console.log('Received endpoint path:', event.data);
    });
    
    // Then you'll receive JSON-RPC responses
    eventSource.addEventListener('message', (event) => {
      const response = JSON.parse(event.data);
      console.log('Received JSON-RPC response:', response);
    });
    
    // Make a tool call
    async function callTool() {
      const response = await fetch('http://localhost:3000/sse/messages', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': 'Bearer your-secure-token'
        },
        body: JSON.stringify({
          jsonrpc: '2.0',
          id: 1,
          method: 'tools/call',
          params: {
            server: 'fetch',
            tool: 'fetch',
            arguments: {
              url: 'https://modelcontextprotocol.io'
            }
          }
        })
      });
      const result = await response.json();
      console.log('Tool call initiated:', result);
      // Actual response will come through the SSE event stream
    }
    

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the terms in the LICENSE file.
