
Tailscale Network MCP Server

A distributed model context server for AI agents, leveraging Tailscale for secure networking.

Overview

What is the Tailscale Network MCP Server?

The tailscale-network-mcp-server is a distributed model context server designed for AI agents, utilizing Tailscale for secure networking. It allows for seamless communication across various environments while maintaining a zero-trust security model.

Use cases

Use cases include managing AI agent interactions, providing low-latency access to context data across regions, and ensuring secure communication between distributed components in various environments.

How to use

To use tailscale-network-mcp-server, clone the repository, create a .env file containing your Tailscale auth key, and start the Docker containers with Docker Compose. Once the containers are running, you can access the AI Agent Simulator at http://localhost:8080.

Key features

Key features include secure context access via Tailscale, distributed storage across multiple environments, edge caching for reduced latency, real-time updates, context versioning, and a zero-trust networking model.

Where to use

tailscale-network-mcp-server can be used in fields such as AI development, distributed systems, cloud computing, and edge computing, where secure and efficient context management is essential.

Content

Tailscale Network-Powered Model Context Server

A distributed model context server for AI agents, leveraging Tailscale for secure networking.

Architecture

This project implements a distributed model context server with the following components:

  • Central Context Authority: The primary context server that handles versioning and consistency
  • Regional Context Servers: Servers deployed in different regions/environments for lower latency access
  • Edge Context Caches: Local caching servers for frequently accessed context
  • AI Agent Simulator: A simulated AI agent that interacts with the context server

All components are connected via a secure Tailscale network, allowing them to communicate seamlessly across different environments (cloud, on-premise, edge) while maintaining zero-trust security.

Architecture Diagram (not reproduced in this text rendering)

Features

  • Secure Context Access: All context access is authenticated and encrypted via Tailscale
  • Distributed Storage: Context can be stored and accessed across multiple environments
  • Caching: Edge caching for frequently accessed context to reduce latency
  • Real-time Updates: Changes to context are propagated in real-time to relevant servers
  • Context Versioning: Track changes to context over time
  • Zero-Trust Networking: Tailscale provides secure networking without exposing services to the public internet

Prerequisites

  • Git
  • Docker and Docker Compose (for the containerized setup)
  • A Tailscale account and an auth key (tskey-auth-…)
  • Node.js and npm (only needed to run components locally)

Setup

  1. Clone this repository:

    git clone https://github.com/yourusername/tailscale-model-context-server.git
    cd tailscale-model-context-server
    
  2. Create a .env file with your Tailscale auth key:

    TAILSCALE_AUTH_KEY=tskey-auth-xxxxxxxxxxxxxx
    
  3. Start the containers:

    docker-compose up -d
    
  4. Access the AI Agent Simulator:

    http://localhost:8080
    

System Components

Context Server

The context server is responsible for storing and retrieving context data. It has three deployment modes:

  • Central: The main authority for context data
  • Regional: Regional servers for lower latency access
  • Cache: Edge caches for frequently accessed context
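
A read for a given context falls through these tiers: an edge cache answers if it holds the context, otherwise the request falls back to a regional server and finally to the central authority. Below is a minimal TypeScript sketch of that fallback logic; the hostnames, port, and getContext helper are illustrative assumptions, not code from this repository.

    // Illustrative sketch of the tiered read path: edge cache -> regional -> central.
    // Hostnames and port are hypothetical MagicDNS names, not from this repo.
    const TIERS = [
      "http://edge-cache-1:3000",
      "http://regional-us-east:3000",
      "http://central-context:3000",
    ];

    async function getContext(contextId: string): Promise<unknown | null> {
      for (const base of TIERS) {
        try {
          const res = await fetch(`${base}/contexts/${contextId}`);
          if (res.ok) return await res.json(); // served from this tier
          // On a miss (e.g. 404), fall through to the next tier.
        } catch {
          // Tier unreachable; fall through to the next tier.
        }
      }
      return null; // not found at any tier
    }

A production cache would presumably also write back what it fetches from upstream; the sketch shows only the lookup order.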

Tailscale Integration

The Tailscale integration handles secure networking between components:

  • Automatic discovery of other context servers in the tailnet
  • Authentication and encryption of all traffic
  • NAT traversal for communication across different networks
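
Because every component joins the same tailnet, peer discovery can be as simple as reading the Tailscale CLI's status output. The TypeScript sketch below shells out to tailscale status --json and filters peers by hostname; verify the JSON field names against your installed CLI version, and note that the "context-" hostname prefix is an assumed naming convention used only for illustration.

    // Sketch: discover peer context servers by reading the Tailscale CLI's
    // JSON status output. Verify field names (Peer, HostName) against your
    // installed CLI version; the "context-" prefix is an assumed convention.
    import { execFile } from "node:child_process";
    import { promisify } from "node:util";

    const exec = promisify(execFile);

    async function discoverContextServers(): Promise<string[]> {
      const { stdout } = await exec("tailscale", ["status", "--json"]);
      const status = JSON.parse(stdout);
      const peers = Object.values(status.Peer ?? {}) as Array<{ HostName?: string }>;
      return peers
        .map((peer) => peer.HostName ?? "")
        .filter((name) => name.startsWith("context-"));
    }

    discoverContextServers().then((servers) =>
      console.log("context servers on tailnet:", servers),
    );

Once discovered, peers are plain HTTP targets: Tailscale handles encryption, authentication, and NAT traversal below the application layer, which is why the sketches here need no TLS setup or bearer tokens of their own.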

AI Agent

The AI agent simulator demonstrates how an agent would interact with the context server:

  • Create and manage conversation contexts
  • Store and retrieve context data
  • Process context for simulated AI tasks

Deployment Options

Local Development

For local development, you can run the components individually:

# Start the central server
npm run start:central

# Start a regional server
npm run start:regional

# Start an edge cache
npm run start:cache

# Start the agent simulator
cd agent-simulator
npm run start

Docker Deployment

Use Docker Compose to start the entire system:

docker-compose up -d

Cloud Deployment

For production deployment, you can use:

  • AWS: Deploy using ECS or EKS
  • GCP: Deploy using GKE
  • Azure: Deploy using AKS
  • Hybrid: Deploy across multiple environments, connected via Tailscale

API Documentation

Context Server API

  • GET /contexts/:contextId - Get a context by ID
  • PUT /contexts/:contextId - Create or update a context
  • DELETE /contexts/:contextId - Delete a context
  • GET /contexts - List available contexts
  • GET /contexts/:contextId/metadata - Get context metadata
  • GET /contexts/:contextId/stream - Stream updates for a context
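
As a quick illustration, here is a hedged TypeScript sketch of a client round-trip over these endpoints; the base URL, port, and payload shapes are assumptions, not the server's documented schema.

    // Assumed base URL/port; adjust to your deployment.
    const BASE = "http://localhost:3000";

    async function demo(): Promise<void> {
      // Create or update a context (payload shape assumed)
      await fetch(`${BASE}/contexts/conv-42`, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages: [{ role: "user", content: "Hello" }] }),
      });

      // Read it back, along with its metadata (e.g. version information)
      const ctx = await (await fetch(`${BASE}/contexts/conv-42`)).json();
      const meta = await (await fetch(`${BASE}/contexts/conv-42/metadata`)).json();
      console.log({ ctx, meta });

      // Subscribe to real-time updates; the transport is assumed to be a
      // streamed HTTP response body.
      const res = await fetch(`${BASE}/contexts/conv-42/stream`);
      for await (const chunk of res.body as unknown as AsyncIterable<Uint8Array>) {
        console.log("update:", new TextDecoder().decode(chunk));
      }
    }

    demo();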

AI Agent API

  • POST /conversations - Create a new conversation
  • GET /conversations - List conversations
  • GET /conversations/:conversationId - Get a conversation
  • POST /conversations/:conversationId/messages - Add a message to a conversation
  • GET /contexts/:contextId/process - Process a context
  • GET /status - Get agent status
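
And a matching TypeScript sketch against the agent simulator. The port comes from the Setup section; the request and response shapes, including the id field, are assumptions for illustration.

    // Port 8080 matches the Setup section; payload shapes are assumed.
    const AGENT = "http://localhost:8080";

    async function run(): Promise<void> {
      // Start a conversation (assumes the response carries an id field)
      const conv = await (
        await fetch(`${AGENT}/conversations`, { method: "POST" })
      ).json();

      // Add a message to it
      await fetch(`${AGENT}/conversations/${conv.id}/messages`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ role: "user", content: "Summarize my context" }),
      });

      // Check the agent's status
      console.log(await (await fetch(`${AGENT}/status`)).json());
    }

    run();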

License

MIT
