
diffcalculia_mcp

@createthis · a year ago
MIT · Free · Community · AI Systems
A TypeScript MCP server for AIs to edit local files using unified diffs.

Overview

What is diffcalculia_mcp

diffcalculia_mcp is a Model Context Protocol (MCP) server that lets AIs extend their capabilities by editing local files using the unified diff format. It is implemented in TypeScript and uses the diffcalculia-ts and jsdiff libraries to process diffs.

Use cases

Use cases for diffcalculia_mcp include automating code refactoring, generating patches for version control, and enabling AI-driven code reviews where the AI suggests changes and applies them directly to the codebase.

How to use

To use diffcalculia_mcp, first install the dependencies with `npm install`. Then build the Docker image with `npm run build-docker`. Finally, run the server in a Docker container, setting the WORKSPACE_BASE environment variable to the directory you want the AI to modify.

Key features

Key features of diffcalculia_mcp include editing local files securely within a Docker container, support for the unified diff format, and automatic diff repair via diffcalculia-ts before patches are applied with jsdiff. It also isolates AI modifications to a single specified directory.

Where to use

diffcalculia_mcp can be used in software development environments where AIs are employed to assist in code modifications, version control systems, and automated testing frameworks that require file editing capabilities.

Content

diffcalculia MCP server

A Model Context Protocol (MCP) server is a server AIs can use to extend their capabilities.
You can read about them here: https://modelcontextprotocol.io/

This particular server is written in TypeScript and provides a tool the AI can use to edit
local files by passing a path and a diff in unified diff format. It uses https://github.com/createthis/diffcalculia-ts
internally to fix diffs before applying them using https://github.com/kpdecker/jsdiff.
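The server's actual pipeline (diff repair via diffcalculia-ts, application via jsdiff) is more robust than this, but a minimal sketch of what applying a unified diff to a file's contents involves looks roughly like the following. The function name and the simplifications (no context validation, no error recovery) are my own, not the server's code:

```typescript
// Minimal illustration of applying a unified diff to a string. The real
// server repairs diffs with diffcalculia-ts and applies them with jsdiff;
// this sketch skips context validation entirely.
function applyUnifiedDiff(original: string, diff: string): string {
  const srcLines = original.split("\n");
  const out: string[] = [];
  let srcIdx = 0;
  for (const line of diff.split("\n")) {
    if (line.startsWith("---") || line.startsWith("+++")) {
      continue; // file header lines
    }
    const hunk = /^@@ -(\d+)/.exec(line);
    if (hunk) {
      const start = parseInt(hunk[1], 10) - 1; // hunk start, 0-based
      while (srcIdx < start) out.push(srcLines[srcIdx++]); // copy lines before the hunk
    } else if (line.startsWith("+")) {
      out.push(line.slice(1)); // added line
    } else if (line.startsWith("-")) {
      srcIdx++; // removed line: skip it in the source
    } else if (line.startsWith(" ")) {
      out.push(srcLines[srcIdx++]); // context line: copy unchanged
    }
  }
  while (srcIdx < srcLines.length) out.push(srcLines[srcIdx++]); // copy the rest
  return out.join("\n");
}
```

For example, patching `"a\nb\nc\n"` with a hunk `@@ -1,3 +1,3 @@` that replaces `b` with `B` yields `"a\nB\nc\n"`.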

WARNING: This software is designed to let AIs edit files on your local machine. I strongly
recommend you run it under docker and mount your code directory so you have a degree of
isolation. If you choose to ignore this warning, you do so at your own risk.

This server is currently implemented using the StreamableHttp transport layer and listens on port 3002.

Installation

  1. Install dependencies:
npm install

Build docker image

First, install Docker Desktop. Then:

npm run build-docker

Run the server under docker

Running diffcalculia-mcp under docker gives you the ability to isolate the changes
your AI can make to just one directory on your machine. This is HIGHLY RECOMMENDED!

export WORKSPACE_BASE=/path/to/directory/you/want/AI/to/modify
docker run -it --rm \
  -p 3002:3002 \
  -v $WORKSPACE_BASE:/workspace \
  -e SANDBOX_USER_ID=$(id -u) \
  diffcalculia-mcp
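Docker gives you filesystem-level isolation via the volume mount, but a server like this also needs to guard against path traversal within the mounted workspace (e.g. a patch targeting ../../etc/passwd). The helper below is a hypothetical sketch of such a containment check — resolveInWorkspace is my own name for illustration, not part of the server's API:

```typescript
import * as path from "node:path";

// Hypothetical containment check: resolve a requested file path against
// the workspace root and reject anything that escapes it via ".." segments
// or absolute paths. Not the server's actual implementation.
function resolveInWorkspace(workspaceBase: string, requested: string): string {
  const base = path.resolve(workspaceBase);
  const resolved = path.resolve(base, requested);
  if (resolved !== base && !resolved.startsWith(base + path.sep)) {
    throw new Error(`Path escapes workspace: ${requested}`);
  }
  return resolved;
}
```

With the workspace mounted at /workspace as above, a request for src/index.ts resolves inside the workspace, while ../etc/passwd or an absolute /etc/passwd is rejected.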

Open Hands AI

None of this works with Open Hands versions before 0.37, and 0.38 hasn’t been released yet as of this writing (May 11th, 2025).

To use this with Open Hands AI under Docker:

  1. First, build your docker container and run it (see instructions above). It is important that your
    WORKSPACE_BASE for this MCP server matches the WORKSPACE_BASE you are using with Open Hands AI.

  2. You may need to delete your settings file and start fresh. The new version of Open Hands has an
    MCP settings editor UI, but it doesn’t seem to like old settings files.

    The settings file lives here: ~/.openhands-state/settings.json.

  3. Start your Open Hands AI docker. My full example command looks like this:

    export WORKSPACE_BASE=/path/to/directory/you/want/AI/to/modify
    docker run -it --rm   \
     -p 3001:3000   \
     -e SANDBOX_USER_ID=$(id -u) \
     -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
     -v $WORKSPACE_BASE:/opt/workspace_base \
     -e AGENT_ENABLE_PROMPT_EXTENSIONS=false \
     -e LOG_ALL_EVENTS=true \
     -e LLM_NATIVE_TOOL_CALLING=false \
     -v ~/.openhands-state:/.openhands-state \
     -v /var/run/docker.sock:/var/run/docker.sock   \
     --add-host host.docker.internal:host-gateway  \
     -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:a7cec86-nikolaik   \
     --name openhands-app-a7cec86   \
     docker.all-hands.dev/all-hands-ai/openhands:a7cec86
    

    Eventually we will be able to add:

     -e AGENT_ENABLE_EDITOR=false \
    

    But not until this is merged: https://github.com/All-Hands-AI/OpenHands/issues/8304

  4. Navigate to Open Hands in your browser. Set up your MCP server using the Settings UI. The URL should be:

    http://host.docker.internal:3002/sse
    

    Note the /sse suffix. This is the legacy SSE API. I couldn’t get it working with the more modern
    /mcp streamable API. 3002 is the port of your MCP server that Open Hands will connect to.

    If you’re having trouble with this, here’s my settings.json for reference, pretty-printed with `jq . ~/.openhands-state/settings.json`:

    {
      "language": "en",
      "agent": "CodeActAgent",
      "max_iterations": null,
      "security_analyzer": null,
      "confirmation_mode": false,
      "llm_model": "deepseek/Deepseek-V3-0324",
      "llm_api_key": "larry",
      "llm_base_url": "http://larry:11434/v1",
      "remote_runtime_resource_factor": 1,
      "secrets_store": {
        "provider_tokens": {}
      },
      "enable_default_condenser": true,
      "enable_sound_notifications": true,
      "enable_proactive_conversation_starters": true,
      "user_consents_to_analytics": false,
      "sandbox_base_container_image": null,
      "sandbox_runtime_container_image": null,
      "mcp_config": {
        "sse_servers": [
          {
            "url": "http://host.docker.internal:3002/sse",
            "api_key": null
          }
        ],
        "stdio_servers": []
      }
    }
  5. Start a chat. Back in the terminal, you should see:

    19:55:11 - openhands:INFO: base.py:344 - In workspace mount mode, not initializing a new git repository.
    19:55:11 - openhands:INFO: utils.py:52 - Initializing MCP agent for url='http://host.docker.internal:3002/sse' api_key='******' with SSE connection...
    19:55:11 - openhands:INFO: client.py:90 - Connected to server with tools: ['patch', 'read_file']
    

    If you click the three vertical dots in the lower right next to Conversation then click Show Agent Tools & Metadata
    you should see the patch and read_file tools.

Running the server without docker (Not recommended! Dangerous!)

WARNING: If you do this, your AI can modify anything on your machine that you can
modify! This is super dangerous! I STRONGLY recommend using the docker method
above instead.

npm run dev

Tests

First, make sure the server isn’t running. Then:

npm test

Tools

This server exposes two tools: patch and read_file.
