
MCP Experts

@tomsiwikon 9 months ago
MIT License
Let an expert review your code changes

Overview

What is mcp-experts

mcp-experts is a Python-based code review system that uses the Model Context Protocol (MCP) to provide code review capabilities through simulated expert personas, such as Martin Fowler and Robert C. Martin (Uncle Bob).

Use cases

Use cases for mcp-experts include conducting code reviews in collaborative software projects, providing feedback on code changes, and enhancing learning experiences for new developers by simulating expert advice.

How to use

To use mcp-experts, install the required dependencies, set up the Ollama server for AI-powered reviews, and run the server in standard or HTTP/SSE mode. You can also integrate it with the Cursor IDE for enhanced functionality.

Key features

Key features include code reviews based on Martin Fowler’s refactoring principles, Robert C. Martin’s Clean Code principles, knowledge graph storage of code and reviews, integration with Ollama for AI-powered reviews, and Server-Sent Events (SSE) support for web integration.

Where to use

mcp-experts can be used in software development environments where code quality and best practices are critical, such as in agile development teams, code review processes, and educational settings for teaching coding standards.

Content

MCP Code Expert System

A Python-based code review system using the Model Context Protocol (MCP). It provides code review capabilities through simulated expert personas like Martin Fowler and Robert C. Martin (Uncle Bob).

Features

  • Code review based on Martin Fowler’s refactoring principles
  • Code review based on Robert C. Martin’s Clean Code principles
  • Knowledge graph storage of code, reviews, and relationships
  • Integration with Ollama for AI-powered reviews
  • Server-Sent Events (SSE) support for web integration

Prerequisites

Python 3.10+

This project requires Python 3.10 or higher.

Ollama

Ollama is required for AI-powered code reviews.

  1. Install Ollama for your platform:

    • macOS: Download from ollama.com
    • Linux: curl -fsSL https://ollama.com/install.sh | sh
    • Windows: supported via WSL2; follow the Linux instructions
  2. Pull a recommended model:

    ollama pull llama3:8b
    
  3. Start the Ollama server:

    ollama serve
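
Before running reviews, it can help to confirm that the Ollama server is reachable. The sketch below queries Ollama's `/api/tags` endpoint (the standard endpoint listing installed models); the function name is illustrative, and the host should match your `OLLAMA_HOST` setting:

```python
# Check whether a local Ollama server responds before starting reviews.
# Uses Ollama's /api/tags endpoint, which lists the installed models.
import json
import urllib.request
import urllib.error


def ollama_available(host: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server responds at `host`."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=2) as resp:
            json.load(resp)  # valid JSON means the server answered properly
            return True
    except (OSError, ValueError):
        # Connection refused, timeout, or malformed response
        return False
```

If this returns `False`, start the server with `ollama serve` and re-check.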
    

Installation

Run the setup script to install dependencies and create the virtual environment:

chmod +x setup.sh
./setup.sh

Configuration

Edit the .env file to configure (create from .env.example if needed):

# Knowledge Graph Settings
KNOWLEDGE_GRAPH_PATH=data/knowledge_graph.json

# Ollama Configuration (local AI models)
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama3:8b
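
A minimal sketch of how these settings might be read once they are exported into the environment. The variable names match the `.env` file above; reading them via `os.environ` with these fallbacks is an assumption, not necessarily what `server.py` does internally:

```python
# Read the configuration values from the environment, falling back to the
# defaults shown in the .env example above (assumed defaults, illustrative).
import os

KNOWLEDGE_GRAPH_PATH = os.environ.get(
    "KNOWLEDGE_GRAPH_PATH", "data/knowledge_graph.json"
)
OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
OLLAMA_MODEL = os.environ.get("OLLAMA_MODEL", "llama3:8b")
```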

Usage

Running the Server

Standard Mode (for Cursor Integration)

source .venv/bin/activate
python server.py

HTTP/SSE Mode (for Web Integration)

source .venv/bin/activate
python server.py --transport sse

This starts the server at http://localhost:8000/sse using the SSE transport.

To use a custom port:

python server.py --transport sse --port 9000
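
The command-line handling behind these invocations can be sketched with `argparse`. This is a hypothetical reconstruction (the real `server.py` may differ); it only mirrors the two flags shown above:

```python
# Illustrative sketch of the --transport/--port flags used above.
# stdio is the default transport (for Cursor); sse enables HTTP/SSE mode.
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="MCP Code Expert System server")
    parser.add_argument(
        "--transport", choices=["stdio", "sse"], default="stdio",
        help="stdio for Cursor integration, sse for web clients",
    )
    parser.add_argument(
        "--port", type=int, default=8000,
        help="HTTP port when using the SSE transport",
    )
    return parser.parse_args(argv)
```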

Installing in Cursor

To install in Cursor IDE:

source .venv/bin/activate
mcp install server.py --name "Code Expert System"

Available Tools

The server exposes these tools:

  • ask_martin: Ask Martin Fowler to review code and suggest refactorings
  • ask_bob: Ask Robert C. Martin (Uncle Bob) to review code based on Clean Code principles
  • read_graph: Read the entire knowledge graph
  • search_nodes: Search for nodes in the knowledge graph
  • open_nodes: Open specific nodes by their names

Example Usage

To review a code snippet with Martin Fowler:

{
  "code": "function calculateTotal(items) {\n  var total = 0;\n  for (var i = 0; i < items.length; i++) {\n    total += items[i].price;\n  }\n  return total;\n}",
  "language": "javascript",
  "description": "Calculate the total price of items"
}
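
The payload above can also be built programmatically. The three field names (`code`, `language`, `description`) come from the example; the helper function itself is illustrative:

```python
# Build the argument payload for a review tool call such as ask_martin.
# The field names mirror the JSON example above; the helper is illustrative.
import json


def build_review_request(code: str, language: str, description: str = "") -> dict:
    return {"code": code, "language": language, "description": description}


payload = build_review_request(
    code="function calculateTotal(items) { /* ... */ }",
    language="javascript",
    description="Calculate the total price of items",
)
print(json.dumps(payload, indent=2))
```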

Project Structure

  • server.py: Main server implementation with MCP integration
  • experts/: Expert modules implementing the code review capabilities
    • __init__.py: Shared models and interfaces
    • martin_fowler/: Martin Fowler expert implementation
    • robert_c_martin/: Robert C. Martin expert implementation
  • knowledge_graph.py: Knowledge graph for storing code and reviews
  • ollama_service.py: Integration with Ollama for AI-powered reviews
  • examples/: Example code for review in different languages
  • requirements.txt: Python dependencies
  • setup.sh: Setup script

Architecture

The system follows a modular architecture:

  1. Server Layer: Handles MCP protocol communication and routes requests
  2. Expert Layer: Encapsulates code review logic for each expert
  3. Service Layer: Provides AI integration and knowledge graph functionality

Each expert implements a standard interface allowing for consistent handling and easy addition of new experts.
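
Such an interface might look like the following sketch, using an abstract base class. The class and method names are illustrative, not taken from the `experts/` package, and the sample refactoring hint is a stand-in for the real review logic:

```python
# Hedged sketch of a shared expert interface: every expert accepts the same
# inputs and returns the same Review shape, so new experts slot in easily.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Review:
    expert: str
    comments: list[str]


class Expert(ABC):
    name: str = "expert"

    @abstractmethod
    def review(self, code: str, language: str, description: str = "") -> Review:
        """Return a review of `code` in this expert's voice."""


class MartinFowlerExpert(Expert):
    name = "Martin Fowler"

    def review(self, code: str, language: str, description: str = "") -> Review:
        comments = []
        if "for (" in code:  # placeholder heuristic standing in for real analysis
            comments.append("Consider Replace Loop with Pipeline.")
        return Review(expert=self.name, comments=comments)
```

Adding a new expert then amounts to subclassing `Expert` and registering it with the server layer.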

License

MIT
