LLM_tools
What is LLM_tools?
LLM_tools is a collection of A2A and MCP scripts designed for utilizing Large Language Models (LLMs) within the Starship Bridge development system, enabling efficient software development workflows.
Use cases
Use cases include code migration, debugging, deployment, API integration, and any situation where automated assistance in software development is beneficial.
How to use
To use LLM_tools, set up your local environment by configuring the .env file, then run the Orchestration Assistant Client to interact with the MCP Server and execute tasks based on your high-level goals.
Key features
Key features include a Python client for task orchestration, a secure MCP Server providing sandboxed tools, and a library for seamless integration of LLM capabilities into software development processes.
Where to use
LLM_tools can be used in various fields such as software development, project management, and automation, particularly in scenarios requiring complex task management and integration of LLMs.
Starship Bridge - Agentic Development System (LLM_tools Root)
Overview
Welcome to the LLM_tools project, Captain! This repository contains the components for Starship Bridge, an experimental agentic software development system.
The core system resides within the `Anthropic/MCP_Sandbox` directory and consists of:
- Orchestration Assistant Client (`orchestrator/client.py`): A Python client that interacts with you (the Captain) and a Large Language Model (like Google’s Gemini). It plans tasks based on your high-level goals and uses the MCP Server’s tools for execution.
- MCP Server (`starship-bridge-mcp-agent/`): A secure Python backend built with `mcp-sdk`. It provides specific, sandboxed tools (file system, git, build/deploy, etc.) over the MCP protocol. All operations are restricted to designated workspace subdirectories within a configured sandbox path.
- MCP Library (`Anthropic/python-sdk/`): The source code for the `mcp-sdk` library used by both the client and server.
The goal is to enable the Orchestrator to manage complex software development workflows (e.g., code migration, debugging, deployment, API integration) safely and effectively by leveraging the specialized tools provided by the MCP Server.
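For orientation, the sketch below shows the bare minimum of how a client talks to an MCP server over stdio. It assumes the vendored SDK exposes the same interface as the public MCP Python SDK (importable as `mcp`); the real `orchestrator/client.py` layers planning and Gemini interaction on top of this.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the MCP Server as a subprocess and talk to it over stdio.
server_params = StdioServerParameters(
    command="python",
    args=["./starship-bridge-mcp-agent/mcp_server/main.py"],
)

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

asyncio.run(main())
```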
Project Structure Highlights
```text
LLM_tools/
├── Anthropic/
│   ├── MCP_Sandbox/                    # <-- Primary working directory
│   │   ├── .env.example                # Example environment variables
│   │   ├── .env                        # **Your local config (MUST CREATE & CONFIGURE)**
│   │   ├── orchestrator/               # Orchestration Assistant Client code
│   │   │   ├── client.py               # Main client script
│   │   │   ├── agents/                 # Agent definitions (e.g., McKinsey Solver)
│   │   │   └── prompt_generators/      # Prompt generation utilities
│   │   ├── starship-bridge-mcp-agent/  # MCP Server code
│   │   │   ├── mcp_server/             # Server package
│   │   │   │   ├── main.py             # Server entrypoint
│   │   │   │   ├── config.py           # Config loading
│   │   │   │   ├── core/               # Security, workspace management
│   │   │   │   └── tools/              # Tool implementations (FS, Git, etc.)
│   │   │   └── pyproject.toml          # Server-specific metadata (if needed)
│   │   └── ws_.../                     # Dynamically created workspace directories
│   ├── python-sdk/                     # MCP Library source code
│   └── ...
├── requirements.txt                    # Python dependencies for the project
├── README.md                           # This file
└── ...
```
Prerequisites
- Python: 3.10+
- uv: Modern Python package installer and runner (`pip install uv` or follow the official instructions). Used for installing dependencies and running the server/client efficiently.
- Git: Command-line tool required for the `git` tools on the MCP Server.
- Docker: Recommended; required for certain tools like `sam build --use-container`. Ensure Docker Desktop/Engine is running.
- AWS CLI (Optional but Recommended): For manual AWS checks; needed if the `sam` or `aws` tools are used. Configure with your credentials (`aws configure`).
- AWS SAM CLI (Optional): Required for the `sam build`/`sam deploy` tools.
- API Keys:
  - Google Gemini API Key
  - AWS Credentials (accessible via environment or configured CLI profile)
  - Stripe API Keys (if using Stripe tools)
  - (Potentially others for Vercel, etc.)
Quickstart Setup
1. Clone the Repository:

   ```bash
   git clone <your-repo-url> LLM_tools
   cd LLM_tools
   ```

2. Install `uv`:
   If you don’t have `uv` installed:

   ```bash
   pip install uv
   # Or follow the official uv install guide: https://github.com/astral-sh/uv
   ```

3. Navigate to the Sandbox:
   All subsequent commands should typically be run from this directory:

   ```bash
   cd Anthropic/MCP_Sandbox
   ```

4. Configure the Environment (`.env`):
   - Copy the example: `cp .env.example .env`
   - CRITICAL: Edit the `.env` file:
     - Set `DIRECTORY_SANDBOX`: provide a valid, absolute path on your machine where the MCP Server will create workspaces and operate. This directory must exist or be creatable by the server process, e.g. `DIRECTORY_SANDBOX=/Users/YourUser/Workspaces/StarshipBridgeSandbox`. Ensure this location is secure and dedicated.
     - Add your `GEMINI_API_KEY`.
     - Add `AWS_REGION`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` (or ensure your environment/AWS profile is configured correctly for Boto3/SAM).
     - Add `STRIPE_SECRET_KEY` and `STRIPE_PUBLISHABLE_KEY` if needed.
   - DO NOT COMMIT your `.env` file! (A sketch of how the server might load these variables follows this list.)

5. Install Dependencies:
   From the `LLM_tools` root directory (where `requirements.txt` is):

   ```bash
   # Use uv to create a virtual environment and install dependencies
   uv venv                        # Creates the .venv directory
   uv pip sync requirements.txt   # Installs dependencies from requirements.txt into .venv
   source .venv/bin/activate      # Activate the virtual environment (Linux/macOS)
   # For Windows: .venv\Scripts\activate
   ```

   Alternatively, if you prefer running directly without explicitly activating:

   ```bash
   # From the LLM_tools directory:
   uv pip sync requirements.txt
   ```

   (But using a venv is recommended.)
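How these variables get loaded is up to the server’s `mcp_server/config.py`. As a rough, hypothetical sketch (the real loader may use a different library or structure), reading `.env` with `python-dotenv` could look like this:

```python
import os
from pathlib import Path

from dotenv import load_dotenv  # assumes python-dotenv (or similar) is available

# Read key=value pairs from .env into the process environment.
load_dotenv()

# Required settings; fail fast if they are missing.
DIRECTORY_SANDBOX = Path(os.environ["DIRECTORY_SANDBOX"]).expanduser().resolve()
GEMINI_API_KEY = os.environ["GEMINI_API_KEY"]

# Optional settings; Boto3/SAM can also pick these up from your AWS profile.
AWS_REGION = os.getenv("AWS_REGION")
STRIPE_SECRET_KEY = os.getenv("STRIPE_SECRET_KEY")

# The sandbox directory must exist or be creatable by the server process.
DIRECTORY_SANDBOX.mkdir(parents=True, exist_ok=True)
```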
Running the System
You’ll need two separate terminal windows/tabs, both navigated to the `LLM_tools/Anthropic/MCP_Sandbox` directory and with the virtual environment activated (if you created one).
Terminal 1: Start the MCP Server
```bash
# Ensure you are in LLM_tools/Anthropic/MCP_Sandbox
# Run using uv (it will find the code inside starship-bridge-mcp-agent)
uv run python ./starship-bridge-mcp-agent/mcp_server/main.py
```
The server will start, log its initialization (including the Sandbox Directory validation), and wait for a client connection via stdio.
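If you’re curious what serving tools over stdio looks like, here is a hedged illustration of the general pattern using the public MCP Python SDK’s `FastMCP` helper; the server name and tool below are hypothetical, and the real implementations live in `mcp_server/tools/`.

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("starship-bridge")  # illustrative server name

SANDBOX = Path("/path/to/DIRECTORY_SANDBOX")  # the real server loads this from .env

@mcp.tool()
def list_workspaces() -> list[str]:
    """Hypothetical tool: list workspace directories under the sandbox."""
    return sorted(p.name for p in SANDBOX.iterdir() if p.is_dir() and p.name.startswith("ws_"))

if __name__ == "__main__":
    mcp.run()  # serves the registered tools over stdio by default
```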
Terminal 2: Start the Orchestrator Client
```bash
# Ensure you are in LLM_tools/Anthropic/MCP_Sandbox
# Run the client script
python ./orchestrator/client.py
```
The client will attempt to start and connect to the MCP server process it spins up. You should see connection logs, followed by the “Captain Problem:” prompt.
Basic Workflow
- Enter a high-level goal or problem at the “Captain Problem:” prompt in the Orchestrator Client terminal.
- The Orchestrator uses its prompt generators and the McKinsey Solver Agent (via Gemini) to create an initial plan.
- The Orchestrator should request to use the `create_workspace` tool first.
- It will then proceed to call other MCP tools (like `git_clone`, `write_file`, `sam_build`, etc.) based on its plan; see the sketch after this list.
- Tool execution happens via the MCP Server (Terminal 1), operating within the created workspace inside the `DIRECTORY_SANDBOX`.
- Results and status are reported back to the Orchestrator (Terminal 2), which interacts with Gemini to continue the process or report back to you.
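From the orchestrator’s side, each of those steps is simply an MCP tool call on the open session. A hedged sketch follows; the tool names come from this README, but the argument names and values are hypothetical and may not match the server’s actual schemas.

```python
# Inside the async ClientSession shown in the connection sketch earlier.
result = await session.call_tool("create_workspace", {"name": "migration-demo"})
print(result.content)  # each call returns structured content describing the outcome

# Later plan steps reuse that workspace; argument names are illustrative only.
await session.call_tool("git_clone", {
    "workspace": "migration-demo",
    "repo_url": "https://github.com/example/legacy-service.git",
})
await session.call_tool("write_file", {
    "workspace": "migration-demo",
    "path": "notes/plan.md",
    "content": "Step 1: audit the legacy handlers...",
})
```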
Security
- The `DIRECTORY_SANDBOX` is the cornerstone of security. All tool operations are confined within workspaces under this path (a sketch of the kind of check this implies follows this list). Choose this location carefully.
- API Keys and sensitive credentials should only be stored in the `.env` file (which is gitignored) and accessed by the MCP Server configuration loader. Do not pass secrets directly in tool parameters.
- Review the agent’s plans and tool outputs (especially `git diff` results) before approving destructive or external actions like `git push` or `sam deploy`.
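The path confinement mentioned above typically boils down to resolving every tool-supplied path and rejecting anything that lands outside the sandbox root. This is a generic sketch of that technique, not the actual code in `mcp_server/core/`:

```python
from pathlib import Path

def resolve_inside_sandbox(sandbox_root: Path, workspace: str, user_path: str) -> Path:
    """Resolve a tool-supplied path and reject anything that escapes the sandbox."""
    root = sandbox_root.resolve()
    candidate = (root / workspace / user_path).resolve()
    if not candidate.is_relative_to(root):  # Path.is_relative_to needs 3.9+; this project requires 3.10+
        raise PermissionError(f"Refusing to touch a path outside the sandbox: {candidate}")
    return candidate

# Example: a '..' traversal attempt is rejected.
# resolve_inside_sandbox(Path("/srv/sandbox"), "ws_demo", "../../etc/passwd")  # -> PermissionError
```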
License
(Assumed MIT based on example file - Replace if different)
Licensed under the MIT License. See LICENSE file for details.