Boom2
What is Boom2
Boom2 is an autonomous coding agent that operates within a Docker container, providing AI-powered coding assistance through Model Context Protocol (MCP) servers. It supports various large language model (LLM) providers, including OpenAI, Ollama for local models, and Anthropic.
Use cases
Use cases for Boom2 include automating repetitive coding tasks, generating code snippets based on user queries, providing debugging assistance, and facilitating interactive coding sessions in various programming languages.
How to use
To use Boom2, first ensure Docker is installed. Clone the repository, build the Docker image, and run it in your project directory. On the first run, Boom2 will assist in setting up your preferred LLM configuration. You can also enable verbose mode for detailed logs.
Key features
Key features of Boom2 include Docker-based deployment, filesystem access for reading/writing project files, memory for persistent conversation context, shell execution for running commands, support for multiple LLMs, an interactive CLI for coding assistance, and persistent configuration storage.
Where to use
Boom2 can be used in software development environments where AI coding assistance is beneficial, such as in web development, application development, and any project requiring code generation or debugging support.
Boom2 - Autonomous Coding Agent
Boom2 is an autonomous coding agent that runs in a Docker container and provides access to AI-powered coding assistance through Model Context Protocol (MCP) servers. It supports multiple LLM providers including OpenAI, Ollama (for local models), and Anthropic.
Features
- Docker-based: Run a single command to start the agent in a containerized environment
- MCP Servers:
  - Filesystem access (read/write files in your project)
  - Memory (persistent conversation context)
  - Shell execution (run commands in the container)
- Multiple LLM Support:
  - OpenAI (GPT-4, GPT-3.5)
  - Ollama (local models like Llama2)
  - Anthropic (Claude models)
- Interactive CLI: Simple chat-based interface for coding assistance
- Persistent Configuration: Settings stored in .boom2.json and memory in .boom2/memory-graph.json
Prerequisites
- Docker installed on your system
- For local models: Ollama running on your host machine
Quick Start
Build the Docker Image
# Clone the repository
git clone https://github.com/your-username/boom2.git
cd boom2
# Build the Docker image
docker build -t boom2 .
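If you want to confirm the build succeeded before moving on, you can list the image:

# The boom2 image should now appear in your local image list
docker image ls boom2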
Run Boom2 in Your Project
Navigate to your project directory and run:
docker run -it --rm \
-v $(pwd):/home/node/project \
-w /home/node/project \
boom2
On first run, Boom2 will guide you through setting up your preferred LLM configuration.
Enabling Verbose Mode
If you want to see detailed logs of tool execution and LLM interactions:
docker run -it --rm \
-v $(pwd):/home/node/project \
-w /home/node/project \
boom2 start --verbose
When running in verbose mode, logs will also be saved to .boom2/logs/<datetime>.log in your project directory.
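Because the logs are plain files under .boom2/logs/, you can follow the current session from a second terminal. A small convenience sketch, assuming at least one log file already exists:

# Follow the most recently created Boom2 log file (run from the project directory)
tail -f "$(ls -t .boom2/logs/*.log | head -n 1)"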
Configuration
Boom2 looks for a .boom2.json file in your project directory. If none exists, it will prompt you to create one on first run.
Example configuration:
{
"llm": {
"provider": "openai",
"apiKey": "sk-your-api-key",
"model": "gpt-4"
},
"mcpServers": {
"memory": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-memory"
],
"env": {
"DATA_PATH": ".boom2/memory-graph.json"
}
},
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/home/node/project"
]
},
"shell": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-shell-exec"
],
"env": {
"ALLOWED_COMMANDS": "npm,node,python,pip"
}
}
},
"verbose": false
}
MCP Server Communication
Boom2 now communicates with MCP servers over stdio, the native transport for most MCP server implementations. This provides better compatibility with the MCP SDK and more reliable operation across environments.
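If you are curious what this transport looks like on the wire, you can pipe a single JSON-RPC initialize request into one of the reference servers by hand. This is only an illustration of the stdio protocol, not a step Boom2 requires, and the exact reply depends on the server version:

# Send an MCP initialize request to the memory server over stdio and print its reply
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"probe","version":"0.0.1"}}}' \
  | npx -y @modelcontextprotocol/server-memory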
Memory Persistence
The Memory MCP server’s data is automatically persisted to .boom2/memory-graph.json in your project directory. This ensures that conversations and context are maintained between container restarts.
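Because the graph is an ordinary file in your project, you can snapshot or reset the agent's memory with plain file operations. This is a convenience, not an official Boom2 command:

# Snapshot the memory graph before a risky experiment
cp .boom2/memory-graph.json .boom2/memory-graph.backup.json
# Or reset the agent's memory entirely; the file is recreated on the next run
rm .boom2/memory-graph.json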
Using Boom2
Once running, you’ll see a prompt where you can ask coding-related questions or give commands:
boom2> Help me understand the structure of this project
Boom2 will use the configured LLM to understand your request and leverage the MCP servers to:
- Read and analyze your codebase
- Modify or create files when needed
- Run shell commands for tasks like installing dependencies
- Remember context from your conversation
Using with Ollama
To use Boom2 with local Ollama models:
- Install and run Ollama on your host machine
- Make sure your Ollama API is accessible from Docker
- When configuring Boom2, select Ollama as the provider
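The first two items are host-side setup. Assuming a standard Ollama installation and llama3.1 as an example model, that amounts to:

# Start the Ollama server on the host (skip if it already runs as a service)
ollama serve &
# Pull a model for Boom2 to use
ollama pull llama3.1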
Connecting Docker to Host’s Ollama Instance
When running in Docker, localhost refers to the container itself, not your host machine. Boom2 uses host.docker.internal by default, which works on Docker for Mac and Windows without any configuration.
docker run -it --rm \
-v $(pwd):/home/node/project \
-w /home/node/project \
boom2
The default Ollama API URL will be http://host.docker.internal:11434, which should connect to your host machine’s Ollama instance automatically on Mac and Windows.
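To verify that a container can actually reach Ollama before starting Boom2, you can probe the API from a throwaway container. The curlimages/curl image is used here purely for convenience:

# On Mac/Windows: should print Ollama's installed models as JSON
docker run --rm curlimages/curl -s http://host.docker.internal:11434/api/tags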
For Linux Users
On Linux, host.docker.internal might not work by default. You have three options:

1. Use host network mode:

docker run -it --rm \
  -v $(pwd):/home/node/project \
  -w /home/node/project \
  --network host \
  boom2

When prompted, change the Ollama API URL to http://localhost:11434.

2. Add host.docker.internal manually:

docker run -it --rm \
  -v $(pwd):/home/node/project \
  -w /home/node/project \
  --add-host=host.docker.internal:host-gateway \
  boom2

This adds the host.docker.internal DNS name to the container, making it work like on Mac/Windows.

3. Use your host's IP address:

# Find your host IP address
ip addr show | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1'
# Then use that IP when prompted, e.g.:
# http://192.168.1.100:11434
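You can run the same probe as in the previous section with the --add-host mapping from option 2 to confirm the gateway route works on Linux:

# Linux: verify host.docker.internal resolves via the host-gateway mapping
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl -s http://host.docker.internal:11434/api/tags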
Ollama Configuration Options
You can customize how Boom2 interacts with Ollama in your .boom2.json:
OpenAI Compatibility Mode
Ollama now supports OpenAI’s function calling API with compatible models. Enable this with:
{
"llm": {
"provider": "ollama",
"model": "llama3.1",
"baseUrl": "http://host.docker.internal:11434/v1",
"useOpenAICompatibility": true
}
}
This mode:
- Works best with Llama 3.1 and other models that support OpenAI's tool/function calling protocol
- Requires /v1 in the baseUrl to access Ollama's OpenAI-compatible endpoint
- Provides more reliable tool usage than the standard prompt-based approach
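To check that your Ollama build exposes the OpenAI-compatible endpoint, you can list models through it from the host:

# Should return a model list in OpenAI's response format
curl -s http://localhost:11434/v1/models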
Development
Project Structure
boom2/
├─ Dockerfile
├─ package.json
├─ tsconfig.json
├─ src/
│  ├─ cli/
│  │  ├─ cli.ts               # Entry point for the interactive CLI
│  │  └─ config.ts            # Configuration management
│  ├─ mcp/
│  │  ├─ servers.ts           # MCP server management
│  │  ├─ mcpRegistry.ts       # Registry of available MCP servers
│  │  ├─ mcpClient.ts         # Client for interacting with MCP servers
│  │  └─ shellExec.ts         # Custom shell execution MCP server
│  ├─ llm/
│  │  ├─ llmAdapter.ts        # Common interface for LLM providers
│  │  ├─ openAiAdapter.ts
│  │  ├─ ollamaAdapter.ts
│  │  └─ anthropicAdapter.ts
│  └─ agent/
│     └─ agentController.ts   # Orchestrates conversation & decides tool usage
├─ bin/
│  └─ boom2.js                # CLI entry point
Building from Source
# Install dependencies
npm install
# Build the TypeScript code
npm run build
Note: The compiled TypeScript code is output directly to the dist directory (not dist/src), as configured in tsconfig.json. References to compiled files should use paths like dist/cli/cli.js rather than dist/src/cli/cli.js.
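For example, running the compiled CLI directly during development looks like this (bin/boom2.js remains the packaged entry point):

# Run the compiled CLI from dist/ (not dist/src/)
node dist/cli/cli.js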
Contributing
Contributions are welcome! Please feel free to submit a Pull Request. Check our ROADMAP.md file for planned features and development priorities.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Troubleshooting
If you encounter issues, try these steps:
- Check that the .boom2.json file is correctly configured.
- Verify that Docker has sufficient permissions to access your project directory.
- For Ollama connection issues, make sure Ollama is running on your host and accessible from Docker.
Common issues and solutions:
“No adapter registered for provider: ollama”
- This usually means that the LLM adapters weren’t properly loaded.
- Verify that you’re using the latest version of boom2.
MCP server startup issues
- MCP servers run as child processes within the container using stdio transport
- If you’re seeing startup issues, try the following:
- Delete any existing .boom2.json file and let the container create a new one
- Check the verbose logs to see detailed communication between boom2 and the MCP servers
- Try running with --rm to ensure you're starting with a clean container each time
- For advanced troubleshooting, run with the --verbose flag to see detailed logs
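The first suggestion can be extended to a full reset, assuming you are willing to lose saved memory and logs:

# Remove Boom2's config and state; both are recreated on the next run
rm -f .boom2.json
rm -rf .boom2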
Incorrect paths in Docker
- Remember that the container maps your current directory to /home/node/project.
- All paths inside the container should be relative to this location.