locust-mcp-server
What is locust-mcp-server
locust-mcp-server is an implementation of a Model Context Protocol (MCP) server designed for running Locust load tests. It facilitates the integration of Locust’s load testing capabilities with AI-driven development environments.
Use cases
Use cases for locust-mcp-server include conducting performance tests for web applications, simulating user interactions for stress testing, and integrating load testing into CI/CD pipelines to ensure software quality.
How to use
To use locust-mcp-server, clone the repository, install the required dependencies using the uv package manager, and set up optional environment variables. Create a Locust test script to define your load testing scenarios and configure the MCP server accordingly.
Key features
Key features include simple integration with the Model Context Protocol framework, support for both headless and UI modes, configurable test parameters (users, spawn rate, runtime), an easy-to-use API for running tests, real-time test execution output, built-in HTTP/HTTPS protocol support, and custom task scenario capabilities.
Where to use
locust-mcp-server can be used in various fields such as software development, performance testing, and DevOps environments where load testing is essential for ensuring application performance and reliability.
🚀 ⚡️ locust-mcp-server
A Model Context Protocol (MCP) server implementation for running Locust load tests. This server enables seamless integration of Locust load testing capabilities with AI-powered development environments.
✨ Features
- Simple integration with Model Context Protocol framework
- Support for headless and UI modes
- Configurable test parameters (users, spawn rate, runtime)
- Easy-to-use API for running Locust load tests
- Real-time test execution output
- HTTP/HTTPS protocol support out of the box
- Custom task scenarios support

🔧 Prerequisites
Before you begin, ensure you have the following installed:
- Python 3.13 or higher
- uv package manager (Installation guide)
📦 Installation
- Clone the repository:
git clone https://github.com/qainsights/locust-mcp-server.git
- Install the required dependencies:
uv pip install -r requirements.txt
- Set up environment variables (optional):
Create a .env file in the project root:
LOCUST_HOST=http://localhost:8089 # Default host for your tests
LOCUST_USERS=3 # Default number of users
LOCUST_SPAWN_RATE=1 # Default user spawn rate
LOCUST_RUN_TIME=10s # Default test duration
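For illustration, these optional variables might be consumed as defaults roughly like this. This is a minimal sketch using only the standard library; the function name `load_locust_defaults` is hypothetical and the actual server code may differ:

```python
import os

def load_locust_defaults():
    # Hypothetical helper: read optional LOCUST_* environment variables,
    # falling back to the documented defaults when a variable is unset.
    return {
        "host": os.getenv("LOCUST_HOST", "http://localhost:8089"),
        "users": int(os.getenv("LOCUST_USERS", "3")),
        "spawn_rate": int(os.getenv("LOCUST_SPAWN_RATE", "1")),
        "run_time": os.getenv("LOCUST_RUN_TIME", "10s"),
    }
```

Values set in the .env file (or the shell environment) override the defaults; anything left unset falls back to the values shown above.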
🚀 Getting Started
- Create a Locust test script (e.g., hello.py):
import time

from locust import HttpUser, task, between

class QuickstartUser(HttpUser):
    wait_time = between(1, 5)

    @task
    def hello_world(self):
        self.client.get("/hello")
        self.client.get("/world")

    @task(3)
    def view_items(self):
        for item_id in range(10):
            self.client.get(f"/item?id={item_id}", name="/item")
            time.sleep(1)

    def on_start(self):
        self.client.post("/login", json={"username": "foo", "password": "bar"})
- Configure the MCP server with the following specification in your favorite MCP client (Claude Desktop, Cursor, Windsurf, and more):
{
  "mcpServers": {
    "locust": {
      "command": "/Users/naveenkumar/.local/bin/uv",
      "args": [
        "--directory",
        "/Users/naveenkumar/Gits/locust-mcp-server",
        "run",
        "locust_server.py"
      ]
    }
  }
}
- Now ask the LLM to run the test, e.g. run locust test for hello.py. The Locust MCP server will use the following tool to start the test:
run_locust: Run a test with configurable options for headless mode, host, runtime, users, and spawn rate
📝 API Reference
Run Locust Test
run_locust(
    test_file: str,
    headless: bool = True,
    host: str = "http://localhost:8089",
    runtime: str = "10s",
    users: int = 3,
    spawn_rate: int = 1
)
Parameters:
- test_file: Path to your Locust test script
- headless: Run in headless mode (True) or with UI (False)
- host: Target host to load test
- runtime: Test duration (e.g., "30s", "1m", "5m")
- users: Number of concurrent users to simulate
- spawn_rate: Rate at which users are spawned
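These parameters map naturally onto Locust's standard command-line flags (-f for the test file, --host, -t for run time, -u for user count, -r for spawn rate, --headless). The following is a hypothetical sketch of how a tool like run_locust might assemble the command; the actual server implementation may differ:

```python
def build_locust_command(test_file, headless=True,
                         host="http://localhost:8089",
                         runtime="10s", users=3, spawn_rate=1):
    # Hypothetical helper: translate run_locust-style parameters into a
    # locust CLI invocation using Locust's documented flags.
    cmd = ["locust", "-f", test_file,
           "--host", host,
           "-t", runtime,
           "-u", str(users),
           "-r", str(spawn_rate)]
    if headless:
        cmd.append("--headless")  # no web UI; run and exit after runtime
    return cmd
```

The resulting list could then be handed to something like subprocess.run to execute the test and stream its output back to the client.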
✨ Use Cases
- LLM powered results analysis
- Effective debugging with the help of LLM
🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.