MCP CLI Open
What is MCP CLI Open?
The Model Context Provider CLI is a command-line interface designed for protocol-level communication with a Model Context Provider server. It enables users to send commands, query data, and interact with various resources provided by the server.
Use cases
This CLI can be used for a variety of applications, including dynamic tool and resource exploration within AI models. It supports multiple providers and models for tasks such as natural language processing, chat interactions, and data querying, making it versatile for both development and research purposes.
How to use
Start the client by running `uv run main.py --server sqlite`. You can specify optional parameters such as `--config-file`, `--provider`, and `--model` to customize your environment. Once running, enter interactive mode to execute commands like `ping`, `list-tools`, or `chat` for real-time interaction.
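As a quick sketch, a first session might look like this (the commands after launch are typed at the client's interactive prompt; the flags shown are the optional ones described above):

```sh
# Launch against the sqlite server config, setting provider and model explicitly
uv run main.py --server sqlite --provider openai --model gpt-4o-mini

# Once the client is running, try at its prompt:
#   ping          check that the server is responsive
#   list-tools    see which tools the server exposes
#   chat          enter interactive chat mode
```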
Key features
Key features of the CLI include protocol-level communication with the Model Context Provider, support for multiple providers (OpenAI and Ollama), dynamic tool/resource exploration, an interactive command interface, and model selection with default options for each provider.
Where to use
This CLI can be utilized in environments where AI model interaction is necessary, such as development setups, research projects, or production environments requiring dynamic model querying and resource management. It’s suitable for developers, data scientists, and researchers engaging with AI models.
Content
Model Context Provider CLI
This repository contains a protocol-level CLI designed to interact with a Model Context Provider server. The client allows users to send commands, query data, and interact with various resources provided by the server.
Features
- Protocol-level communication with the Model Context Provider.
- Dynamic tool and resource exploration.
- Support for multiple providers and models:
  - Providers: OpenAI, Ollama.
  - Default models: `gpt-4o-mini` for OpenAI, `qwen2.5-coder` for Ollama.
Prerequisites
- Python 3.8 or higher.
- Required dependencies (see Installation).
Installation
- Clone the repository:
git clone https://github.com/chrishayuk/mcp-cli
cd mcp-cli
- Install UV:
pip install uv
- Resynchronize dependencies:
uv sync --reinstall
Usage
To start the client and interact with the SQLite server, run the following command:
uv run main.py --server sqlite
Command-line Arguments
- `--server`: Specifies the server configuration to use. Required.
- `--config-file`: (Optional) Path to the JSON configuration file. Defaults to `server_config.json`.
- `--provider`: (Optional) Specifies the provider to use (`openai` or `ollama`). Defaults to `openai`.
- `--model`: (Optional) Specifies the model to use. Defaults depend on the provider: `gpt-4o-mini` for OpenAI, `llama3.2` for Ollama.
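The README does not reproduce the configuration file's schema. As a sketch, the snippet below assumes the common MCP `mcpServers` layout, where each key is a name you can pass to `--server` and `command`/`args` describe how to launch that server; the field names and the `uvx mcp-server-sqlite` launcher are assumptions to check against the `server_config.json` bundled with the repository.

```sh
# Create a minimal server_config.json (assumed structure; verify against
# the sample config shipped with the repository)
cat > server_config.json <<'EOF'
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "test.db"]
    }
  }
}
EOF
```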
Examples
Run the client with the default OpenAI provider and model:
uv run main.py --server sqlite
Run the client with a specific configuration and Ollama provider:
uv run main.py --server sqlite --provider ollama --model llama3.2
Interactive Mode
The client supports interactive mode, allowing you to execute commands dynamically. Type `help` for a list of available commands or `quit` to exit the program.
Supported Commands
- `ping`: Check if the server is responsive.
- `list-tools`: Display available tools.
- `list-resources`: Display available resources.
- `list-prompts`: Display available prompts.
- `chat`: Enter interactive chat mode.
- `clear`: Clear the terminal screen.
- `help`: Show a list of supported commands.
- `quit` / `exit`: Exit the client.
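Putting these together, a short session might look like the following sketch (the `>` prompt is a stand-in; the client's actual prompt and output are not reproduced here):

```sh
uv run main.py --server sqlite
# > ping             # verify the server responds
# > list-resources   # see resources the server exposes
# > list-prompts     # see available prompt templates
# > clear            # clear the terminal screen
# > quit             # exit the client
```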
Chat Mode
To enter chat mode, start the client and then issue the `chat` command at the interactive prompt:
uv run main.py --server sqlite
In chat mode, you can use tools and query the server interactively. The provider and model used are specified during startup and displayed as follows:
Entering chat mode using provider 'ollama' and model 'llama3.2'...
Using OpenAI Provider:
If you wish to use OpenAI models, you should:
- Set the `OPENAI_API_KEY` environment variable before running the client, either in a `.env` file or as an environment variable.
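For example, either of the following works (the key value is a placeholder):

```sh
# Option 1: export the key in your shell before launching the client
export OPENAI_API_KEY="sk-your-key-here"
uv run main.py --server sqlite --provider openai

# Option 2: keep it in a .env file in the project directory
echo 'OPENAI_API_KEY=sk-your-key-here' > .env
```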
Contributing
Contributions are welcome! Please open an issue or submit a pull request with your proposed changes.
License
This project is licensed under the MIT License.