PolyLLM
What is PolyLLM
PolyLLM is a unified Go interface that allows users to interact with over 100 Large Language Model (LLM) APIs in an OpenAI-compatible format, facilitating easy integration and switching between different LLM providers.
Use cases
Use cases for PolyLLM include developing chat applications, integrating multiple LLM services in a single application, conducting AI research with different models, and creating tools for content generation.
How to use
To use PolyLLM, you can install it as a library, utilize the CLI tool, or run it as an HTTP server. Configuration can be done through a JSON file, and you can make API calls directly or through the CLI.
Key features
Key features include a single interface for multiple LLM providers, provider agnosticism for easy switching, access through Go API, CLI tool, or HTTP server, built-in support for Model Context Protocol (MCP), and configuration via JSON files.
Where to use
PolyLLM can be used in various fields such as software development, AI research, chatbots, content generation, and any application that requires natural language processing capabilities.
PolyLLM
PolyLLM is a unified Go interface for multiple Large Language Model (LLM) providers. It allows you to interact with various LLM APIs through a single, consistent interface, making it easy to switch between different providers or use multiple providers in the same application.
Features
- Single Interface: Interact with multiple LLM providers through a unified API
- Provider Agnostic: Easily switch between providers without changing your code
- Multiple Interfaces: Access LLMs through a Go API, CLI tool, or HTTP server
- MCP Support: Built-in support for the Model Context Protocol
- Configuration File: JSON-based configuration for providers and MCP tools
Supported Providers
PolyLLM currently supports the following LLM providers:
- OpenAI
- DeepSeek
- Qwen (Alibaba Cloud)
- Gemini (Google)
- OpenRouter
- Volcengine
- Groq
- xAI
- SiliconFlow
Additional providers can be easily added.
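For providers that expose an OpenAI-compatible endpoint, this likely amounts to one more entry in the llms array of the configuration file described below. The sketch that follows is illustrative only; the name, type, base_url, env_prefix, and model id are placeholder assumptions, not tested settings:

{
  "name": "my-provider",
  "type": "openai",
  "base_url": "https://api.example.com/v1",
  "env_prefix": "MYPROVIDER_",
  "models": [
    { "id": "my-model" }
  ]
}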
Installation
Library
go get github.com/woefuloutco/polyllm
CLI Tool
go install github.com/woefuloutco/polyllm/cmd/polyllm-cli@latest
# or use docker
docker pull ghcr.io/recally-io/polyllm-cli:latest
HTTP Server
go install github.com/woefuloutco/polyllm/cmd/polyllm-server@latest
# or use docker
docker pull ghcr.io/recally-io/polyllm-server:latest
Configuration
JSON Configuration File
PolyLLM supports configuration via a JSON file. This allows you to define LLM providers and MCP tools.
Example configuration file:
{
  "llms": [
    {
      "name": "gemini",
      "type": "gemini",
      "base_url": "https://generativelanguage.googleapis.com/v1beta/openai",
      "env_prefix": "GEMINI_",
      "api_key": "<GOOGLE_API_KEY>",
      "models": [
        { "id": "gemini-2.0-flash" },
        { "id": "gemini-2.0-flash-lite" },
        { "id": "gemini-1.5-flash" },
        { "id": "gemini-1.5-flash-8b" },
        { "id": "gemini-1.5-pro" },
        { "id": "text-embedding-004" }
      ]
    }
  ],
  "mcps": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    },
    "puppeteer": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "--init", "-e", "DOCKER_CONTAINER=true", "mcp/puppeteer"]
    }
  }
}
MCP Configuration
Model Context Protocol (MCP) tools can be defined in the configuration file under the mcps section. Each tool is specified with a command and arguments.
To use MCP tools with a model, append ?mcp=<tool1>,<tool2> to the model name or use ?mcp=all to enable all configured MCP tools.
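For example, with the fetch and puppeteer tools from the configuration above:

qwen/qwen-max?mcp=fetch            # enable only the fetch tool
qwen/qwen-max?mcp=fetch,puppeteer  # enable fetch and puppeteer
qwen/qwen-max?mcp=all              # enable every tool under mcps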
Usage
API Usage
Basic Example
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/woefuloutco/polyllm"
)

func main() {
    // Create a new PolyLLM instance
    llm := polyllm.New()

    // Generate text using OpenAI's GPT-3.5 Turbo
    // Make sure the OPENAI_API_KEY environment variable is set
    response, err := llm.GenerateText(
        context.Background(),
        "openai/gpt-3.5-turbo",
        "Explain quantum computing in simple terms",
    )
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        os.Exit(1)
    }

    fmt.Println(response)
}
Using Configuration File
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/woefuloutco/polyllm"
)

func main() {
    // Load configuration from file
    cfg, err := polyllm.LoadConfig("config.json")
    if err != nil {
        fmt.Printf("Error loading config: %v\n", err)
        os.Exit(1)
    }

    // Create a new PolyLLM instance with the configuration
    llm := polyllm.NewFromConfig(cfg)

    // Generate text using a configured model
    response, err := llm.GenerateText(
        context.Background(),
        "gemini/gemini-1.5-pro",
        "Explain quantum computing in simple terms",
    )
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        os.Exit(1)
    }

    fmt.Println(response)
}
Chat Completion Example
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/woefuloutco/polyllm"
    "github.com/woefuloutco/polyllm/llms"
)

func main() {
    // Create a new PolyLLM instance
    llm := polyllm.New()

    // Create a chat completion request
    req := llms.ChatCompletionRequest{
        Model: "openai/gpt-4",
        Messages: []llms.Message{
            {
                Role:    llms.RoleSystem,
                Content: "You are a helpful assistant.",
            },
            {
                Role:    llms.RoleUser,
                Content: "What are the key features of Go programming language?",
            },
        },
    }

    // Stream the response; each chunk arrives via the callback
    llm.ChatCompletion(context.Background(), req, func(resp llms.StreamingChatCompletionResponse) {
        if resp.Err != nil {
            fmt.Printf("Error: %v\n", resp.Err)
            os.Exit(1)
        }
        if len(resp.Choices) > 0 && resp.Choices[0].Delta.Content != "" {
            fmt.Print(resp.Choices[0].Delta.Content)
        }
    })
}
Using MCP
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/woefuloutco/polyllm"
    "github.com/woefuloutco/polyllm/llms"
)

func main() {
    // Load configuration from file
    cfg, err := polyllm.LoadConfig("config.json")
    if err != nil {
        fmt.Printf("Error loading config: %v\n", err)
        os.Exit(1)
    }

    // Create a new PolyLLM instance with the configuration
    llm := polyllm.NewFromConfig(cfg)

    // Create a chat completion request with MCP enabled,
    // using all configured MCP tools
    req := llms.ChatCompletionRequest{
        Model: "qwen/qwen-max?mcp=all", // Use all MCP tools
        Messages: []llms.Message{
            {
                Role:    llms.RoleUser,
                Content: "List the top 10 news from Hacker News",
            },
        },
    }

    // Or enable only specific MCP tools; to use this request,
    // pass req2 to ChatCompletion below instead of req
    req2 := llms.ChatCompletionRequest{
        Model: "qwen/qwen-max?mcp=fetch,puppeteer", // Use specific MCP tools
        Messages: []llms.Message{
            {
                Role:    llms.RoleUser,
                Content: "List the top 10 news from Hacker News",
            },
        },
    }
    _ = req2 // shown for illustration; Go rejects declared-but-unused variables

    // Stream the response
    llm.ChatCompletion(context.Background(), req, func(resp llms.StreamingChatCompletionResponse) {
        if resp.Err != nil {
            fmt.Printf("Error: %v\n", resp.Err)
            os.Exit(1)
        }
        if len(resp.Choices) > 0 && resp.Choices[0].Delta.Content != "" {
            fmt.Print(resp.Choices[0].Delta.Content)
        }
    })
}
CLI Usage
Installation
# Install the CLI
go install github.com/woefuloutco/polyllm/cmd/polyllm-cli@latest
Examples
# Set your API key
export OPENAI_API_KEY=your_api_key
# or use docker
alias polyllm-cli="docker run --rm -e OPENAI_API_KEY=your_api_key ghcr.io/recally-io/polyllm-cli:latest"
# show help
polyllm-cli --help
# List available models
polyllm-cli models
# Generate text
polyllm-cli -m "openai/gpt-3.5-turbo" "Tell me a joke about programming"
# Use a different model
polyllm-cli -m "deepseek/deepseek-chat" "What is the meaning of life?"
# Using a config file
polyllm-cli -c "config.json" -m "gemini/gemini-1.5-pro" "What is quantum computing?"
# Using MCP with all tools
polyllm-cli -c "config.json" -m "qwen/qwen-max?mcp=all" "Top 10 news in hackernews"
# Using MCP with specific tools
polyllm-cli -c "config.json" -m "qwen/qwen-max?mcp=fetch,puppeteer" "Top 10 news in hackernews"
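Keys for providers defined in a config file are read from the environment using that provider's env_prefix. Assuming the GEMINI_ prefix from the example config maps to a GEMINI_API_KEY variable (an assumption, not confirmed above):

# Assumes env_prefix "GEMINI_" resolves to GEMINI_API_KEY
export GEMINI_API_KEY=your_api_key
polyllm-cli -c "config.json" -m "gemini/gemini-2.0-flash" "Summarize the Model Context Protocol"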
HTTP Server
Installation
# Install the server
go install github.com/woefuloutco/polyllm/cmd/polyllm-server@latest
Starting the Server
# Start the server on default port 8088
export OPENAI_API_KEY=your_api_key
polyllm-server
# or use docker
docker run --rm -e OPENAI_API_KEY=your_api_key -p 8088:8088 ghcr.io/recally-io/polyllm-server:latest
# Start with a configuration file
polyllm-server -c config.json
# Or specify a custom port
PORT=3000 polyllm-server
# Add API key authentication
API_KEY=your_api_key polyllm-server
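When API_KEY is set, clients presumably must present that key with each request. OpenAI-compatible servers typically expect it as a Bearer token, though that header scheme is an assumption here:

# Assumption: the key is sent OpenAI-style as a Bearer token
curl http://localhost:8088/v1/models -H "Authorization: Bearer your_api_key"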
API Endpoints
The server provides OpenAI-compatible endpoints:
- GET /models or GET /v1/models - List all available models
- POST /chat/completions or POST /v1/chat/completions - Create a chat completion
Example Request
# In a terminal, make a request
curl -X POST http://localhost:8088/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-3.5-turbo",
"messages": [
{"role": "user", "content": "Hello, how are you?"}
]
}'
# Request with MCP enabled
curl -X POST http://localhost:8088/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "qwen/qwen-max?mcp=all",
"messages": [
{"role": "user", "content": "Top 10 news in hackernews"}
]
}'
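Because the endpoints are OpenAI-compatible, responses should follow the standard chat completion shape. The body below is an illustrative sketch with made-up values, not captured output:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "openai/gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! I'm doing well. How can I help you today?" },
      "finish_reason": "stop"
    }
  ]
}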
License
This project is licensed under the terms provided in the LICENSE file.