Blabber-MCP
What is Blabber-MCP?
Blabber-MCP is a server designed to give large language models (LLMs) a voice through OpenAI’s Text-to-Speech (TTS) API. It converts text input into high-quality audio output and offers a range of voice and audio settings.
Use cases
Blabber-MCP can be used for creating audio content from written text, enhancing accessibility for visually impaired users, producing voiceovers for videos, personalizing applications with voice interactions, and facilitating language learning through auditory feedback.
How to use
To use Blabber-MCP, add your OpenAI API key and the server settings to your MCP client’s configuration file. Once configured, you can call the text_to_speech tool, supplying the text to speak and optionally selecting a voice, audio format, and playback behavior.
Key features
Key features include high-quality TTS conversion, a selection of OpenAI voices, model options for standard and high-definition audio, multiple output formats, the capability to save generated audio files, optional playback, and default voice configuration.
Where to use
Blabber-MCP is suitable for any application that requires voice synthesis, such as interactive chatbots, mobile applications, educational tools, digital assistants, and content creation platforms that can leverage audio output from text.
Content
📢 Blabber-MCP 🗣️
An MCP server that gives your LLMs a voice using OpenAI’s Text-to-Speech API! 🔊
✨ Features
- Text-to-Speech: Converts input text into high-quality spoken audio.
- Voice Selection: Choose from various OpenAI voices (`alloy`, `echo`, `fable`, `onyx`, `nova`, `shimmer`).
- Model Selection: Use standard (`tts-1`) or high-definition (`tts-1-hd`) models.
- Format Options: Get audio output in `mp3`, `opus`, `aac`, or `flac`.
- File Saving: Saves the generated audio to a local file.
- Optional Playback: Automatically play the generated audio using a configurable system command.
- Configurable Defaults: Set a default voice via configuration.
🔧 Configuration
To use this server, you need to add its configuration to your MCP client’s settings file (e.g., mcp_settings.json).
- Get OpenAI API Key: You need an API key from OpenAI.
- Add to MCP Settings: Add the following block to the `mcpServers` object in your settings file, replacing `"YOUR_OPENAI_API_KEY"` with your actual key.
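A minimal configuration sketch is shown below. The `mcpServers` key, server name, placeholder API key, path to `build/index.js`, and `DEFAULT_TTS_VOICE` come from this document; the exact field layout (`command`, `args`, `env`) follows common MCP client conventions and may differ slightly for your client:

```json
{
  "mcpServers": {
    "blabber-mcp": {
      "command": "node",
      "args": ["/full/path/to/blabber-mcp/build/index.js"],
      "env": {
        "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY",
        "DEFAULT_TTS_VOICE": "nova"
      }
    }
  }
}
```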
Important: Make sure the `args` path points to the correct location of the `build/index.js` file within your blabber-mcp project directory. Use the full absolute path.
🚀 Usage
Once configured and running, you can use the text_to_speech tool via your MCP client.
Tool: `text_to_speech`
Server: `blabber-mcp` (or the key you used in the config)
Arguments:
- `input` (string, required): The text to synthesize.
- `voice` (string, optional): The voice to use (`alloy`, `echo`, `fable`, `onyx`, `nova`, `shimmer`). Defaults to the `DEFAULT_TTS_VOICE` set in config, or `nova`.
- `model` (string, optional): The model (`tts-1`, `tts-1-hd`). Defaults to `tts-1`.
- `response_format` (string, optional): Audio format (`mp3`, `opus`, `aac`, `flac`). Defaults to `mp3`.
- `play` (boolean, optional): Set to `true` to automatically play the audio after saving. Defaults to `false`.
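The defaulting rules above can be sketched as follows. This is a hypothetical helper for illustration, not the server's actual code; only the argument names and default values are taken from this document:

```python
import os

def resolve_tts_args(args: dict) -> dict:
    """Fill in defaults for a text_to_speech call, mirroring the documented behavior."""
    if "input" not in args:
        raise ValueError("'input' is required")
    return {
        "input": args["input"],
        # Falls back to DEFAULT_TTS_VOICE from the config, then to "nova".
        "voice": args.get("voice", os.environ.get("DEFAULT_TTS_VOICE", "nova")),
        "model": args.get("model", "tts-1"),
        "response_format": args.get("response_format", "mp3"),
        "play": args.get("play", False),
    }

resolved = resolve_tts_args({"input": "Hello from Blabber MCP!", "voice": "shimmer", "play": True})
```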
Example Tool Call (with playback):
```xml
<use_mcp_tool>
  <server_name>blabber-mcp</server_name>
  <tool_name>text_to_speech</tool_name>
  <arguments>
    {
      "input": "Hello from Blabber MCP!",
      "voice": "shimmer",
      "play": true
    }
  </arguments>
</use_mcp_tool>
```
Output:
The tool saves the audio file to the `output/` directory within the blabber-mcp project folder and returns a JSON response like this:
```json
{
  "message": "Audio saved successfully. Playback initiated using command: cvlc",
  "filePath": "path/to/speech_1743908694848.mp3",
  "format": "mp3",
  "voiceUsed": "shimmer"
}
```
📜 License
This project is licensed under the MIT License - see the LICENSE file for details.
🕒 Changelog
See the CHANGELOG.md file for details on version history.
Made with ❤️ by Pink Pixel