MCP Explorer

Smallest AI MCP

@Akshay-Sisodia, 10 months ago
Production-grade Model Context Protocol (MCP) server for the Smallest AI Waves platform. Exposes all Waves TTS and voice cloning features as MCP tools and resources. Ready for deployment.

Overview

What is smallest-ai-mcp?

smallest-ai-mcp is a production-grade Model Context Protocol (MCP) server designed for the Waves Text-to-Speech (TTS) and voice cloning platform, providing a fast and portable solution for real-world AI voice workflows.

Use cases

Use cases for smallest-ai-mcp include creating personalized voice assistants, generating audio content for videos, developing interactive voice response systems, and enabling voice cloning for entertainment or accessibility purposes.

How to use

To use smallest-ai-mcp, clone the repository from GitHub, install the necessary dependencies, configure your API key, and start the server with Python. Alternatively, it can be deployed with Docker.

Key features

Key features of smallest-ai-mcp include listing and previewing available voices, synthesizing high-quality speech from text, cloning voices, and managing cloned voices, all implemented as MCP tools.

Where to use

smallest-ai-mcp can be used in various fields such as AI voice applications, virtual assistants, content creation, and any scenario requiring high-quality text-to-speech or voice cloning capabilities.

Content


Smallest AI MCP Server

Production-grade Model Context Protocol (MCP) server for the Waves Text-to-Speech and Voice Cloning platform.
Fast, portable, and ready for real-world AI voice workflows.


🚀 Overview

Smallest AI MCP Server provides a seamless bridge between the powerful Waves TTS/Voice Cloning API and any MCP-compatible LLM or agent. It is designed for speed, security, and ease of deployment.


✨ Features

  • 🎤 List and preview voices — Instantly fetch all available voices from Waves.
  • 🗣️ Synthesize speech — Convert text to high-quality WAV audio files.
  • 👤 Clone voices — Create instant/professional voice clones.
  • 🗂️ Manage clones — List and delete your cloned voices.

All features are implemented as MCP tools, with no placeholders or stubs.
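The tool surface above can be sketched as plain Python handlers. The real server registers its handlers with the MCP SDK (for example via a tool decorator); the registry below is only an illustrative stand-in, and the handler names, parameters, and return values are assumptions, not the project's actual implementation:

```python
# Illustrative sketch only: a toy registry standing in for the MCP SDK's tool
# registration. Handler names/signatures are assumptions; real handlers would
# call the Waves API instead of returning stubbed values.
from typing import Callable, Dict

TOOLS: Dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Stand-in for the MCP SDK's tool decorator: records the handler by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def list_voices() -> list:
    """Fetch available voices (stubbed here; real code queries Waves)."""
    return [{"id": "emily", "language": "en"}]

@tool
def synthesize_speech(text: str, voice_id: str = "emily") -> str:
    """Convert text to a WAV file and return its path (stubbed here)."""
    if not text:
        raise ValueError("text must be non-empty")
    return f"/tmp/{voice_id}.wav"
```

An MCP client invokes a tool by name with structured arguments, which is what the registry lookup mirrors.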


⚡ Quickstart

# 1. Clone the repo
$ git clone https://github.com/Akshay-Sisodia/smallest-ai-mcp.git
$ cd smallest-ai-mcp

# 2. Install dependencies
$ pip install -r requirements.txt

# 3. Configure your API key
$ cp .env.example .env
# Edit .env and add your real WAVES_API_KEY

# 4. Start the server
$ python server.py
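Step 3 depends on the server reading WAVES_API_KEY from .env at startup. A minimal stdlib-only loader looks roughly like the sketch below; the actual project may well use a library such as python-dotenv instead, so treat this as illustrative:

```python
import os

def load_dotenv(path: str = ".env") -> None:
    """Parse simple KEY=VALUE lines into os.environ, skipping comments/blanks.

    Existing environment variables win over values from the file.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```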

🐳 Docker Usage

# Build the Docker image
$ docker build -t smallest-ai-mcp .

# Run the container
$ docker run -p 8000:8000 \
    -e WAVES_API_KEY=your_waves_api_key \
    smallest-ai-mcp



🏗️ Production & Deployment

  • Environment: Copy .env.example to .env and add your API key. Never commit secrets to git.
  • Dependencies: Install with pip install -r requirements.txt (Python 3.11+).
  • Docker: Use the provided Dockerfile for containerization.
  • Security: API keys are required at startup and never exposed.
  • License: MIT (see LICENSE).
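The "required at startup" guarantee can be as simple as failing fast before the server binds. A sketch (the environment variable name comes from .env.example; the function name and error message are assumptions):

```python
import os

def require_api_key() -> str:
    """Fail fast if WAVES_API_KEY is missing, so the server never starts unconfigured."""
    key = os.environ.get("WAVES_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "WAVES_API_KEY is not set; copy .env.example to .env and add your key"
        )
    return key
```

Raising before any request handling means a misconfigured deployment is caught immediately rather than surfacing as failed API calls later.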

🤝 Contributing

Pull requests and issues are welcome! Please open an issue to discuss major changes.


👤 Maintainer

@Akshay-Sisodia

📄 License

MIT

Groq MCP Client

A Streamlit application that connects to an MCP (Model Context Protocol) server and uses Groq’s LLM API for chat conversations with tool execution capabilities.

Features

  • Connect to any MCP server using the official MCP SDK via SSE (Server-Sent Events)
  • Asynchronous communication with the MCP server
  • Chat interface with streaming responses from Groq
  • Tool execution through the MCP server
  • Clean and user-friendly UI

Requirements

  • Python 3.8+
  • Groq API key
  • An MCP server that supports SSE (running on HTTP)
  • MCP SDK (automatically installed with requirements.txt)

Installation

  1. Clone this repository
  2. Install the dependencies:
pip install -r requirements.txt

Usage

  1. Run the application:
streamlit run groq_mcp_client.py
  2. In the Streamlit UI:
    • Enter your Groq API key in the sidebar
    • Enter the URL of your MCP server (default: http://localhost:8000)
    • Click “Connect to MCP Server”
    • Start chatting!

How it works

  1. The application starts and connects to the MCP server using the official MCP SDK via SSE
  2. The MCP server provides a list of available tools
  3. When you send a message:
    • The message is sent to Groq’s API
    • If Groq decides to use a tool, the tool call is executed through the MCP server
    • The tool results are sent back to Groq
    • Groq provides a final response
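The message flow above can be sketched as a small control loop. Here `call_groq` and `call_mcp_tool` are placeholders injected for illustration; the real client uses the Groq SDK and an MCP `ClientSession`, and the dict shapes shown are assumptions:

```python
# Sketch of one chat turn: real code would call Groq's chat API and the MCP
# session; here both are injected so only the control flow is shown.
def chat_turn(message, call_groq, call_mcp_tool):
    """Ask Groq, execute any requested tool via MCP, return the final reply."""
    reply = call_groq(message)
    if reply.get("tool_call"):                          # Groq asked for a tool
        tc = reply["tool_call"]
        result = call_mcp_tool(tc["name"], tc["args"])  # executed on the MCP server
        reply = call_groq({"tool_result": result})      # feed the result back
    return reply["content"]
```

A production loop would also handle multiple tool calls per turn and stream partial tokens to the UI.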

Implementation Details

  • Uses the official MCP SDK for communication with MCP servers
  • Connects via SSE (Server-Sent Events) for HTTP-based servers
  • Implements async/await pattern for efficient server communication
  • Maintains compatibility with the Streamlit UI framework

Customization

You can modify the following aspects of the application:

  • Change the Groq model by modifying the model parameter in the GroqClient.generate_stream method
  • Customize the UI by modifying the Streamlit components
  • Add additional functionality to the MCP client

License

MIT
