MCP Explorer

Kno2gether LiveKit n8n MCP

@avijeett007 · 10 months ago
MIT License
Free · Community
AI Systems
Contains LiveKit & n8n MCP Server Integration Examples

Overview

What is Kno2gether-Livekit-N8N-MCP?

Kno2gether-Livekit-N8N-MCP is a project that integrates LiveKit and n8n to create a voice AI assistant capable of interacting with external services through the Model Context Protocol (MCP).

Use cases

Use cases include creating a voice-enabled customer support agent, developing interactive educational applications, and building personal voice assistants that can control smart home devices.

How to use

To use Kno2gether-Livekit-N8N-MCP, clone the repository, install the required packages, and configure your API keys in a .env file. Follow the tutorial for step-by-step guidance on building the voice AI assistant.

Key features

Key features include voice-based interaction, integration with MCP tools, speech-to-text transcription using Deepgram, natural language processing with OpenAI’s GPT-4o, text-to-speech capabilities, voice activity detection, and real-time communication powered by LiveKit.

Where to use

Kno2gether-Livekit-N8N-MCP can be used in various fields such as customer service, virtual assistants, educational tools, and any application requiring voice interaction with external services.

Content

Kno2gether-Livekit-N8N-MCP

[Kno2gether LiveKit MCP Integration](https://youtu.be/ClVweoou9dA?si=7RSRCUxNN0Tff1FK)

This project is a fork of basic-mcp that showcases a voice assistant built with the LiveKit Agents framework and n8n (Nodemation), with Model Context Protocol (MCP) tools integration for external services.

🚀 Introducing Kno2gether LiveKit Integration

This project demonstrates how to build a voice AI assistant that can interact with external services through MCP tools. The assistant can understand natural language commands and perform actions using connected services.

🌟 Watch Our Tutorial

Learn how to build this project step-by-step in our detailed tutorial:

LiveKit Voice AI with n8n MCP Integration Tutorial

Follow the link above to watch the tutorial on YouTube.

Features

  • Voice-based interaction with a helpful AI assistant
  • Integration with MCP tools from external servers (like n8n)
  • Speech-to-text using Deepgram for accurate transcription
  • Natural language processing using OpenAI’s GPT-4o
  • Text-to-speech using OpenAI for natural-sounding responses
  • Voice activity detection using Silero
  • Real-time communication powered by LiveKit

Prerequisites

  • Python 3.9+
  • API keys for:
    • OpenAI (for LLM and TTS)
    • Deepgram (for STT)
  • MCP server endpoint (n8n with MCP node)
  • LiveKit server (Cloud or self-hosted)

Installation

  1. Clone this repository:

    git clone https://github.com/avijeett007/Kno2gether-Livekit-N8N-MCP.git
    cd Kno2gether-Livekit-N8N-MCP
    
  2. Install the required packages:

    pip install -r requirements.txt
    
  3. Create a .env file with your API keys and configuration:

    OPENAI_API_KEY=your_openai_api_key
    DEEPGRAM_API_KEY=your_deepgram_api_key
    ZAPIER_MCP_URL=your_mcp_server_url
    
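At startup the agent reads these keys from the `.env` file into its environment. Projects like this typically use a helper such as python-dotenv for this; as a minimal illustration of what that loading step does, here is a stdlib-only sketch (the function name `load_env` is hypothetical, not the project's actual API):

```python
import os

def load_env(path: str = ".env") -> None:
    """Parse simple KEY=value lines from a .env file into os.environ.

    Blank lines and '#' comments are skipped; existing environment
    variables are not overwritten (mirroring python-dotenv's default).
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

After calling `load_env()`, the agent can read `os.environ["OPENAI_API_KEY"]` and friends without hard-coding secrets in source.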

Usage

Run the agent with the LiveKit CLI:

python agent.py console

The agent will connect to the specified LiveKit room and start listening for voice commands. It will use the MCP tools from the connected server to perform actions based on user requests.
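The `console` argument above selects one of the run modes offered by the LiveKit Agents CLI (console for local testing, dev/start for connecting to a room). As a rough sketch of that dispatch pattern only, with hypothetical function names and without the actual LiveKit dependency:

```python
import argparse

def run_console() -> str:
    """Hypothetical stand-in for starting the agent in local console mode."""
    return "console mode started"

def run_dev() -> str:
    """Hypothetical stand-in for connecting the agent to a LiveKit room."""
    return "dev mode started"

def main(argv=None) -> str:
    # Dispatch on a positional subcommand, as `python agent.py console` does.
    parser = argparse.ArgumentParser(prog="agent.py")
    parser.add_argument("mode", choices=["console", "dev"],
                        help="how to run the agent")
    args = parser.parse_args(argv)
    return {"console": run_console, "dev": run_dev}[args.mode]()
```

In the real project this dispatch is handled by the LiveKit Agents CLI rather than hand-rolled argparse; the sketch only shows the shape of the entrypoint.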

Project Structure

  • agent.py: Main agent implementation and entrypoint
  • mcp_client/: Package for MCP server integration
    • server.py: MCP server connection handlers
    • agent_tools.py: Integration of MCP tools with LiveKit agents
    • util.py: Utility functions for MCP client
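Conceptually, `agent_tools.py` turns tool descriptors fetched from the MCP server into callables the agent can dispatch to. A minimal sketch of that wrapping step, assuming a generic `invoke(name, args)` transport supplied by the MCP client (all names here are illustrative, not the project's real API):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class MCPTool:
    """A tool advertised by an MCP server, wrapped as a local callable."""
    name: str
    description: str
    call: Callable[[dict], Any]

def register_tools(raw_tools: list, invoke: Callable[[str, dict], Any]) -> dict:
    """Wrap raw tool descriptors (dicts with 'name'/'description') so the
    agent can call them by name; each call is routed through `invoke`."""
    registry = {}
    for t in raw_tools:
        name = t["name"]
        registry[name] = MCPTool(
            name=name,
            description=t.get("description", ""),
            # Bind `name` at definition time to avoid lambda late-binding.
            call=lambda args, _n=name: invoke(_n, args),
        )
    return registry
```

The agent can then expose `registry.values()` to the LLM as function-calling tools.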

How It Works

  1. Voice Processing: LiveKit and Deepgram convert your voice to text with high accuracy
  2. AI Understanding: OpenAI’s GPT-4o processes your requests and determines the appropriate actions
  3. Tool Integration: MCP provides seamless access to external services through n8n
  4. Natural Response: OpenAI TTS converts the AI’s response to natural-sounding speech
  5. Real-time Communication: LiveKit handles the WebRTC connection for low-latency audio
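The five stages above form a simple pipeline: audio in, text, a tool-assisted response, audio out. The sketch below stubs each stage with a trivial function to show only the data flow; the real project wires Deepgram, GPT-4o, OpenAI TTS, and Silero through LiveKit Agents, not these stand-ins:

```python
def transcribe(audio: bytes) -> str:
    """Stage 1 stand-in for Deepgram STT: returns a fixed transcript."""
    return "what's the weather like today?"

def think(transcript: str, tools: dict) -> str:
    """Stages 2-3 stand-in: a real LLM chooses a tool via function calling;
    here we match a tool name against the transcript naively."""
    for name, tool in tools.items():
        if name in transcript:
            return tool(transcript)
    return "Sorry, I can't help with that."

def synthesize(text: str) -> bytes:
    """Stage 4 stand-in for OpenAI TTS: 'audio' is just encoded text."""
    return text.encode("utf-8")

def handle_turn(audio: bytes, tools: dict) -> bytes:
    """One conversational turn: STT -> LLM + tools -> TTS."""
    return synthesize(think(transcribe(audio), tools))

# A hypothetical MCP-backed tool; in practice this would call n8n via MCP.
tools = {"weather": lambda q: "It is sunny."}
reply = handle_turn(b"<opus frames>", tools)
```

Stage 5 (real-time transport) has no analogue here: LiveKit streams the audio frames over WebRTC in both directions.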

Example Voice Commands

Once connected, you can have natural conversations with the assistant. The specific commands will depend on the MCP tools you’ve configured, but could include:

  • “What’s the weather like today?”
  • “Send a message to my team about the meeting”
  • “Create a new task in my project management tool”
  • “Look up information about [topic]”

Subscribe to Kno2gether

For more AI tutorials and projects like this one, subscribe to our YouTube channel.

Acknowledgements

  • LiveKit for the underlying real-time communication infrastructure
  • OpenAI for GPT-4o and text-to-speech
  • Deepgram for speech-to-text
  • Silero for Voice Activity Detection
  • n8n for workflow automation and MCP integration
  • basic-mcp for the original codebase this project is forked from

License

This project is licensed under the MIT License - see the LICENSE file for details.
