YT-to-LinkedIn-MCP-Server
What is YT-to-LinkedIn-MCP-Server
YT-to-LinkedIn-MCP-Server is a Model Context Protocol (MCP) server designed to automate the process of generating LinkedIn post drafts from YouTube video transcripts. It utilizes advanced language models to create high-quality, editable content.
Use cases
Use cases include content marketers generating LinkedIn posts from webinars, educators summarizing lecture videos for professional networking, and businesses creating promotional content from product launch videos.
How to use
To use YT-to-LinkedIn-MCP-Server, clone the repository, set up a virtual environment, install dependencies, configure your API keys in a .env file, and run the application using Uvicorn. For Docker deployment, build the Docker image and run the container with the necessary environment variables.
Key features
Key features include YouTube transcript extraction, concise transcript summarization using OpenAI GPT, professional LinkedIn post generation with customizable tone and style, a modular API design for easy integration, and containerized deployment options.
Where to use
YT-to-LinkedIn-MCP-Server can be used in marketing, content creation, social media management, and any field that requires transforming video content into engaging written posts for LinkedIn.
Clients Supporting MCP
The following are the main client applications that support the Model Context Protocol. Click a link to visit the official website for more information.
Content
YouTube to LinkedIn MCP Server
A Model Context Protocol (MCP) server that automates generating LinkedIn post drafts from YouTube videos. This server provides high-quality, editable content drafts based on YouTube video transcripts.
Features
- YouTube Transcript Extraction: Extract transcripts from YouTube videos using video URLs
- Transcript Summarization: Generate concise summaries of video content using OpenAI GPT
- LinkedIn Post Generation: Create professional LinkedIn post drafts with customizable tone and style
- Modular API Design: Clean FastAPI implementation with well-defined endpoints
- Containerized Deployment: Ready for deployment on Smithery
Setup Instructions
Prerequisites
- Python 3.8+
- Docker (for containerized deployment)
- OpenAI API Key
- YouTube Data API Key (optional, but recommended for better metadata)
Local Development
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd yt-to-linkedin
   ```

2. Create a virtual environment and install dependencies:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```

3. Create a `.env` file in the project root with your API keys:

   ```
   OPENAI_API_KEY=your_openai_api_key
   YOUTUBE_API_KEY=your_youtube_api_key
   ```

4. Run the application:

   ```bash
   uvicorn app.main:app --reload
   ```

5. Access the API documentation at http://localhost:8000/docs
Docker Deployment
1. Build the Docker image:

   ```bash
   docker build -t yt-to-linkedin-mcp .
   ```

2. Run the container:

   ```bash
   docker run -p 8000:8000 --env-file .env yt-to-linkedin-mcp
   ```
Smithery Deployment
1. Ensure you have the Smithery CLI installed and configured.

2. Deploy to Smithery:

   ```bash
   smithery deploy
   ```
API Endpoints
1. Transcript Extraction
Endpoint: /api/v1/transcript
Method: POST
Description: Extract transcript from a YouTube video
Request Body:
Response:
{
"video_id": "VIDEO_ID",
"video_title": "Video Title",
"transcript": "Full transcript text...",
"language": "en",
"duration_seconds": 600,
"channel_name": "Channel Name",
"error": null
}
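The request body schema is not documented above, so the sketch below assumes a hypothetical `video_url` field; the response handling follows the documented example exactly:

```python
import json

# The request field name "video_url" is an assumption; the response
# shape follows the documented /api/v1/transcript example above.
def build_transcript_request(video_url: str) -> dict:
    return {"video_url": video_url}

def extract_transcript(response: dict) -> str:
    """Return the transcript text, raising if the server reported an error."""
    if response.get("error"):
        raise RuntimeError(f"transcript extraction failed: {response['error']}")
    return response["transcript"]

# Sample response matching the documented shape.
sample_response = json.loads("""{
  "video_id": "VIDEO_ID",
  "video_title": "Video Title",
  "transcript": "Full transcript text...",
  "language": "en",
  "duration_seconds": 600,
  "channel_name": "Channel Name",
  "error": null
}""")

print(extract_transcript(sample_response))  # Full transcript text...
```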
2. Transcript Summarization
Endpoint: /api/v1/summarize
Method: POST
Description: Generate a summary from a video transcript
Request Body:
Response:
{
"summary": "Generated summary text...",
"word_count": 200,
"key_points": [
"Key point 1",
"Key point 2",
"Key point 3"
]
}
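A small sketch of sanity-checking a `/api/v1/summarize` response before passing it downstream; the field names follow the documented example, while the word budget is an arbitrary assumption:

```python
def validate_summary(response: dict, max_words: int = 300) -> list:
    """Collect basic consistency problems in a summarize response."""
    problems = []
    if not response.get("summary"):
        problems.append("empty summary")
    if response.get("word_count", 0) > max_words:
        problems.append("summary exceeds word budget")
    if not response.get("key_points"):
        problems.append("no key points returned")
    return problems

sample = {
    "summary": "Generated summary text...",
    "word_count": 200,
    "key_points": ["Key point 1", "Key point 2", "Key point 3"],
}
print(validate_summary(sample))  # []
```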
3. LinkedIn Post Generation
Endpoint: /api/v1/generate-post
Method: POST
Description: Generate a LinkedIn post from a video summary
Request Body:
Response:
{
"post_content": "Generated LinkedIn post content...",
"character_count": 800,
"estimated_read_time": "About 1 minute",
"hashtags_used": [
"#ai",
"#machinelearning"
]
}
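LinkedIn enforces a length cap on posts (about 3,000 characters at the time of writing), so it is worth verifying a draft against the documented `character_count` field before publishing. A minimal sketch:

```python
LINKEDIN_POST_LIMIT = 3000  # approximate current LinkedIn post cap

def check_post(response: dict) -> bool:
    """True if the draft fits the limit and the reported count matches."""
    content = response["post_content"]
    return (len(content) <= LINKEDIN_POST_LIMIT
            and response["character_count"] == len(content))

draft = {"post_content": "Short test draft.",
         "character_count": len("Short test draft.")}
print(check_post(draft))  # True
```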
4. Output Formatting
Endpoint: /api/v1/output
Method: POST
Description: Format the LinkedIn post for output
Request Body:
{
"post_content": "LinkedIn post content...",
"format": "json"
}
Response:
{
"content": {
"post_content": "LinkedIn post content...",
"character_count": 800
},
"format": "json"
}
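Chaining the four endpoints is straightforward: each response feeds the next request. The request-body field names below are assumptions (they are not documented above), so treat this as a sketch of the data flow rather than the exact schema:

```python
# Hypothetical request-body field names -- adjust to the live API schema.
# Data flow: /transcript -> /summarize -> /generate-post -> /output
def summarize_request(transcript_resp: dict) -> dict:
    return {"transcript": transcript_resp["transcript"]}

def post_request(summary_resp: dict) -> dict:
    return {"summary": summary_resp["summary"],
            "key_points": summary_resp["key_points"]}

def output_request(post_resp: dict) -> dict:
    return {"post_content": post_resp["post_content"], "format": "json"}

transcript_resp = {"transcript": "Full transcript text...", "error": None}
print(summarize_request(transcript_resp))  # {'transcript': 'Full transcript text...'}
```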
Environment Variables
| Variable | Description | Required |
|---|---|---|
| OPENAI_API_KEY | OpenAI API key for summarization and post generation | No (can be provided in requests) |
| YOUTUBE_API_KEY | YouTube Data API key for fetching video metadata | No (can be provided in requests) |
| PORT | Port to run the server on (default: 8000) | No |
Note: While environment variables for API keys are optional (as they can be provided in each request), it’s recommended to set them for local development and testing. When deploying to Smithery, users will need to provide their own API keys in the requests.
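The fallback behavior the note describes can be sketched as follows; the per-request field name `openai_api_key` is an assumption, since the request schemas are not documented above:

```python
import os

def resolve_api_key(request_body: dict, env_var: str = "OPENAI_API_KEY"):
    """Prefer a key supplied in the request body; fall back to the environment.

    The request field name "openai_api_key" is hypothetical.
    """
    return request_body.get("openai_api_key") or os.environ.get(env_var)

os.environ.setdefault("OPENAI_API_KEY", "env-key")
print(resolve_api_key({"openai_api_key": "per-request-key"}))  # per-request-key
```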
License
MIT
Dev Tools Supporting MCP
The following are the main code editors that support the Model Context Protocol. Click a link to visit the official website for more information.