Prompt Library
What is Prompt Library
Prompt-library is a tool for running prompts against files using various LLM models, with support for the Model Context Protocol (MCP).
Use cases
Use cases include summarizing documents, extracting key themes, generating semantic commit messages, and creating API documentation.
How to use
To use prompt-library, install the `llm` command line tool, clone the repository, and make the script executable. You can run prompts using the command `run-prompt <prompt_file> <input_file>`, with an optional model specification.
Key features
Key features include support for multiple LLM models, a variety of available prompts for content analysis, code review, and repository analysis, as well as ZSH completion for ease of use.
Where to use
Prompt-library can be used in fields such as software development, content creation, and data analysis, where automated prompt execution can enhance productivity.
Content
Prompt Runner
A simple tool to run prompts against files using various LLM models.
Installation
- Make sure you have the `llm` command line tool installed
- Clone this repository
- Make the script executable: `chmod +x run-prompt`
ZSH Completion
To enable zsh completion for run-prompt, add this to your `.zshrc` (update `PROMPT_LIBRARY_PATH` to wherever you have this installed):

```
PROMPT_LIBRARY_PATH="/Users/wschenk/prompt-library"
fpath=($PROMPT_LIBRARY_PATH $fpath)
export PATH="$PROMPT_LIBRARY_PATH:$PATH"
autoload -Uz compinit
compinit
```
Usage
CLI
`run-prompt <prompt_file> <input_file>`
You can optionally specify a different model using the `MODEL` environment variable:
`MODEL=claude-3.7-sonnet run-prompt <prompt_file> <input_file>`
The default model is `claude-3.7-sonnet`.
MCP Server
`npx @modelcontextprotocol/inspector uv run run-prompt mcp`
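To wire the server into an MCP client, the usual pattern is a config entry that launches the same command. Here is a sketch; the `mcpServers` shape is the convention used by several clients, but the exact schema, config file location, and working directory depend on your client, so treat the values as assumptions:

```shell
# Example MCP client config entry (hypothetical; check your client's docs)
CONFIG_JSON='{
  "mcpServers": {
    "prompt-library": {
      "command": "uv",
      "args": ["run", "run-prompt", "mcp"]
    }
  }
}'
echo "$CONFIG_JSON"
```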
Available Prompts
Content Analysis
- summarize.md - Generate 5 different two-sentence summaries to encourage readership
- key-themes.md - Extract key themes from the input text
- linkedin.md - Format content as an engaging LinkedIn post
Code Review
- lint.md - Assess code quality and suggest improvements
- git-commit-message.md - Generate semantic commit messages from code diffs
Repository Analysis
- architecture-review.md - Review architectural patterns and decisions
- api-documentation.md - Generate API documentation
- performance-review.md - Analyze performance considerations
- security-review.md - Review security implications
- developer-guide.md - Create developer documentation
Examples
Summarize a README file:
`./run-prompt content/summarize README.md`
Extract key themes from a document:
`./run-prompt content/key-themes document.txt`
Format content for LinkedIn:
`./run-prompt content/linkedin article.txt`
Generate a commit message:
`./run-prompt code/git-commit-message.md diff.txt`
Review code quality:
`./run-prompt code/lint.md source_code.py`
Adding New Prompts
Add new prompt files to the appropriate directory:
- `content/` - For content analysis and formatting prompts
- `code/` - For code-related prompts
- `code/repomix/` - For repository analysis prompts
The prompt file should contain the instructions/prompt that will be sent to the LLM along with the content of your input file.
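A prompt file is just those plain-text instructions. As a sketch, creating a hypothetical new content-analysis prompt could look like this (the file name `tldr.md` and its wording are illustrative, not part of the repository):

```shell
# Create a hypothetical content-analysis prompt (name and text are examples)
mkdir -p content
cat > content/tldr.md <<'EOF'
Summarize the following document in exactly three bullet points,
each under 20 words, aimed at a technical audience.
EOF

# This text is what gets sent to the LLM along with your input file
cat content/tldr.md
```

It would then run like any other prompt: `./run-prompt content/tldr.md README.md`.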
Usage examples
ollama

```
cat repomix-output.txt | ollama run gemma3:12b "$(cat ~/prompts/code/repomix/developer-guide.md)"
```

Install the ollama plugin for llm:

```
llm install llm-ollama
```

llm

```
MODEL=${MODEL:-claude-3.7-sonnet}
cat repomix-output.txt | \
  llm -m $MODEL \
  "$(cat ~/prompts/code/repomix/developer-guide.md)"
```
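Functionally, the llm pipeline above is what the runner boils down to: pipe the input file to `llm` with the prompt file's contents as the instruction. Here is a sketch of that pattern as a reusable function; the function name and the `DRY_RUN` switch are illustrative, not part of the actual `run-prompt` script:

```shell
# Sketch of the prompt-running pipeline (not the real run-prompt script)
run_prompt_sketch() {
  local prompt_file="$1" input_file="$2"
  local model="${MODEL:-claude-3.7-sonnet}"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Print the command that would run instead of calling llm
    echo "cat $input_file | llm -m $model \"\$(cat $prompt_file)\""
  else
    cat "$input_file" | llm -m "$model" "$(cat "$prompt_file")"
  fi
}

# Dry run, so this works without the llm tool installed
DRY_RUN=1 run_prompt_sketch content/summarize.md README.md
```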
Prompt Viewer PWA
This repository also includes a Progressive Web App (PWA) for browsing and copying prompts on mobile devices.
Features
- 📱 Install as a mobile app
- 🔍 Browse all prompts with folder navigation
- 📋 One-click copy to clipboard
- 🌐 Works offline
- 🌓 Auto light/dark mode
Deployment
The PWA is located in the pwa/ directory. To deploy it on GitHub Pages:
- Go to Settings → Pages in your GitHub repository
- Select “Deploy from a branch”
- Choose your main branch and the `/pwa` folder as the source
- Save the settings
After a few minutes, your PWA will be available at:
https://the-focus-ai.github.io/prompt-library/
Using the PWA
- Visit the URL on your mobile device
- You’ll see an “Add to Home Screen” prompt (or use browser menu)
- Once installed, it works like a native app
- Click refresh to cache all prompts for offline use
For more details, see pwa/deployment.md.