MCP Waifu Chat
What is MCP Waifu Chat
mcp-waifu-chat is a FastMCP server designed for a conversational AI waifu character. It manages user interactions, dialog history, and basic chat functionalities, utilizing SQLite for data persistence.
Use cases
Use cases for mcp-waifu-chat include creating interactive chatbots for games, developing virtual companions for entertainment, and providing automated customer support through AI-driven conversations.
How to use
To use mcp-waifu-chat, clone the repository, install the required dependencies, create a virtual environment, and configure the server using environment variables or a .env file. Start the server to begin interacting with the AI waifu.
Key features
Key features include user management, dialog history storage, basic chat functionality backed by the Google Gemini API, modular design for easy extension, and comprehensive unit testing.
Where to use
mcp-waifu-chat can be used in various fields such as entertainment, gaming, and customer support, where conversational AI can enhance user engagement and interaction.
MCP Waifu Chat Server
This project implements a basic MCP (Model Context Protocol) server for a conversational AI “waifu” character. It uses the mcp library for Python to handle the protocol details and FastMCP for easy server setup.
Features
- User management (create, check existence, delete, count)
- Dialog history storage (get, set, reset)
- Basic chat functionality (using Google Gemini API)
- Modular design for easy extension
- Configuration via environment variables and API key file
- SQLite database for persistence
- Comprehensive unit tests
Requirements
- Python 3.10+
- uv
- A Google Gemini API Key
Installation
- Clone the repository:
git clone <repository_url>
cd mcp-waifu-chat
- Install uv (if not installed). With curl:
curl -LsSf https://astral.sh/uv/install.sh | sh
Or with powershell:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
- Create and activate a virtual environment:
uv venv
source .venv/bin/activate  # On Linux/macOS
.\.venv\Scripts\activate  # On Windows
- Install dependencies (this now includes google-generativeai):
uv sync --all-extras --dev
Configuration
The server uses a combination of a file for the API key and environment variables (or a .env file) for other configurations.
API Key:
- The Google Gemini API key is read directly from the file ~/.api-gemini (i.e., a file named .api-gemini in your home directory). Ensure this file exists and contains only your API key. You can obtain a key from Google AI Studio.
Other Configuration (.env file or environment variables):
An example .env.example file is provided for other settings:
DATABASE_FILE=dialogs.db
DEFAULT_RESPONSE="I'm sorry, I'm having trouble connecting to the AI model."
DEFAULT_GENRE="Fantasy"
FLASK_PORT=5000
GEMINI_MODEL_NAME=gemini-2.5-pro
- DATABASE_FILE: Path to the SQLite database file (default: dialogs.db).
- DEFAULT_RESPONSE: The default response to send when the AI model is unavailable (default: "The AI model is currently unavailable. Please try again later.").
- DEFAULT_GENRE: The default conversation genre (default: "Romance").
- FLASK_PORT: The port the server will listen on (default: 5000).
- GEMINI_MODEL_NAME: The specific Gemini model to use (default: gemini-2.5-pro).
Copy .env.example to .env and customize the values as needed (except for the API key, which is read from ~/.api-gemini).
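The loading order described above can be sketched roughly as follows. The field names come from .env.example; the helper functions are illustrative assumptions, not the actual config.py API:

```python
import os
from pathlib import Path

# Defaults mirror the values listed above; these helpers are a sketch,
# not the real config.py implementation.
DEFAULTS = {
    "DATABASE_FILE": "dialogs.db",
    "DEFAULT_GENRE": "Romance",
    "FLASK_PORT": "5000",
    "GEMINI_MODEL_NAME": "gemini-2.5-pro",
}

def load_setting(name: str) -> str:
    """An environment variable (or .env entry) wins; otherwise use the default."""
    return os.environ.get(name, DEFAULTS[name])

def load_api_key() -> str:
    """Read the Gemini API key from ~/.api-gemini, stripping whitespace."""
    key_file = Path.home() / ".api-gemini"
    return key_file.read_text().strip()
```

Note that the API key deliberately never passes through the environment; only the non-secret settings do.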
Running the Server
Ensure your ~/.api-gemini file is set up correctly. Then, to run the server, use:
uv run mcp-waifu-chat
This runs the mcp_waifu_chat/api.py file (since that’s where the FastMCP instance is defined) and starts up the server.
Running Tests
To run the unit tests:
uv run pytest
This will execute all tests in the tests/ directory using pytest. The tests include database tests and API endpoint tests. Note: API tests currently do not cover the live Gemini interaction and may need adaptation for mocking.
API Endpoints
The server provides the following MCP-compliant endpoints (using FastMCP’s automatic routing):
Server Status
/v1/server/status (GET): Checks the server status. Returns {"status": "ok"}. This is a standard MCP endpoint.
User Management Tools
These are implemented as MCP tools.
- create_user(user_id: str): Creates a new user.
- check_user(user_id: str): Checks if a user exists. Returns {"user_id": str, "exists": bool}.
- delete_user(user_id: str): Deletes a user.
- user_count: Returns the total number of users in the database.
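A rough sketch of how these tools might map onto the SQLite layer. The table name and schema here are assumptions for illustration, not the actual db.py schema:

```python
import sqlite3

# Assumed schema -- the real db.py schema may differ.
def init_db(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS users (user_id TEXT PRIMARY KEY)")

def create_user(conn: sqlite3.Connection, user_id: str) -> None:
    # INSERT OR IGNORE makes creation idempotent for existing users.
    conn.execute("INSERT OR IGNORE INTO users (user_id) VALUES (?)", (user_id,))

def check_user(conn: sqlite3.Connection, user_id: str) -> dict:
    row = conn.execute(
        "SELECT 1 FROM users WHERE user_id = ?", (user_id,)
    ).fetchone()
    return {"user_id": user_id, "exists": row is not None}

def delete_user(conn: sqlite3.Connection, user_id: str) -> None:
    conn.execute("DELETE FROM users WHERE user_id = ?", (user_id,))

def user_count(conn: sqlite3.Connection) -> int:
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```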
Dialog Management Tools
reset_dialog(user_id: str): Resets (clears) the stored dialog history for the given user.
Resources
- /v1/user/dialog/json/{user_id}: Dynamic resource that returns the user's dialog as JSON.
- /v1/user/dialog/str/{user_id}: Dynamic resource that returns the user's dialog as a string.
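The utils helpers mentioned in the project structure (dialog_to_json and json_to_dialog) suggest these two representations are interconvertible. A minimal sketch of such a pair, assuming the dialog is a list of role/text turns (the actual structure in utils.py may differ):

```python
import json

# Assumed dialog shape: a list of {"role": ..., "text": ...} turns.
def dialog_to_json(dialog: list) -> str:
    """Serialize a dialog (list of turns) to a JSON string."""
    return json.dumps({"dialog": dialog})

def json_to_dialog(payload: str) -> list:
    """Parse the JSON form back into a list of turns."""
    return json.loads(payload)["dialog"]
```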
Chat Tool
chat(message: str, user_id: str): Sends a chat message and gets a response generated by the configured Google Gemini model.
LLM Integration (Google Gemini)
The ai.generate_response function in mcp_waifu_chat/ai.py now integrates directly with the Google Gemini API.
- It uses the google-generativeai library.
- The API key is read from ~/.api-gemini.
- The specific model used is determined by the GEMINI_MODEL_NAME environment variable (or the default in config.py).
- Basic error handling and safety checks are included.
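The error-handling behavior, combined with the DEFAULT_RESPONSE setting above, can be sketched as follows. The real ai.generate_response calls the google-generativeai library directly; here the model call is injected as a plain callable so the fallback path can be shown without network access (the function signature is an illustration, not the actual ai.py API):

```python
DEFAULT_RESPONSE = "The AI model is currently unavailable. Please try again later."

def generate_response(message: str, call_model) -> str:
    """Return the model's reply, or DEFAULT_RESPONSE if the model fails.

    call_model stands in for the Gemini client call; any exception or
    empty reply falls back to the configured default response.
    """
    try:
        reply = call_model(message)
    except Exception:
        return DEFAULT_RESPONSE
    return reply or DEFAULT_RESPONSE
```

The real implementation also applies Gemini's safety checks; this sketch only illustrates the unavailable-model fallback.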
Deploying to Production
For a production deployment, you should:
- Use a production-ready WSGI/ASGI server: Gunicorn is recommended and included in pyproject.toml. Example command:
gunicorn --workers 4 --bind 0.0.0.0:8000 mcp_waifu_chat.api:app -k uvicorn.workers.UvicornWorker
This runs the app object (our FastMCP instance) from mcp_waifu_chat/api.py using 4 Uvicorn workers managed by Gunicorn, listening on port 8000. Adjust the number of workers and the port as needed.
- Use a robust database: Consider PostgreSQL or MySQL instead of SQLite for higher concurrency and scalability.
- Implement proper logging: Configure logging to write to files, a centralized logging service, or a monitoring system.
- Secure your server: Use HTTPS, implement authentication/authorization, and follow security best practices for web applications.
- Consider a reverse proxy: Use a reverse proxy like Nginx or Apache to handle TLS termination, load balancing, and static file serving.
- Containerize: Use Docker to simplify deployment.
Project Structure Explanation
mcp_waifu_chat/ (Main Package):
- __init__.py: Makes the directory a Python package.
- api.py: The core FastMCP application, tool/resource definitions, and request handling logic.
- config.py: Handles loading and validating configuration settings.
- db.py: All database interaction logic (creating tables, querying, updating).
- models.py: Pydantic models for request/response data validation and serialization.
- utils.py: Helper functions, like dialog_to_json and json_to_dialog.
- ai.py: Responsible for interacting with the Google Gemini AI model.
tests/ (Test Suite):
- conftest.py: pytest configuration, including fixtures for the test database and test client.
- test_db.py: Unit tests for the db.py module.
- test_api.py: Unit tests for the API endpoints in api.py.
run.py: Simple script to run the server (note: uv run mcp-waifu-chat is preferred).
This structure promotes modularity, testability, and maintainability. Each module has a specific responsibility, making it easier to understand, modify, and extend the codebase.