Mcp Mem0 Docker
What is Mcp Mem0 Docker
mcp-mem0-docker is a long-term memory management solution for AI agents based on the mem0.ai framework. It allows users to store, retrieve, and manage memories efficiently using the MCP protocol.
Use cases
Use cases include enhancing AI chatbots with long-term memory, developing personalized AI assistants, and creating applications that require context-aware interactions.
How to use
To use mcp-mem0-docker, ensure you have Docker and Ollama installed. Pull the necessary models and embeddings, run the MCP server using Docker Compose, and configure your MCP client to connect to the server.
Key features
Key features include memory management tools such as memory_save for storing information, memory_get_all for retrieving memories, memory_search for semantic searching, and memory_delete for removing specific memories.
Where to use
mcp-mem0-docker can be used in various fields including AI development, natural language processing, and any application requiring persistent memory for AI agents.
MCP-Mem0-docker: Long-Term Memory for AI Agents
Goals
- Own your memories, Own your data!
- Run locally on any computer with Docker and Ollama.
Plus
- Use any LLM provider; currently supported: OpenAI, OpenRouter, and Ollama.
- Provide a simple and effective way to manage long-term memory for AI agents using the MCP protocol.
- Enhance your AI’s capabilities with persistent memory storage.
Features
Based on mem0.ai
Provides 4 tools for managing long-term memory:
- memory_save: Store any information in long-term memory.
- memory_get_all: Retrieve stored memories in contextual format.
- memory_search: Find memories using semantic search.
- memory_delete: Delete specific memories found via search.
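The four tools map onto operations like the following. This is a hypothetical in-memory sketch of their semantics only, not the server's actual implementation: the real server persists memories to PostgreSQL and uses embedding-based semantic search via mem0, for which plain substring matching stands in here.

```python
class MemoryStore:
    """Toy stand-in for the mem0-backed memory tools."""

    def __init__(self):
        self._memories: list[str] = []

    def memory_save(self, text: str) -> None:
        """Store a piece of information in long-term memory."""
        self._memories.append(text)

    def memory_get_all(self) -> list[str]:
        """Retrieve all stored memories."""
        return list(self._memories)

    def memory_search(self, query: str) -> list[str]:
        """Find memories matching a query (substring match stands in
        for the real semantic search)."""
        return [m for m in self._memories if query.lower() in m.lower()]

    def memory_delete(self, query: str) -> int:
        """Delete memories matching a query; returns how many were removed."""
        matches = set(self.memory_search(query))
        self._memories = [m for m in self._memories if m not in matches]
        return len(matches)


store = MemoryStore()
store.memory_save("User prefers dark mode")
store.memory_save("User's favorite language is Python")
print(store.memory_search("dark"))    # memories mentioning "dark"
print(store.memory_delete("Python"))  # number of memories removed
print(store.memory_get_all())         # what remains in the store
```

In the real server these operations are exposed as MCP tools, so any MCP client can call them over the configured transport.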
Prerequisites
- Ollama running locally.
- Docker.
- Any MCP client (VS Code, Claude Desktop, Cursor, Windsurf, etc.)
Fast Start
Pull models and embeddings to Ollama
Make sure you have Ollama installed and running locally. You can pull the models and embeddings using the following commands:
ollama pull qwen2.5:3b
ollama pull nomic-embed-text
ollama serve
Run the MCP Server and the database
docker compose up -d
Configure the MCP Client
For example, if you are using the MCP client in VS Code, you can create a configuration file such as ~/.mcp/config.json:
{
"mcpServers": {
"mem0": {
"transport": "sse",
"url": "http://localhost:8050/sse"
}
}
}
Windsurf
Use serverUrl instead of url in your configuration:
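For instance, mirroring the VS Code configuration above (assuming Windsurf accepts the same mcpServers layout):

```json
{
  "mcpServers": {
    "mem0": {
      "transport": "sse",
      "serverUrl": "http://localhost:8050/sse"
    }
  }
}
```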
Development
The easiest way to develop or experiment is to follow the Fast Start section and bring up the server and database with Docker Compose. You can then stop the MCP server container, keep the database running, and run the server locally with uv. You will need:
- Python 3.12+
- uv
Using uv
- Install uv:
  pip install uv
  uv venv .venv
  .venv\Scripts\activate
  (on Windows; on Linux/macOS use source .venv/bin/activate)
- Install dependencies:
  uv pip install -e .
- Create a .env file based on .env.example:
  cp .env.example .env
- Configure your environment variables in the .env file (see Configuration section).
- Start the server locally:
  uv run src/main.py
- Rebuild the image from scratch with Docker Compose when needed:
  docker compose build --no-cache
  docker compose up -d
Configuration .env
The following environment variables can be configured in your .env file:
| Variable | Description | Default | Notes |
|---|---|---|---|
| TRANSPORT | Transport protocol (sse or stdio) | sse | Recommended to use sse for better performance |
| HOST | Host to bind to when using SSE transport | 0.0.0.0 | Allows access from any IP address |
| PORT | Port to listen on when using SSE transport | 8050 | Change if needed |
| LLM_PROVIDER | LLM provider (openai, openrouter, or ollama) | ollama | Use ollama for local models |
| LLM_BASE_URL | Base URL for the LLM API | http://localhost:11434 | Default for Ollama |
| LLM_API_KEY | API key for the LLM provider | Empty | Required for OpenAI and OpenRouter |
| LLM_MODEL | LLM model to use | qwen2.5:3b | Change to your desired model |
| LLM_EMBEDDING_MODEL | Embedding model to use | nomic-embed-text | Change to your desired embedding model |
| DATABASE_URL | PostgreSQL connection string | postgresql://user:pass@host:port/db | Change to your PostgreSQL connection string |
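Putting the defaults from the table together, a local-Ollama setup's .env might look like the following; the DATABASE_URL here is a placeholder to replace with your actual connection string:

```
TRANSPORT=sse
HOST=0.0.0.0
PORT=8050
LLM_PROVIDER=ollama
LLM_BASE_URL=http://localhost:11434
LLM_API_KEY=
LLM_MODEL=qwen2.5:3b
LLM_EMBEDDING_MODEL=nomic-embed-text
DATABASE_URL=postgresql://user:pass@host:port/db
```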
Note for n8n users
Use host.docker.internal instead of localhost so that the MCP server can be accessed from the n8n container. This is necessary because localhost in the n8n container refers to the container itself, not your host machine.
The URL in the MCP node would be: http://host.docker.internal:8050/sse