MCP-Mem0-docker: Long-Term Memory for AI Agents based on mem0.ai

Overview

What is mcp-mem0-docker

mcp-mem0-docker is a long-term memory management solution for AI agents based on the mem0.ai framework. It allows users to store, retrieve, and manage memories efficiently using the MCP protocol.

Use cases

Use cases include enhancing AI chatbots with long-term memory, developing personalized AI assistants, and creating applications that require context-aware interactions.

How to use

To use mcp-mem0-docker, ensure you have Docker and Ollama installed. Pull the necessary models and embeddings, run the MCP server using Docker Compose, and configure your MCP client to connect to the server.

Key features

Key features include memory management tools such as memory_save for storing information, memory_get_all for retrieving memories, memory_search for semantic searching, and memory_delete for removing specific memories.

Where to use

mcp-mem0-docker can be used in various fields including AI development, natural language processing, and any application requiring persistent memory for AI agents.

Content

MCP-Mem0-docker: Long-Term Memory for AI Agents

Goals

  • Own your memories, Own your data!
  • Run locally on any computer with Docker and Ollama.

Plus

  • Use any supported LLM provider (currently OpenAI, OpenRouter, or Ollama).
  • Provide a simple and effective way to manage long-term memory for AI agents using the MCP protocol.
  • Enhance your AI’s capabilities with persistent memory storage.

Features

Based on mem0.ai

Provides 4 tools for managing long-term memory:

  1. memory_save: Store any information in long-term memory.
  2. memory_get_all: Retrieve all stored memories in a contextual format.
  3. memory_search: Find memories using semantic search.
  4. memory_delete: Delete a specific memory found via semantic search.
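
These are standard MCP tools, so any client invokes them through a JSON-RPC tools/call request. As a sketch, a memory_save call might look like the following (the argument name "text" is an assumption; check the actual input schema with a tools/list request):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "memory_save",
    "arguments": { "text": "The user prefers dark mode." }
  }
}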

Prerequisites

  • Ollama running locally.
  • Docker.
  • Any MCP client (VS Code, Claude Desktop, Cursor, Windsurf, etc.)

Fast Start

Pull the LLM and embedding models into Ollama

Make sure you have Ollama installed and running locally. You can pull the default models with the following commands:

ollama pull qwen2.5:3b
ollama pull nomic-embed-text
ollama serve   # start the Ollama server if it is not already running

Run the MCP Server and the database

docker compose up -d
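
This brings up both the MCP server and its PostgreSQL database. For orientation, a compose file for this kind of setup typically looks roughly like the sketch below; the service names, image tag, and credentials are illustrative, not the repo's actual values:

services:
  mcp-mem0:
    build: .
    ports:
      - "8050:8050"
    env_file: .env
    depends_on:
      - db
  db:
    image: pgvector/pgvector:pg16
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: mem0
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata: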

Configure the MCP Client

For example, if you are using the MCP client in VS Code, you can create a configuration file (for example ~/.mcp/config.json) like this:

{
  "mcpServers": {
    "mem0": {
      "transport": "sse",
      "url": "http://localhost:8050/sse"
    }
  }
}
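
With the server up, you can sanity-check the SSE endpoint from a terminal (assuming the default host and port); it should hold the connection open and stream events:

curl -N http://localhost:8050/sse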

Windsurf

Use serverUrl instead of url in your configuration. Assuming the same endpoint as above, the entry would look like this:
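
{
  "mcpServers": {
    "mem0": {
      "transport": "sse",
      "serverUrl": "http://localhost:8050/sse"
    }
  }
}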

Development

The best way to develop or tweak the project is to follow the Fast Start section and bring up the server and database with docker compose. You can then stop the MCP server container, keep the database running, and run the server locally with uv (see the sketch after this list). You will need:

  • Python 3.12+
  • uv
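
For example, assuming the compose service for the MCP server is named mcp-mem0 (check the actual service name in the repo's compose file):

docker compose stop mcp-mem0   # the database container keeps running
uv run src/main.py             # run the server locally against the dockerized database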

Using uv

  1. Install uv:

    pip install uv
    uv venv .venv
    .venv\Scripts\activate          # Windows
    # On macOS/Linux: source .venv/bin/activate
    
  2. Install dependencies:

    uv pip install -e .
    
  3. Create a .env file based on .env.example:

    cp .env.example .env
    
  4. Configure your environment variables in the .env file (see Configuration section)

  5. Run the server locally while you develop:

    uv run src/main.py
    
  6. Rebuild the Docker image without cache and restart:

    docker compose build --no-cache
    docker compose up -d
    

Configuration .env

The following environment variables can be configured in your .env file:

| Variable | Description | Default | Notes |
|---|---|---|---|
| TRANSPORT | Transport protocol (sse or stdio) | sse | sse is recommended for better performance |
| HOST | Host to bind to when using SSE transport | 0.0.0.0 | Allows access from any IP address |
| PORT | Port to listen on when using SSE transport | 8050 | Change if needed |
| LLM_PROVIDER | LLM provider (openai, openrouter, or ollama) | ollama | Use ollama for local models |
| LLM_BASE_URL | Base URL for the LLM API | http://localhost:11434 | Default for Ollama |
| LLM_API_KEY | API key for the LLM provider | (empty) | Required for OpenAI and OpenRouter |
| LLM_MODEL | LLM model to use | qwen2.5:3b | Change to your desired model |
| LLM_EMBEDDING_MODEL | Embedding model to use | nomic-embed-text | Change to your desired embedding model |
| DATABASE_URL | PostgreSQL connection string | postgresql://user:pass@host:port/db | Change to match your PostgreSQL setup |
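
Putting the defaults together, a .env for an all-local Ollama setup might look like this (the database name and credentials below are placeholders; match them to your compose file):

TRANSPORT=sse
HOST=0.0.0.0
PORT=8050
LLM_PROVIDER=ollama
# If the MCP server itself runs in Docker, Ollama on the host is typically
# reachable at http://host.docker.internal:11434 instead of localhost.
LLM_BASE_URL=http://localhost:11434
LLM_API_KEY=
LLM_MODEL=qwen2.5:3b
LLM_EMBEDDING_MODEL=nomic-embed-text
DATABASE_URL=postgresql://user:pass@localhost:5432/mem0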

Note for n8n users

Use host.docker.internal instead of localhost so that the MCP server can be accessed from the n8n container. This is necessary because localhost in the n8n container refers to the container itself, not your host machine.

The URL in the MCP node would be: http://host.docker.internal:8050/sse
