ThirdBrAIn MCP OpenAI Agent
What is ThirdBrAIn MCP OpenAI Agent
thirdbrain-mcp-openai-agent is an MCP agent designed for OpenAI, created during the oTTomator hackathon. It enables the LiveAgentStudio to function similarly to Claude Desktop but is tailored for OpenAI-compatible models, making MCP tools accessible to a range of LLMs such as DeepSeek.
Use cases
Use cases for thirdbrain-mcp-openai-agent include planning activities like ski trips by querying weather conditions and comparing ski resorts. It can also be applied in customer service scenarios where automated responses are generated based on real-time data.
How to use
To use thirdbrain-mcp-openai-agent, follow the detailed installation instructions provided in the hackathon documentation. Ensure you have the necessary environment set up, including Python 3.12, pip, and an OpenAI or DeepSeek API key if required. You can run the agent in environments like Docker, Node, or Python.
Key features
Key features of thirdbrain-mcp-openai-agent include the ability to centralize resources, prompts, and tools across platforms, enabling a DRY (Don’t Repeat Yourself) approach for distributed agentic systems. It supports various OpenAI models and allows for iterative tool calling to gather information, such as weather forecasts.
Where to use
thirdbrain-mcp-openai-agent can be used in various fields, particularly in applications requiring natural language processing, such as chatbots, virtual assistants, and any system that benefits from accessing multiple data sources and tools through OpenAI-compatible models.
ThirdBrAIn MCP OpenAI agent
This agent, created for the oTTomator hackathon, makes the LiveAgentStudio work in similar ways to Claude Desktop, but for OpenAI-compatible models.
It makes MCP tools accessible to OpenAI-compatible LLMs via the chat-completions API.
This enables the use of MCP servers across a much wider range of models such as:
- OpenAI models, evidently, such as GPT-4o-mini and GPT-4o
- deepseek-chat (currently v3); at this point deepseek-r1 does not support function calling
- Ollama with DeepSeek variants (not yet tested)
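Because MCP tool definitions already use JSON Schema for their inputs, they map almost directly onto the chat-completions tool format. A minimal sketch of that adaptation is below; the `get_forecast` tool and its schema are hypothetical stand-ins for whatever the MCP server actually advertises:

```python
# Sketch: adapting an MCP tool definition to the OpenAI chat-completions
# "tools" format. The example tool ("get_forecast") is hypothetical; real
# definitions are listed by the MCP server at runtime.

def mcp_tool_to_openai(tool: dict) -> dict:
    """Wrap an MCP tool (name, description, inputSchema) as an OpenAI tool."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP input schemas are JSON Schema, so they map directly.
            "parameters": tool.get(
                "inputSchema", {"type": "object", "properties": {}}
            ),
        },
    }

mcp_tool = {
    "name": "get_forecast",
    "description": "Get the weather forecast for a US location.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "latitude": {"type": "number"},
            "longitude": {"type": "number"},
        },
        "required": ["latitude", "longitude"],
    },
}

openai_tool = mcp_tool_to_openai(mcp_tool)
print(openai_tool["function"]["name"])  # get_forecast
```

The resulting list of wrapped tools can be passed as the `tools` argument of a chat-completions request.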
MCP makes it possible to centralize resources (RAG), prompts and tools and use the same code across platforms.
It is an extreme DRY concept for distributed agentic systems that hides the implementation details.
For more information see https://modelcontextprotocol.org
Demo
Planning a ski weekend
The agent has built-in Python code for a US weather forecast service.
I asked where to go skiing around Denver next weekend, then made follow-up calls for exact weather conditions and a comparison between ski resorts. This demonstrates iterative tool calling: the agent keeps calling tools until the weather forecast for each ski resort is obtained and recommendations can be made.
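The iterative tool-calling pattern in the demo can be sketched as a loop that keeps calling the model until it stops requesting tools. The model and tool layer below are deliberately stubbed stand-ins for the OpenAI client and the MCP servers, not the agent's actual code:

```python
# Sketch of iterative tool calling: loop until the model returns a final
# answer instead of more tool calls. fake_model and fake_tool are stubs.

def fake_model(messages):
    # Pretend the model asks for a forecast once, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"name": "get_forecast",
                                "args": {"resort": "Breckenridge"}}]}
    return {"content": "Breckenridge looks best: fresh snow expected."}

def fake_tool(name, args):
    return f"{args['resort']}: 8in fresh snow, -5C"

def run_agent(question, model, call_tool, max_rounds=5):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_rounds):
        reply = model(messages)
        calls = reply.get("tool_calls")
        if not calls:
            return reply["content"]  # final answer, stop looping
        for call in calls:
            result = call_tool(call["name"], call["args"])
            messages.append({"role": "tool", "content": result})
    raise RuntimeError("too many tool rounds")

answer = run_agent("Where should I ski near Denver?", fake_model, fake_tool)
print(answer)  # Breckenridge looks best: fresh snow expected.
```

The `max_rounds` cap is a common safeguard so a model that keeps requesting tools cannot loop forever.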

Installation
If you are unfamiliar with the hackathon agents, follow the detailed installation instructions provided for the hackathon.
Live Agent Studio Hackathon Readme
Specifics for the MCP OpenAI agent
This agent uses or has been tested with:
- Python 3.12
- pip
- Supabase
- OpenAI API Key, or Deepseek API key, or no key if using Ollama
- Appropriate runtime environment for the MCP server commands you want to use, such as:
  - Docker
  - Node / npx
  - uv / uvx
  - Python 3
Note:
Anthropic and Docker have published servers that are easy to install.
PulseMCP website has the most comprehensive collection of MCP servers.
.env
Rename the env example file
mv .env.example .env
Use your favourite editor (mine is micro - like nano but with mouse support and sensible keybindings) and update .env
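For illustration, loading .env-style `KEY=VALUE` lines can be done with a few lines of stdlib Python (the agent itself may well use a library such as python-dotenv instead; the keys shown are placeholders based on the keys this README mentions):

```python
import os

# Minimal stdlib-only sketch of parsing .env-style KEY=VALUE lines.
# OPENAI_API_KEY and SELECTED are illustrative keys; values are fake.

def load_env(text, env=None):
    env = os.environ if env is None else env
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env.setdefault(key.strip(), value.strip())
    return env

env = load_env("OPENAI_API_KEY=sk-example\n# a comment\nSELECTED=gpt-4o-mini\n",
               env={})
print(env["SELECTED"])  # gpt-4o-mini
```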
mcp_config.json default setup
The mcp_config.json file contains the default configurations for MCP servers. Each server configuration includes the command to execute, arguments, and optional environment variables. This file is essential for the MCP agent to know how to interact with different servers.
An enable option has been added for more flexibility once the agent is running.
Example default setup:
{
  "memory": {
    "command": "docker",
    "args": ["run", "-i", "--rm", "mcp/memory"],
    "env": {},
    "enable": true
  },
  "compute": {
    "command": "docker",
    "args": ["run", "-i", "--rm", "mcp/compute"],
    "env": {},
    "enable": false
  }
}
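One plausible way the agent could consume this file is to load the JSON and start only the servers whose `enable` flag is true. A sketch under that assumption, using a config literal that mirrors the example above:

```python
import json

# Sketch: read an mcp_config.json-style document and keep only the servers
# marked "enable": true. This mirrors the example config above; the actual
# agent code may differ.

config_text = """
{
  "memory":  {"command": "docker", "args": ["run", "-i", "--rm", "mcp/memory"],
              "env": {}, "enable": true},
  "compute": {"command": "docker", "args": ["run", "-i", "--rm", "mcp/compute"],
              "env": {}, "enable": false}
}
"""

config = json.loads(config_text)
# Treat a missing "enable" key as enabled, so older configs keep working.
enabled = {name: cfg for name, cfg in config.items() if cfg.get("enable", True)}
print(sorted(enabled))  # ['memory']
```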
Usage
Run the agent
This launches the agent, ready to be called by the oTTomator chatbot:
uvicorn thirdbrain-mcp-openai-agent:app --host 0.0.0.0 --port 8001
You can check if the agent is running by navigating with your browser to
http://localhost:8001/api/thirdbrain-hello
Run standalone MCP Client from CLI
With the standalone mcp_client file, you can test the workings of the MCP client without oTTomator.
The agent and the CLI each have their own specific startup code, but the core library is the same: the agent calls the functions in mcp_client.py.
python mcp_client.py
Slash commands within the chatbot (agent and cli)
Management of the mcp_config.json file is done through slash commands, so that configuration changes are fully deterministic rather than left to the model's interpretation.
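One way to achieve that determinism is an exact-prefix lookup that routes slash commands to fixed handlers and lets everything else fall through to the LLM. The handlers below are hypothetical stubs, not the agent's actual implementations:

```python
# Sketch: deterministic slash-command routing via exact-prefix lookup.
# Handler bodies are stubs standing in for the real config operations.

def handle_list(arg):
    return "listing servers"

def handle_enable(arg):
    return f"enabled: {arg}"

COMMANDS = {"/list": handle_list, "/enable": handle_enable}

def dispatch(line):
    cmd, _, rest = line.strip().partition(" ")
    handler = COMMANDS.get(cmd)
    if handler is None:
        return None  # not a slash command: fall through to the LLM
    return handler(rest)

print(dispatch("/enable server1, server2"))  # enabled: server1, server2
print(dispatch("what is the weather?"))      # None
```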

/list
Lists all enabled and disabled MCP servers known on the system
/functions server-name
Lists all the functions, with their signatures, for the given MCP server
/addMcpServer {servername: { command: X , args: [], env: {} }}
This command allows you to add a new MCP server configuration. The configuration should be in valid JSON format and will be saved in the mcp_config.json file. The command specifies the executable to run, args is an array of arguments for the command, and env is an optional object for environment variables.
Example:
/addMcpServer {"memory": {"command": "docker", "args": ["run", "-i", "--rm", "mcp/memory"]}}
Note the beginning and ending curly braces: typically two, unless you also have an "env": {} section.
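Handling this command plausibly amounts to parsing the JSON payload, validating its shape, and merging it into the loaded config. A sketch under those assumptions (the function name and validation rules are illustrative, not the agent's actual code):

```python
import json

# Sketch: validate an /addMcpServer JSON payload and merge it into the
# in-memory config. Field names follow the mcp_config.json example above.

def add_mcp_server(config: dict, payload: str) -> str:
    entry = json.loads(payload)        # raises ValueError on malformed JSON
    (name, spec), = entry.items()      # exactly one server per command
    if "command" not in spec or not isinstance(spec.get("args", []), list):
        raise ValueError("server spec needs a 'command' and list 'args'")
    spec.setdefault("env", {})         # optional section, default empty
    spec.setdefault("enable", True)    # new servers start enabled
    config[name] = spec
    return name

config = {}
name = add_mcp_server(
    config,
    '{"memory": {"command": "docker", '
    '"args": ["run", "-i", "--rm", "mcp/memory"]}}',
)
print(name, config[name]["enable"])  # memory True
```

After merging, the updated dict would be written back to mcp_config.json.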
/disable [server-name, …]
This command disables one or more specified MCP servers. Provide the server names as a comma-separated list.
Example:
/disable server1, server2
/enable [server-name, …]
This command enables one or more specified MCP servers. Provide the server names as a comma-separated list.
Example:
/enable server1, server2
/dropMcpServer [server-name, …]
This command removes an existing MCP server configuration. Provide the server name you wish to remove.
Example:
/dropMcpServer server1
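The /enable and /disable commands presumably just flip the `enable` flag for each named server in the loaded config. A minimal sketch, assuming a comma-separated name list as shown in the examples above:

```python
# Sketch: flip the "enable" flag for a comma-separated list of server names.
# Unknown names are silently skipped here; the real agent may report them.

def set_enabled(config: dict, names: str, enabled: bool) -> list:
    changed = []
    for name in (n.strip() for n in names.split(",")):
        if name in config:
            config[name]["enable"] = enabled
            changed.append(name)
    return changed

config = {"server1": {"enable": True}, "server2": {"enable": True}}
changed = set_enabled(config, "server1, server2", False)
print(changed, config["server1"]["enable"])  # ['server1', 'server2'] False
```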
Verified working mcp configurations
Official
- No issues found with the @modelcontextprotocol servers.
- They now appear to have both native (npx/uvx) and Docker installation instructions.
Popular:
- Fetch: getting website info
/addMcpServer { "fetch": { "command": "docker", "args": ["run", "-i", "--rm", "mcp/fetch"] } }
Unofficial / Community Released
aidd-ai-software-development-utilities
Good comprehensive readme at
Github repo: skydeckai/mcp-server-aidd
/addMcpServer { "aidd-ai-software-development-utilities": { "command": "uvx", "args": ["mcp-server-aidd"] } }
pandoc
Pandoc is a well-known package used for all kinds of file conversions and has many advanced features, like Word templating, though it is unclear whether everything is supported in the MCP server implementation.
/addMcpServer { "mcp-pandoc": { "command": "uvx", "args": [ "mcp-pandoc" ] } }
FAQ
- Why not use pydantic-ai? See the Known issues section.
- Why SELECTED in .env? It gives more flexibility to test and support additional OpenAI-compatible models without needing code changes.
- Will you use pydantic-ai in the future? Looking at the discussions around the Agent rewrite, this may or may not become feasible, depending on how strict the response validations are. A lot of functionality is hidden under the Pydantic-AI hood.
- Why no setup for OpenRouter and others? I spent so much time trying to figure out why Pydantic AI wouldn't work with OpenAI that I did not test with extra intermediaries. Only once OpenAI worked well did I start testing with deepseek-chat as a really cheap option. If other models are truly OpenAI-compatible, then it should work.
Known issues
Bug in Pydantic-AI packages
This agent is based upon the code in https://github.com/p2c2e/mcp_proxy_pydantic_agent
and the excellent work in progress found at https://github.com/philschmid/mcp-openai-gemini-llama-example.
It is currently impossible to use the pydantic-ai agent system; several issues exist, of which these are the two most relevant:
- It is still recognized as a bug as of the end of January 2025.
- It is being worked on in the major Agent rewrite, as of 28 Jan 2025.