Thinking Models MCP
What is Thinking Models MCP?
thinking_models_mcp is a powerful tool that integrates hundreds of thinking models, frameworks, and methodologies to help users think about problems more systematically and comprehensively. Through the MCP (Model Context Protocol) interface, AI assistants can access these thinking tools and seamlessly apply structured thinking methods in conversations.
Use cases
Use cases for thinking_models_mcp include decision-making in uncertain environments, developing strategic business plans, analyzing risks in projects, and enhancing critical thinking skills in educational settings.
How to use
To use thinking_models_mcp, you need to clone the repository, install the necessary packages, and start the server. Once the server is running, you can send requests through an MCP client to access the thinking model tools, such as recommending models based on problem keywords.
Key features
Key features of thinking_models_mcp include a rich library of thinking models covering decision theory, systems thinking, and probabilistic thinking; intelligent model recommendations based on problem characteristics; an interactive reasoning process that guides users through structured thinking; a learning and adaptation system that continuously improves recommendation algorithms based on user feedback; and the ability to create and combine models for innovative thinking frameworks.
Where to use
thinking_models_mcp can be used in various fields such as education, business strategy, risk management, and personal development, where structured thinking and problem-solving are essential.
“Tianji” — Thinking Models MCP Server

Toolbox for intelligent thinking: Integrating systematic thinking methods into your problem-solving process
Table of Contents
- What is “Tianji”?
- Core Features
- Tools Overview
- Tool Function Parameters and Return Values
- Use Cases
- Quick Start
- Configuration Guide
- Developer Documentation
- License
What is “Tianji”?
“Tianji” is a powerful thinking model MCP server that integrates hundreds of thinking models, frameworks, and methodologies to help users think more systematically and comprehensively about problems. Through the MCP (Model Context Protocol) interface, AI assistants can access these thinking tools and seamlessly apply structured thinking methods to conversations. The name “Tianji” originates from the ancient Chinese saying “Heaven’s secrets must not be revealed,” implying that it helps users uncover deeper patterns of thinking and wisdom.
Core Features
- Rich Library of Thinking Models: Contains classic thinking models across multiple domains including decision theory, systems thinking, and probabilistic thinking
- Intelligent Model Recommendations: Automatically recommends the most suitable thinking models based on problem characteristics
- Interactive Reasoning Process: Guides users through structured thinking, analyzing problems step by step
- Learning and Adaptation System: Continuously improves recommendation algorithms through user feedback
- Model Creation and Combination: Allows creation of new models or combination of existing models to generate innovative thinking frameworks
Tools Overview
Exploration Tools
- list-models: List all thinking models or filter by category
- search-models: Search thinking models by keywords
- get-categories: Get all thinking model categories
- get-model-info: Get detailed information about a thinking model
- get-related-models: Get other models related to a specific model
Problem-Solving Tools
- recommend-models-for-problem: Recommend suitable thinking models based on problem keywords
- interactive-reasoning: Interactive reasoning process guidance
- generate-validate-hypotheses: Generate multiple hypotheses for a problem and provide validation methods
- explain-reasoning-process: Explain the reasoning process of a model and the thinking patterns applied
Creation Tools
- create-thinking-model: Create a new thinking model
- update-thinking-model: Update any field of an existing thinking model, including basic information and visualization data, without recreating the entire model
- emergent-model-design: Create new thinking models by combining existing ones
- delete-thinking-model: Delete unwanted thinking models
System and Learning Tools
- get-started-guide: Beginner’s guide
- get-server-version: Get server version information
- count-models: Count the total number of current thinking models
- record-user-feedback: Record user feedback on thinking model experiences
- detect-knowledge-gap: Detect knowledge gaps in user queries
- get-model-usage-stats: Get usage statistics for thinking models
- analyze-learning-system: Analyze the status of the thinking model learning system
Tool Function Parameters and Return Values
Below are the detailed parameters and return values for all tool functions:
Exploration Tools
list-models
Lists all thinking models or filters by category.
Parameters:
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
- `category` (optional): Main category name
- `subcategory` (optional): Subcategory name (requires the main category to be provided)
- `limit` (optional, default 100): Limit on the number of results returned
Return Value:
search-models
Search thinking models by keywords.
Parameters:
- `query` (required): Search keywords
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
- `limit` (optional, default 10): Limit on the number of results returned
Return Value:
get-categories
Get all thinking model categories.
Parameters:
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
Return Value:
get-model-info
Get detailed information about a thinking model.
Parameters:
- `model_id` (required): Unique ID of the thinking model
- `fields` (optional, default ["basic"]): Fields to return, options: ["all", "basic", "detail", "teaching", "warnings", "visualizations"]
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
Return Value:
get-related-models
Get models related to a specific model.
Parameters:
- `model_id` (required): Unique ID of the thinking model
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
- `limit` (optional, default 5): Limit on the number of results returned
- `use_enhanced_similarity` (optional, default true): Whether to use enhanced similarity assessment
Return Value:
Problem-Solving Tools
recommend-models-for-problem
Recommend suitable thinking models based on problem keywords.
Parameters:
- `problem_keywords` (required): Array of problem keywords
- `problem_context` (optional): Complete context description of the problem
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
- `limit` (optional, default 10): Limit on the number of results returned
- `use_learning_adjustment` (optional, default true): Whether to use the learning system to adjust recommendations
Return Value:
interactive-reasoning
Provide interactive reasoning process guidance.
Parameters:
- `initialContext` (required): Initial problem or situation description
- `reasoningStage` (required): Current reasoning stage, options: ["information_gathering", "hypothesis_generation", "hypothesis_testing", "conclusion"]
- `currentPathId` (optional): Current reasoning path ID (if continuing an existing reasoning session)
- `requiredInformation` (optional): Array of additional information needed
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
Return Value:
generate-validate-hypotheses
Generate multiple hypotheses for a problem and provide validation methods.
Parameters:
- `problem` (required): Problem to solve
- `context` (required): Background information related to the problem
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
Return Value:
explain-reasoning-process
Explain the reasoning process of a model and the thinking patterns applied.
Parameters:
- `problemDescription` (required): Problem or situation description
- `reasoningSteps` (required): Array of reasoning step details; each step includes:
  - `description` (required): Reasoning step description
  - `modelIds` (optional): Array of thinking model IDs used
  - `evidence` (optional): Array of supporting evidence
  - `confidence` (optional, default 0.8): Confidence level (0-1)
- `conclusion` (required): Reasoning conclusion
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
Return Value:
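The nested `reasoningSteps` parameter is the trickiest part of this tool's schema. The following sketch shows a hypothetical argument object: the field names and the 0.8 confidence default come from the parameter list above, while the model ID and all text values are invented for illustration.

```typescript
// Hypothetical arguments for explain-reasoning-process.
// Field names follow the documented schema; all values are illustrative.
interface ReasoningStep {
  description: string;     // required
  modelIds?: string[];     // optional
  evidence?: string[];     // optional
  confidence?: number;     // optional, defaults to 0.8 when omitted
}

const explainArgs: {
  problemDescription: string;
  reasoningSteps: ReasoningStep[];
  conclusion: string;
  lang: "zh" | "en";
} = {
  problemDescription: "Should we enter a new market this year?",
  reasoningSteps: [
    {
      description: "Gather market size and competitor data",
      modelIds: ["first_principles"], // hypothetical model ID
      evidence: ["industry report", "competitor filings"],
      confidence: 0.7,
    },
    {
      // modelIds and evidence omitted; confidence falls back to 0.8
      description: "Weigh the expected value of entering now vs. waiting",
    },
  ],
  conclusion: "Enter with a small pilot and re-evaluate in six months",
  lang: "en",
};
```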
Creation Tools
create-thinking-model
Create a new thinking model.
Parameters:
- `id` (required): Unique identifier for the model
- `name` (required): Name of the model
- `definition` (required): Concise definition of the model
- `purpose` (required): Main purpose and usage scenarios of the model
- `category` (required): Main category of the model
- `lang` (required, default "zh"): Language of the model, options: ["zh", "en"]
- `subcategories` (optional): List of subcategories for the model
- `tags` (optional): Related tags for the model
- `author` (optional): Model author
- `source` (optional): Model source
- `prompt` (optional): Detailed prompt/role-playing guide
- `example` (optional): Brief example of model usage
- `use_cases` (optional): Use cases for the model
- `interaction` (optional): Guide for interacting with users through this model
- `constraints` (optional): Constraints on using this model
- `popular_science_teaching` (optional): Popular-science teaching content for the model
- `limitations` (optional): Limitations of the model
- `common_pitfalls` (optional): Common pitfalls when using the model
- `common_problems_solved` (optional): Common problems solved by the model
- Various visualization data (optional): Flowcharts, tables, bar charts, lists, etc.
Return Value:
```json
{
  "status": "operation status",
  "message": "operation message",
  "model_id": "created model ID"
}
```
update-thinking-model
Update any field of an existing thinking model.
Parameters:
- `model_id` (required): ID of the model to update
- Other fields to update (optional): Same as the create-thinking-model parameters
Return Value:
```json
{
  "status": "operation status",
  "message": "operation message",
  "updated_fields": [
    "updated field 1",
    "updated field 2"
  ]
}
```
emergent-model-design
Create a new thinking model by combining existing thinking models.
Parameters:
- `source_model_ids` (required): List of source model IDs to combine, minimum 2, maximum 10
- `target_model_id` (required): Unique identifier for the new model
- `target_model_name` (required): Name of the new model
- `design_goal` (required): Design goal and purpose description
- `connection_description` (optional): Description of how the source models are combined
- `category` (optional): Main category of the new model
- `lang` (required, default "zh"): Language of the model, options: ["zh", "en"]
Return Value:
delete-thinking-model
Delete unwanted thinking models.
Parameters:
- `model_id` (required): ID of the model to delete
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
- `confirm` (required): Confirm deletion; must be `true`
Return Value:
```json
{
  "status": "operation status",
  "message": "operation message",
  "deleted_model_id": "deleted model ID"
}
```
System and Learning Tools
get-started-guide
Get beginner’s guide.
Parameters:
- `user_objective` (optional, default "explore"): User objective, options: ["explore", "solve_problem", "create_model", "learn_tools"]
- `expertise_level` (optional, default "beginner"): User experience level, options: ["beginner", "intermediate", "advanced"]
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
Return Value:
get-server-version
Get server version and status information.
Parameters:
- None
Return Value:
count-models
Count the total number of current thinking models.
Parameters:
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
Return Value:
record-user-feedback
Record user feedback on thinking model experiences.
Parameters:
- `modelIds` (required): Array of IDs of the relevant thinking models
- `context` (required): Context or problem description in which the model was applied
- `feedbackType` (required): Feedback type, options: ["helpful", "not_helpful", "incorrect", "insightful", "confusing"]
- `comment` (optional): Detailed feedback explanation or comment
- `applicationResult` (optional): Description of the model application result
- `suggestedImprovements` (optional): Array of suggested improvements
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
Return Value:
```json
{
  "status": "operation status",
  "message": "operation message",
  "insights": "insights gained from the feedback"
}
```
detect-knowledge-gap
Detect knowledge gaps in user queries.
Parameters:
- `query` (required): User query or question
- `matchThreshold` (optional, default 0.5): Match threshold; values below this are considered knowledge gaps
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
Return Value:
get-model-usage-stats
Get usage statistics for thinking models.
Parameters:
- `modelId` (required): Unique ID of the thinking model
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
Return Value:
analyze-learning-system
Analyze the status of the thinking model learning system.
Parameters:
- `lang` (required, default "zh"): Language code, options: ["zh", "en"]
Return Value:
Use Cases
Solving Complex Problems
When facing complex problems, the system can recommend multiple thinking models to help you analyze problems from different angles and avoid mental blind spots.
Improving Thinking Quality
Through structured thinking processes, avoid common cognitive biases and make more rational decisions.
Learning Thinking Models
The system not only provides definitions of thinking models but also includes detailed teaching content, application examples, and notes to help you master various thinking tools.
Creating Custom Models
When existing models cannot meet needs, you can create new thinking models or combine existing models to create innovative thinking frameworks.
Quick Start
Installation (Local Development)
If you want to run and develop this project locally:
```shell
git clone https://github.com/yourusername/thinking-models-mcp.git # Replace with your repository address
cd thinking_models_mcp
npm install
npm run build
```
Starting the Server (Local Development)
- Normal Start (stdio mode)

In the project root directory, run:

```shell
node build/thinking_models_server.js
```

Or use the npm script (if configured in package.json):

```shell
npm run start
```
Configuration Guide
You can integrate the thinking models MCP server into any client that supports the MCP protocol. Here are two different implementation methods:
Method 1: Running the “Tianji” Server with Local Node.js
This method requires you to install and configure the “Tianji” server code locally and is suitable for scenarios where you need to customize development or modify server code.
Method 2: Using NPX to Start the Server from a Remote npm Package
This method is simpler, doesn’t require local installation of the full code, and directly pulls and runs packages from the npm repository.
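For reference, a typical MCP client configuration entry (for example, in Claude Desktop's `claude_desktop_config.json` or Cursor's MCP settings) might look like the following. The package name is a placeholder, since this document does not state the published npm package name; for Method 1, switch `command` to `node` and point `args` at your local `build/thinking_models_server.js`:

```json
{
  "mcpServers": {
    "thinking-models": {
      "command": "npx",
      "args": ["--no-cache", "<published-package-name>"]
    }
  }
}
```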
Command Line Arguments Explanation:
- `node`: Directly uses the local Node.js runtime to run JavaScript files
- `npx`: The npm package runner; allows executing commands from npm packages without a global or local installation
- `--no-cache`: Disables the cache to ensure the latest version of the package is fetched each time, avoiding outdated versions
Important Note: Server data is automatically saved in the `data` folder of the installation directory. Even if the service restarts, previous data is retained without additional configuration. It is recommended to use the `--no-cache` parameter to ensure the latest version is fetched each time, avoiding outdated features due to cache issues.
Basic Usage
After the "Tianji" server starts, you can send requests to its thinking model tools through any MCP client.
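Clients normally construct these requests for you, but for illustration: in stdio mode, an MCP tool invocation is a JSON-RPC 2.0 `tools/call` message. A hypothetical call to `recommend-models-for-problem` might look like this on the wire (the `id` and argument values are arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "recommend-models-for-problem",
    "arguments": {
      "problem_keywords": ["risk", "decision"],
      "lang": "en"
    }
  }
}
```

The server replies with a `content` array of text items, matching the response shape shown in the tool registration example in the developer documentation below.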
If configured correctly, the client should be able to call the server and return results.
Developer Documentation
This section is for developers who want to understand, customize, or extend the thinking model MCP server.
Development Environment Setup
Environment Requirements
- Node.js >= 18.0.0
- npm >= 8.0.0 (or compatible package managers like yarn, pnpm)
- TypeScript 5.x
Installing Dependencies (Local Development)
```shell
# Assuming you've already cloned the repository and entered the project directory
npm install
```
Development Mode (Local Development)
```shell
# Watch mode: automatically recompile TypeScript files when they change
npm run watch

# In another terminal, start the development server (usually runs compiled files from the build directory)
# You might need a tool like nodemon to automatically restart the server
npm run start:dev # (assuming this script is configured in package.json)
```
Code Architecture
File Structure (Example)
```
thinking_models_mcp/
├── build/                          # Compiled JavaScript files
├── src/                            # TypeScript source code
│   ├── thinking_models_server.ts   # Main server logic and tool registration
│   ├── types.ts                    # TypeScript type definitions
│   ├── utils.ts                    # Common utility functions
│   ├── similarity_engine.ts        # Similarity calculation logic
│   ├── reasoning_process.ts        # Reasoning process management
│   ├── learning_capability.ts      # Learning system functionality
│   ├── recommendations.ts          # Model recommendation logic
│   └── response_types.ts           # API response type definitions
├── thinking_models_db/             # Thinking model database
│   ├── zh/                         # Chinese models (JSON files)
│   └── en/                         # English models (JSON files)
├── package.json                    # Project dependencies and scripts
├── tsconfig.json                   # TypeScript compiler configuration
└── README.md                       # This document
```
Core Modules
- Server Core (`thinking_models_server.ts`)
  - Initializes the MCP server instance (`McpServer` from `@modelcontextprotocol/sdk`)
  - Registers all available tools, defining their parameter schemas (using `zod`) and handler functions
  - Loads and manages thinking model data
  - Processes client requests and routes them to the appropriate tools
- Thinking Model Types (`types.ts`)
  - Defines the core `ThinkingModel` interface, describing the model's data structure
  - Defines other model-related TypeScript types and interfaces
- Similarity Calculation Engine (`similarity_engine.ts`)
  - `calculateQueryMatch`: Calculates the match between user queries and thinking models
  - `calculateKeywordRelevance`: Calculates the relevance of keyword lists to thinking models
- Reasoning Process Management (`reasoning_process.ts`)
  - Builds, manages, and visualizes structured reasoning paths
- Learning System (`learning_capability.ts`)
  - `recordUserFeedback`: Records user feedback on model usage
  - `detectKnowledgeGap`: Detects knowledge gaps based on user queries and feedback
  - `adjustModelRecommendations`: Adjusts model recommendations based on learning data
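As a sketch of how such a learning loop can work (this is illustrative, not the actual implementation in `learning_capability.ts`), feedback records of the documented shape can be folded into per-model score adjustments that later bias recommendations:

```typescript
// Illustrative only — the real learning_capability.ts implementation may differ.
// Field names mirror the record-user-feedback tool parameters.
type FeedbackType = "helpful" | "not_helpful" | "incorrect" | "insightful" | "confusing";

interface FeedbackRecord {
  modelIds: string[];
  context: string;
  feedbackType: FeedbackType;
}

// A naive score delta a learning system might apply to each referenced model.
function scoreDelta(feedback: FeedbackRecord): number {
  switch (feedback.feedbackType) {
    case "helpful":
    case "insightful":
      return 1;
    case "not_helpful":
    case "confusing":
      return -0.5;
    case "incorrect":
      return -1;
  }
}

// Accumulate deltas per model ID; the recommendation layer could add these
// scores to its similarity-based ranking.
function applyFeedback(scores: Map<string, number>, feedback: FeedbackRecord): void {
  for (const id of feedback.modelIds) {
    scores.set(id, (scores.get(id) ?? 0) + scoreDelta(feedback));
  }
}
```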
API Documentation
Server API
Server communication modes:
- stdio API
- Communicates with clients through standard input/output.
- Follows MCP protocol specifications.
- Usually managed automatically by clients (such as Cursor, Claude Desktop).
Tool API
Each tool is registered via the `server.tool()` method and consists of:
- Tool Name (string): The name used by clients when calling the tool.
- Tool Description (string): A brief description of the tool's functionality.
- Parameter Schema (Zod object): Uses the `zod` library to define the parameters the tool accepts, along with their types, descriptions, and constraints.
- Handler Function (async function): Receives validated parameter objects, executes the tool logic, and returns responses that comply with the MCP protocol.
Tool Registration Example
```typescript
// filepath: src/thinking_models_server.ts
// ... imports ...

server.tool(
  "get-model-count-by-category", // Tool name
  "Get the number of thinking models in a specified category", // Tool description
  { // Parameter schema (Zod schema)
    category: z.string().describe("Main category of thinking models to query"),
    lang: z.enum(["zh", "en"] as const).default("zh").describe("Language code ('zh' or 'en')")
  },
  async ({ category, lang }) => { // Handler function
    try {
      const modelsInBuffer = MODELS[lang] || []; // MODELS is a cache of loaded models
      const count = modelsInBuffer.filter(m => m.category === category).length;
      log(`Tool 'get-model-count-by-category' called: category=${category}, lang=${lang}, count=${count}`);
      return {
        content: [{
          type: "text",
          text: JSON.stringify({ category, lang, count }, null, 2)
        }]
      };
    } catch (error: any) {
      log(`Tool 'get-model-count-by-category' execution error: ${error.message}`);
      return {
        content: [{
          type: "text",
          text: JSON.stringify({ error: "Failed to get model count", message: error.message }, null, 2)
        }]
      };
    }
  }
);
```
Extension Guidelines
Adding New Tools
- In `thinking_models_server.ts` (or a related module file), register your new tool using the `server.tool()` method, as shown in the example above.
- Define clear parameter schemas and descriptions.
- Implement the tool's handler function, ensuring it includes error handling and logging.
- Recompile the project (`npm run build`).
Creating New Thinking Models
- Create a new `.json` file in the `zh` (Chinese) or `en` (English) directory.
- The filename is typically the model's ID (for example, `new_decision_matrix.json`).
- The file content should conform to the structure of the `ThinkingModel` interface (defined in `types.ts`).
- The server automatically loads the new model at startup; if file monitoring is enabled, it also reloads when the file is saved.
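A minimal model file might look like the following sketch. The exact required fields are defined by the `ThinkingModel` interface in `types.ts`; this example uses only fields that also appear in the create-thinking-model tool parameters, and all values are illustrative:

```json
{
  "id": "new_decision_matrix",
  "name": "Decision Matrix",
  "definition": "A grid that scores options against weighted criteria.",
  "purpose": "Compare several alternatives systematically when multiple criteria matter.",
  "category": "Decision Making",
  "tags": ["decision", "comparison", "weighting"]
}
```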
Modifying Recommendation Algorithms
Recommendation logic is mainly located in `similarity_engine.ts` and `recommendations.ts`.
- `similarity_engine.ts`: Contains the core algorithms for calculating text similarity and keyword relevance. You can adjust these algorithms' weights and the techniques used (such as TF-IDF or embedding vectors) to improve matching precision.
- `recommendations.ts`: Contains functions such as `getModelRecommendations` that use the similarity engine's results to generate the final model recommendation list. You can modify the logic here, such as how scores from different sources are combined or how recommendations are adjusted based on context.
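To make the tuning surface concrete, here is a deliberately simplified relevance function in the spirit of `calculateKeywordRelevance`. It is a sketch, not the actual `similarity_engine.ts` algorithm, which the text above suggests may use weights, TF-IDF, or embeddings:

```typescript
// Simplified sketch — NOT the real similarity_engine.ts implementation.
// Scores a model's text by the fraction of query keywords it contains.
function keywordRelevance(keywords: string[], modelText: string): number {
  if (keywords.length === 0) return 0;
  const text = modelText.toLowerCase();
  const hits = keywords.filter((k) => text.includes(k.toLowerCase())).length;
  return hits / keywords.length; // always in [0, 1]
}
```

Swapping substring matching for TF-IDF weights or embedding similarity changes only this function's internals; the recommendation layer that consumes the score can stay the same.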
Testing
The project typically uses testing frameworks like Jest.
Writing Tests
Create test files for your modules or functions in the tests directory (for example, tests/my_tool.test.ts).
```typescript
// tests/example_tool.test.ts
import { server, loadModels } from '../src/thinking_models_server'; // Assuming the server instance is exported
import { ThinkingModel } from '../src/types';

// Mock MCP client request
async function callTool(toolName: string, params: any) {
  const toolDefinition = server.capabilities.tools[toolName];
  if (!toolDefinition || !toolDefinition.execute) {
    throw new Error(`Tool ${toolName} not found or not executable`);
  }
  // Actual testing might need more complex mocking to match the MCP SDK context
  return toolDefinition.execute(params, {} as any);
}

describe('My Custom Tool Tests', () => {
  beforeAll(async () => {
    // Load test model data (if needed)
    await loadModels('zh'); // Load Chinese models
  });

  test('get-model-count-by-category should return correct count', async () => {
    const response = await callTool('get-model-count-by-category', { category: 'Decision Making', lang: 'zh' });
    const result = JSON.parse(response.content[0].text);
    expect(result.category).toBe('Decision Making');
    expect(result.count).toBeGreaterThanOrEqual(0); // Specific count depends on your test data
  });
});
```
Running Tests
Configure the test script in package.json:
```json
{
  "scripts": {
    "test": "jest"
  }
}
```
Then run:
```shell
npm test
```
Build and Deployment
Building the Project
```shell
npm run build
```
This uses `tsc` (the TypeScript compiler) to compile the `.ts` files in the `src` directory into JavaScript files in the `build` directory.
Deployment Options
Deploy as a standalone Node.js server
- Copy the entire project (or at least the `build` directory, `node_modules`, `package.json`, and `thinking_models_db`) to the server
- Run the server: `node build/thinking_models_server.js`
- Or use a process manager such as `pm2` to keep the server running:

```shell
npm install -g pm2
pm2 start build/thinking_models_server.js --name "thinking-models-mcp"
pm2 save
```
Coding Standards
Coding Style
- Follow a consistent coding style (for example, using Prettier and ESLint).
- Use TypeScript's strong typing; avoid `any` unless absolutely necessary.
- Write clear, self-explanatory code, and add comments for complex logic.
Naming Conventions
- Functions and variables: `camelCase` (e.g., `calculateSimilarity`)
- Classes and interfaces: `PascalCase` (e.g., `ThinkingModel`, `McpServer`)
- Constants: `UPPER_SNAKE_CASE` (e.g., `DEFAULT_PORT`)
- File names: `snake_case.ts` or `kebab-case.ts` (maintain consistency within the project)
Documentation Standards
- Write JSDoc/TSDoc comments for all public APIs (functions, classes, interfaces).
- Clearly explain the project’s functionality and usage in README and other documentation.
- Keep documentation in sync with the code.
Common Development Issues and Troubleshooting
1. Model Files Not Loaded or Loading Errors
- Check Paths: Ensure the paths defined in `SUPPORTED_LANGUAGES` are correct relative to the compiled `thinking_models_server.js` file.
- JSON Format: Ensure all model `.json` files are valid JSON and conform to the structure of the `ThinkingModel` interface.
- File Permissions: Ensure the server process has permission to read the model directory and files.
- Logs: Check the server startup logs, which usually include error information from model loading.
2. API Request Failure or Tool Not Found
- Server Running Status: Confirm the server has started successfully without errors.
- Tool Names: Confirm the tool name called by the client exactly matches the name registered in the server (case-sensitive).
- Parameter Format: Ensure parameters sent to the tool comply with its Zod schema definition.
3. Inaccurate Similarity Calculation or Recommendations
- Model Data Quality: Fields such as `definition`, `purpose`, `tags`, and `keywords` are crucial for similarity calculation. Ensure these fields are rich and accurate.
- Algorithm Adjustment: You may need to adjust algorithm parameters or weights in `similarity_engine.ts`.
- Learning System: If the learning system is enabled, check whether feedback data is correctly recorded and applied.
Best Practices
- Logging: Use the `log()` function (or a more sophisticated logging library) to record key operations, errors, and debugging information.
- Error Handling: Implement robust error handling in all tool functions and asynchronous operations, and return meaningful error messages to clients.
- Modularity: Organize different functionalities (such as similarity calculation, learning systems, tool implementation) into separate modules.
- Configuration Management: Use environment variables or configuration files for configurable items such as ports and paths.
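On the configuration-management point, a small helper that reads environment variables with sensible defaults keeps all configurable values in one place. The variable names below (`TM_DATA_DIR`, `TM_DEFAULT_LANG`) are hypothetical, not ones this project defines:

```typescript
// Illustrative configuration helper; the TM_* variable names are hypothetical.
interface ServerConfig {
  dataDir: string;
  defaultLang: "zh" | "en";
}

function getConfig(env: Record<string, string | undefined>): ServerConfig {
  return {
    dataDir: env.TM_DATA_DIR ?? "./data",                       // falls back to the data folder
    defaultLang: env.TM_DEFAULT_LANG === "en" ? "en" : "zh",    // "zh" is the documented default
  };
}

// In the server entry point: const config = getConfig(process.env);
```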
License
This project is open-sourced under the MIT License.