Langchain Mcp Tools Ts
What is Langchain Mcp Tools Ts
langchain-mcp-tools-ts is a TypeScript package designed to simplify the integration of Model Context Protocol (MCP) server tools with LangChain, enabling users to leverage over 1500 MCP servers seamlessly.
Use cases
Use cases for langchain-mcp-tools-ts include integrating various external tools like Google Drive, Slack, and PostgreSQL into LangChain applications, enhancing the capabilities of language models by utilizing external resources.
How to use
To use langchain-mcp-tools-ts, install it via npm with `npm i @h1deya/langchain-mcp-tools`. Then, configure MCP servers as a JavaScript object and call the `convertMcpToLangchainTools()` function to initialize the servers and obtain LangChain-compatible tools.
Key features
Key features include the ability to initialize multiple MCP servers in parallel, convert their tools into LangChain-compatible formats, and provide a cleanup function for managing server sessions.
Content
MCP To LangChain Tools Conversion Utility / TypeScript

This is a simple, lightweight library intended to simplify the use of
Model Context Protocol (MCP)
server tools with LangChain.
Its simplicity and extra features for stdio MCP servers can make it useful as a basis for your own customizations.
However, it only supports text results of tool calls and does not support MCP features other than tools.
LangChain’s official LangChain.js MCP Adapters library,
which supports comprehensive integration with LangChain, has been released at:
- npmjs: https://www.npmjs.com/package/@langchain/mcp-adapters
- github: https://github.com/langchain-ai/langchainjs/tree/main/libs/langchain-mcp-adapters
You may want to consider using the above if you don’t have specific needs for this library.
Introduction
This package is intended to simplify the use of
Model Context Protocol (MCP)
server tools with LangChain / TypeScript.
Model Context Protocol (MCP) is the de facto industry standard
that dramatically expands the scope of LLMs by enabling the integration of external tools and resources,
including DBs, GitHub, Google Drive, Docker, Slack, Notion, Spotify, and more.
There are quite a few useful MCP servers already available:
- MCP Server Listing on the Official Site
- MCP.so - Find Awesome MCP Servers and Clients
- Smithery: MCP Server Registry
This utility’s goal is to make these massive numbers of MCP servers easily accessible from LangChain.
It contains a utility function `convertMcpToLangchainTools()`.
This async function handles parallel initialization of specified multiple MCP servers
and converts their available tools into an array of LangChain-compatible tools.
For detailed information on how to use this library, please refer to the document “Supercharging LangChain: Integrating 2000+ MCP with ReAct”.
A Python equivalent of this utility is available here.
Prerequisites
- Node.js 16+
Installation
```bash
npm i @h1deya/langchain-mcp-tools
```
API docs
Can be found here
Quick Start
A minimal but complete working usage example can be found
in this example in the langchain-mcp-tools-ts-usage repo
The `convertMcpToLangchainTools()` utility function accepts MCP server configurations that follow the same structure as Claude for Desktop, but only the contents of the `mcpServers` property, expressed as a JS object, e.g.:
```typescript
const mcpServers: McpServersConfig = {
  filesystem: {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
  },
  fetch: {
    command: "uvx",
    args: ["mcp-server-fetch"]
  },
  github: {
    type: "http",
    url: "https://api.githubcopilot.com/mcp/",
    headers: {
      "Authorization": `Bearer ${process.env.GITHUB_PERSONAL_ACCESS_TOKEN}`
    }
  },
};

const { tools, cleanup } = await convertMcpToLangchainTools(mcpServers);
```
This utility function initializes all specified MCP servers in parallel,
and returns LangChain Tools
(tools: StructuredTool[])
by gathering available MCP tools from the servers,
and by wrapping them into LangChain tools.
It also returns an async callback function (cleanup: McpServerCleanupFn)
to be invoked to close all MCP server sessions when finished.
The returned tools can be used with LangChain, e.g.:
```typescript
// import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({ model: "claude-sonnet-4-0" });

// import { createReactAgent } from "@langchain/langgraph/prebuilt";
const agent = createReactAgent({
  llm,
  tools
});
```
For hands-on experimentation with MCP server integration, try this LangChain application built with this utility.
For detailed information on how to use this library, please refer to the following document:
“Supercharging LangChain: Integrating 2000+ MCP with ReAct”
MCP Protocol Support
This library supports MCP Protocol version 2025-03-26 and maintains backwards compatibility with version 2024-11-05.
It follows the official MCP specification for transport selection and backwards compatibility.
Features
stderr Redirection for Local MCP Server
A new key "stderr" has been introduced to specify a file descriptor
to which local (stdio) MCP server’s stderr is redirected.
The key name stderr is derived from
TypeScript SDK’s StdioServerParameters.
```typescript
const logPath = `mcp-server-${serverName}.log`;
const logFd = fs.openSync(logPath, "w");

mcpServers[serverName].stderr = logFd;
```
A usage example can be found [here](https://github.com/hideya/langchain-mcp-tools-ts-usage/blob/694b877ed5336bfcd5274d95d3f6d14bed0937a6/src/index.ts#L72-L83)
Working Directory Configuration for Local MCP Servers
The working directory that is used when spawning a local (stdio) MCP server
can be specified with the "cwd" key as follows:
```typescript
"local-server-name": {
  command: "...",
  args: [...],
  cwd: "/working/directory"  // the working directory to be used by the server
},
```
The key name cwd is derived from
TypeScript SDK’s StdioServerParameters.
Note: The library automatically adds the PATH environment variable to stdio servers if not explicitly provided to ensure servers can find required executables.
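Putting these options together, a local stdio server entry might look like the following sketch (the `env` key comes from the TypeScript SDK's `StdioServerParameters`, and the values here are illustrative, not prescribed by the library):

```typescript
import * as fs from "node:fs";

const mcpServers = {
  "local-server-name": {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "."],
    cwd: "/working/directory",                         // working directory for the spawned server
    env: { ...process.env },                           // optional; PATH is added automatically if omitted
    stderr: fs.openSync("mcp-server-local.log", "w"),  // redirect the server's stderr to a log file
  },
};
```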
Transport Selection Priority
The library selects transports using the following priority order:
- Explicit transport/type field (must match URL protocol if URL provided)
- URL protocol auto-detection (http/https → StreamableHTTP → SSE, ws/wss → WebSocket)
- Command presence → Stdio transport
- Error if none of the above match
This ensures predictable behavior while allowing flexibility for different deployment scenarios.
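The priority order above can be illustrated with a small sketch. This is hypothetical code, not the library's implementation, and the returned transport names are illustrative:

```typescript
// Minimal shape of a server config entry for this illustration
type ServerConfig = {
  command?: string;
  url?: string;
  transport?: string;
  type?: string;
};

function selectTransport(config: ServerConfig): string {
  // 1. An explicit transport/type field wins
  const explicit = config.transport ?? config.type;
  if (explicit) return explicit;

  // 2. URL protocol auto-detection
  if (config.url) {
    const protocol = new URL(config.url).protocol;
    if (protocol === "http:" || protocol === "https:") {
      return "streamable_http"; // may later fall back to SSE on 4xx errors
    }
    if (protocol === "ws:" || protocol === "wss:") return "ws";
  }

  // 3. Command presence implies a local stdio server
  if (config.command) return "stdio";

  // 4. Nothing matched
  throw new Error("Cannot determine transport for server configuration");
}
```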
Remote MCP Server Support
The `mcpServers` configuration for Streamable HTTP, SSE, and WebSocket servers is as follows:
```typescript
// Auto-detection: tries Streamable HTTP first, falls back to SSE on 4xx errors
"auto-detect-server": {
  url: `http://${server_host}:${server_port}/...`
},

// Explicit Streamable HTTP
"streamable-http-server": {
  url: `http://${server_host}:${server_port}/...`,
  transport: "streamable_http"
  // type: "http"  // VSCode-style config also works instead of the above
},

// Explicit SSE
"sse-server-name": {
  url: `http://${sse_server_host}:${sse_server_port}/...`,
  transport: "sse"  // or `type: "sse"`
},

// WebSocket
"ws-server-name": {
  url: `ws://${ws_server_host}:${ws_server_port}/...`
  // optionally `transport: "ws"` or `type: "ws"`
},
```
For the convenience of adding authorization headers, the following shorthand expression is supported.
This header configuration will be overridden if either `streamableHTTPOptions` or `sseOptions` is specified (details below).
```typescript
github: {
  // To avoid auto protocol fallback, specify the protocol explicitly when using authentication
  type: "http",  // or `transport: "http",`
  url: "https://api.githubcopilot.com/mcp/",
  headers: {
    "Authorization": `Bearer ${process.env.GITHUB_PERSONAL_ACCESS_TOKEN}`
  }
},
```
NOTE: When accessing the GitHub MCP server, GitHub PAT (Personal Access Token)
alone is not enough; your GitHub account must have an active Copilot subscription or be assigned a Copilot license through your organization.
Auto-detection behavior (default):
- For HTTP/HTTPS URLs without an explicit `transport`, the library follows the MCP specification recommendations:
  - First attempts Streamable HTTP transport
  - If Streamable HTTP fails with a 4xx error, automatically falls back to SSE transport
  - Non-4xx errors (network issues, etc.) are re-thrown without fallback

Explicit transport selection:
- Set `transport: "streamable_http"` (or VSCode-style config `type: "http"`) to force Streamable HTTP (no fallback)
- Set `transport: "sse"` to force SSE transport
- WebSocket URLs (`ws://` or `wss://`) always use WebSocket transport
Streamable HTTP is the modern MCP transport that replaces the older HTTP+SSE transport. According to the official MCP documentation: “SSE as a standalone transport is deprecated as of protocol version 2025-03-26. It has been replaced by Streamable HTTP, which incorporates SSE as an optional streaming mechanism.”
Authentication Support for Streamable HTTP Connections
The library supports OAuth 2.1 authentication for Streamable HTTP connections:
```typescript
import { OAuthClientProvider } from '@modelcontextprotocol/sdk/client/auth.js';

// Implement your own OAuth client provider
class MyOAuthProvider implements OAuthClientProvider {
  // Implementation details...
}

const mcpServers = {
  "secure-streamable-server": {
    url: "https://secure-mcp-server.example.com/mcp",
    transport: "streamable_http",  // Optional: explicit transport
    streamableHTTPOptions: {
      // Provide an OAuth client provider
      authProvider: new MyOAuthProvider(),

      // Optionally customize HTTP requests
      requestInit: {
        headers: {
          'X-Custom-Header': 'custom-value'
        }
      },

      // Optionally configure reconnection behavior
      reconnectionOptions: {
        maxReconnectAttempts: 5,
        reconnectDelay: 1000
      }
    }
  }
};
```
Test implementations are provided:
- Streamable HTTP Authentication Tests:
  - MCP client using this library: streamable-http-auth-test-client.ts
  - Test MCP server: streamable-http-auth-test-server.ts
Authentication Support for SSE Connections (Legacy)
The library also supports authentication for SSE connections to MCP servers.
Note that SSE transport is deprecated; Streamable HTTP is the recommended approach.
To enable authentication, provide SSE options in your server configuration:
```typescript
import { OAuthClientProvider } from '@modelcontextprotocol/sdk/client/auth.js';

// Implement your own OAuth client provider
class MyOAuthProvider implements OAuthClientProvider {
  // Implementation details...
}

const mcpServers = {
  "secure-server": {
    url: "https://secure-mcp-server.example.com",
    sseOptions: {
      // Provide an OAuth client provider
      authProvider: new MyOAuthProvider(),

      // Optionally customize the initial SSE request
      eventSourceInit: {
        // Custom options
      },

      // Optionally customize recurring POST requests
      requestInit: {
        headers: {
          'X-Custom-Header': 'custom-value'
        }
      }
    }
  }
};
```
Test implementations are provided:
- SSE Authentication Tests:
  - MCP client using this library: sse-auth-test-client.ts
  - Test MCP server: sse-auth-test-server.ts
Limitations
- Tool Return Types: Currently, only text results of tool calls are supported. The library uses LangChain's `response_format: 'content'` (the default), which only supports text strings. While MCP tools can return multiple content types (text, images, etc.), this library currently filters and uses only text content.
- MCP Features: Only MCP Tools are supported. Other MCP features like Resources, Prompts, and Sampling are not implemented.
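As an illustration of the text-only limitation, filtering an MCP tool result down to its text content might look like the following sketch (the content-item types and the join separator are assumptions for illustration, not the library's actual code):

```typescript
// Simplified shape of an MCP tool-call result's content items (assumed)
type McpContentItem =
  | { type: "text"; text: string }
  | { type: "image"; data: string; mimeType: string };

// Keep only text items and join them into the single string LangChain expects
function extractTextContent(content: McpContentItem[]): string {
  return content
    .filter((item): item is { type: "text"; text: string } => item.type === "text")
    .map((item) => item.text)
    .join("\n\n");
}
```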
Change Log
Can be found here
Appendix
Troubleshooting
Common Configuration Errors
McpInitializationError: Cannot specify both ‘command’ and ‘url’
- Remove either the `command` field (for URL-based servers) or the `url` field (for local stdio servers)
- Use `command` for local MCP servers, `url` for remote servers

McpInitializationError: URL protocol to be http: or https:
- Check that your URL starts with `http://` or `https://` when using HTTP transport
- For WebSocket servers, use `ws://` or `wss://` URLs

McpInitializationError: command to be specified
- Add a `command` field when using stdio transport
- Ensure the command path is correct and the executable exists
Transport Detection Issues
Transport detection failed
- Server may not support the MCP protocol correctly
- Try specifying an explicit transport type (`transport: "streamable_http"` or `transport: "sse"`)
- Check server documentation for supported transport types
Connection timeout or network errors
- Verify the server URL and port are correct
- Check that the server is running and accessible
- Ensure firewall/network settings allow the connection
Tool Execution Problems
Schema sanitization warnings for Gemini compatibility
- These are informational and generally safe to ignore
- Consider updating the MCP server to use Gemini-compatible schemas
- Warnings help identify servers that may need upstream fixes
Tool calls returning empty results
- Check server logs (use `stderr` redirection to capture them)
- Verify tool parameters match the expected schema
- Enable debug logging to see detailed tool execution information
Debug Steps
- Enable debug logging: Set `logLevel: "debug"` to see detailed connection and execution logs
- Check server stderr: For stdio MCP servers, use `stderr` redirection to capture server error output
- Test explicit transports: Try forcing specific transport types to isolate auto-detection issues
- Verify server independently: Test the MCP server with other clients (e.g., MCP Inspector)
Configuration Validation
The library validates server configurations and will throw McpInitializationError for invalid configurations:
- Cannot specify both `url` and `command`: Use `command` for local servers or `url` for remote servers
- Transport type must match URL protocol: e.g., `transport: "http"` requires an `http:` or `https:` URL
- Transport requires appropriate configuration: HTTP/WS transports need URLs, stdio transport needs a command
LLM Compatibility
The library automatically handles schema compatibility for different LLM providers:
- Google Gemini: Sanitizes schemas to remove unsupported properties (logs warnings when changes are made)
- OpenAI Structured Outputs: Makes optional fields nullable as required by OpenAI’s specification
- Anthropic Claude: Works with schemas as-is
- Other providers: Generally compatible with standard JSON schemas
Schema transformations are applied automatically and logged at the warn level when changes are made, helping you identify which MCP servers might need upstream schema fixes for optimal compatibility.
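The kinds of transformations described above can be sketched as follows. This is a hypothetical illustration only: the unsupported-keyword list and the `nullable` handling are assumptions for the example, not the library's exact behavior.

```typescript
type JsonSchema = { [key: string]: any };

// Gemini: recursively drop JSON Schema keywords assumed unsupported (illustrative list)
function sanitizeForGemini(schema: JsonSchema): JsonSchema {
  const unsupported = ["$schema", "additionalProperties", "default"];
  const result: JsonSchema = {};
  for (const [key, value] of Object.entries(schema)) {
    if (unsupported.includes(key)) continue;
    result[key] =
      value && typeof value === "object" && !Array.isArray(value)
        ? sanitizeForGemini(value)
        : value;
  }
  return result;
}

// OpenAI structured outputs: mark fields not listed in `required` as nullable
function makeOptionalNullable(schema: JsonSchema): JsonSchema {
  const required: string[] = schema.required ?? [];
  const properties: JsonSchema = {};
  for (const [name, prop] of Object.entries(schema.properties ?? {})) {
    properties[name] = required.includes(name)
      ? prop
      : { ...(prop as JsonSchema), nullable: true };
  }
  return { ...schema, properties };
}
```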
Resource Management
The returned cleanup function properly handles resource cleanup:
- Closes all MCP server connections concurrently
- Logs any cleanup failures without throwing errors
- Continues cleanup of remaining servers even if some fail
- Should always be called when done using the tools
```typescript
const { tools, cleanup } = await convertMcpToLangchainTools(mcpServers);

try {
  // Use tools with your LLM
} finally {
  // Always cleanup, even if errors occur
  await cleanup();
}
```
Debugging and Logging
The library provides configurable logging to help debug connection and tool execution issues:
```typescript
// Configure log level
const { tools, cleanup } = await convertMcpToLangchainTools(
  mcpServers,
  { logLevel: "debug" }
);

// Use custom logger
class MyLogger implements McpToolsLogger {
  debug(...args: unknown[]) { console.log("[DEBUG]", ...args); }
  info(...args: unknown[]) { console.log("[INFO]", ...args); }
  warn(...args: unknown[]) { console.warn("[WARN]", ...args); }
  error(...args: unknown[]) { console.error("[ERROR]", ...args); }
}

const { tools, cleanup } = await convertMcpToLangchainTools(
  mcpServers,
  { logger: new MyLogger() }
);
```
Available log levels: `"fatal" | "error" | "warn" | "info" | "debug" | "trace"`