
LangChain-TypeScript Client

@hideyaon 4 days ago
Simple CLI MCP Client Implementation Using LangChain ReAct Agent / TypeScript

Overview

What is the LangChain-TypeScript Client

This project is an MCP client built using LangChain and TypeScript, designed to facilitate interactions with Model Context Protocol (MCP) servers through an easy-to-use interface. It specifically utilizes the convertMcpToLangchainTools() function from the @h1deya/langchain-mcp-tools package, which enables the parallel initialization of various MCP servers and seamlessly converts their available tools into LangChain-compatible formats.

Use cases

The MCP client can be used for various applications, including chatbot integration, automation of tasks through model invocation, and building intelligent applications that require access to multiple language models. It supports LLMs from providers such as Anthropic, OpenAI, and Google (GenAI), allowing developers to harness the capabilities of these models depending on their needs.

How to use

To use the client, users need to install dependencies, set up API keys in a .env file, and configure MCP server settings in a llm_mcp_config.json5 file. Once configured, the application can be run using npm start, allowing interactive input to explore example queries for invoking MCP server tools. Optionally, verbose mode provides additional insights during execution.
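As a rough illustration, the .env file might contain entries like the following (the variable names here are assumptions; the authoritative names are defined in .env.template):

```
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
GOOGLE_API_KEY=your-google-genai-key
```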

Key features

Key features of this MCP client include support for multiple LLMs from different providers, seamless interaction with MCP servers through LangChain's structured tool format, and easy configuration management via JSON5. The client also supports environment variable substitution for sensitive information, enhancing security and ease of setup.

Where to use

This MCP client can be used in applications requiring natural language processing, automated customer support systems, virtual assistants, or any software needing to leverage advanced AI capabilities for tasks such as content generation, summarization, or conversation handling. It is especially useful in contexts where multiple LLMs need to be accessed or compared.

Content

Simple CLI MCP Client Using LangChain / TypeScript

License: MIT

This simple Model Context Protocol (MCP) client with a command-line interface demonstrates the use of MCP server tools by the LangChain ReAct Agent.

When testing LLM and MCP servers, their settings can be conveniently configured via a configuration file, such as the following:

{
    "llm": {
        "model_provider": "openai",
        "model": "gpt-4o-mini",
    },

    "mcp_servers": {
        "fetch": {
            "command": "uvx",
            "args": [
                "mcp-server-fetch"
            ]
        },

        "weather": {
            "command": "npx",
            "args": [
                "-y",
                "@h1deya/mcp-server-weather"
            ]
        },

        // Auto-detection: tries Streamable HTTP first, falls back to SSE
        "remote-mcp-server": {
            "url": "https://${SERVER_HOST}:${SERVER_PORT}/..."
        },

        // Example of authentication via Authorization header
        "github": {
            "type": "http",  // recommended to specify the protocol explicitly when authentication is used
            "url": "https://api.githubcopilot.com/mcp/",
            "headers": {
                "Authorization": "Bearer ${GITHUB_PERSONAL_ACCESS_TOKEN}"
            }
        },
    }
}

It leverages a utility function convertMcpToLangchainTools() from
@h1deya/langchain-mcp-tools.
This function handles parallel initialization of specified multiple MCP servers
and converts their available tools into an array of LangChain-compatible tools
(StructuredTool[]).
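Schematically, this initialization can be sketched as follows. This is a hedged illustration, not runnable without the listed packages and an API key; the agent wiring via createReactAgent is an assumption based on the library's documented usage rather than something shown in this README:

```typescript
// Minimal sketch (assumes @h1deya/langchain-mcp-tools, @langchain/langgraph,
// and @langchain/openai are installed, and OPENAI_API_KEY is set).
import { convertMcpToLangchainTools } from "@h1deya/langchain-mcp-tools";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

// Server definitions mirror the "mcp_servers" section of llm_mcp_config.json5
const mcpServers = {
  fetch: { command: "uvx", args: ["mcp-server-fetch"] },
};

// Starts the servers in parallel and converts their tools into
// LangChain-compatible StructuredTool[]; `cleanup` shuts the servers down.
const { tools, cleanup } = await convertMcpToLangchainTools(mcpServers);

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o-mini" }),
  tools,
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "Summarize https://example.com" }],
});
console.log(result.messages.at(-1)?.content);

await cleanup();
```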

This client supports both local (stdio) MCP servers and
remote (Streamable HTTP/SSE/WebSocket) MCP servers that are accessible via a simple URL.
Note that only text results of tool calls are supported.

To make debugging MCP servers easier, this client prints local (stdio) MCP server logs to the console.

LLMs from Anthropic, OpenAI and Google (GenAI) are currently supported.

A Python version of this MCP client is available
here.

Prerequisites

• Node.js and npm (npx); uv (uvx) if running Python-based MCP servers such as mcp-server-fetch
• API keys from Anthropic, OpenAI, and/or Google (GenAI), as needed.

Setup

  1. Install dependencies:

    npm install
    
  2. Setup API keys:

    cp .env.template .env
    
    • Update .env as needed.
    • .gitignore is configured to ignore .env
      to prevent accidental commits of the credentials.
  3. Configure LLM and MCP server settings in llm_mcp_config.json5 as needed.

    • The configuration file format
      for MCP servers follows the same structure as
      Claude for Desktop,
      with one difference: the key name mcpServers has been changed
      to mcp_servers to follow the snake_case convention
      commonly used in JSON configuration files.
    • The file format is JSON5,
      where comments and trailing commas are allowed.
    • The format is further extended to replace ${...} notations
      with the values of corresponding environment variables.
    • Keep all the credentials and private info in the .env file
      and refer to them with ${...} notation as needed.
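The ${...} substitution described above can be sketched as a small pre-parse pass over the raw configuration text. This is a minimal illustration, not the client's actual implementation; the function name is made up:

```typescript
// Replace ${VAR} references in config text with values from process.env.
// Unresolved references are left untouched rather than dropped.
function expandEnvVars(text: string): string {
  return text.replace(/\$\{([A-Za-z_][A-Za-z0-9_]*)\}/g, (match, name) => {
    const value = process.env[name];
    return value !== undefined ? value : match;
  });
}

process.env.SERVER_HOST = "example.com";
console.log(expandEnvVars('{ "url": "https://${SERVER_HOST}:8080/mcp" }'));
// → { "url": "https://example.com:8080/mcp" }
```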

Usage

Run the app:

npm start

Run in verbose mode:

npm run start:v

See command-line options:

npm run start:h

At the prompt, you can simply press Enter to use example queries that perform MCP server tool invocations.

Example queries can be configured in llm_mcp_config.json5.
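For instance, such a section might look like the following. This is a hedged illustration: the key name "example_queries" is an assumption about the config schema, and the queries are made-up examples targeting the fetch and weather servers configured above:

```json5
{
  // Key name assumed; see llm_mcp_config.json5 for the actual schema
  "example_queries": [
    "Fetch https://example.com and summarize its content",
    "What is the weather forecast for New York?",
  ],
}
```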
