Integrate Model Context Protocol (MCP) tools into your LangChain.js or LangGraph.js agents using the `@langchain/mcp-adapters` library. Connect to local or remote MCP servers over stdio, Streamable HTTP, or SSE; manage multiple servers simultaneously; and work with rich multimodal tool outputs (text, images, resources).
This skill guides you through:
1. Installing and configuring `@langchain/mcp-adapters`
2. Connecting to one or more MCP servers (stdio, HTTP, SSE)
3. Loading MCP tools into LangChain/LangGraph agents
4. Handling authentication, reconnection, and error strategies
5. Working with multimodal content (text, images, resources)
6. Subscribing to server notifications and progress events
7. Customizing tool behavior with hooks
Install the MCP adapters package and any required LangChain/LangGraph dependencies:
```bash
npm install @langchain/mcp-adapters @langchain/core @langchain/openai
```
Export your AI provider API key (example for OpenAI):
```bash
export OPENAI_API_KEY=your_api_key_here
```
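Failing fast on a missing key at startup gives a clearer error than a failed API call later. A minimal guard might look like this (`requireEnv` is an illustrative helper, not part of the adapters library):

```typescript
// Illustrative helper (not part of @langchain/mcp-adapters): read a required
// environment variable or fail fast with a descriptive error.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: call before constructing your model, e.g.
// const apiKey = requireEnv("OPENAI_API_KEY");
```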
**Option A: MultiServerMCPClient (Recommended)**
Use `MultiServerMCPClient` to connect to multiple MCP servers without managing client instances yourself.
**Option B: Manual MCP Client Management**
Instantiate and manage your own `@modelcontextprotocol/sdk` client for fine-grained control.
Define your MCP servers with appropriate transport types:
**Stdio Transport (Local Servers):**
```typescript
{
  serverName: {
    transport: "stdio",
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-math"],
    restart: {
      enabled: true,
      maxAttempts: 3,
      delayMs: 1000,
    },
  },
}
```
**Streamable HTTP Transport (Remote Servers):**
```typescript
{
  weather: {
    url: "https://example.com/weather/mcp",
    headers: {
      Authorization: "Bearer token123",
    },
    automaticSSEFallback: true, // default
  },
}
```
**SSE Transport (Force SSE for Legacy Servers):**
```typescript
{
  github: {
    transport: "sse",
    url: "https://example.com/mcp",
    reconnect: {
      enabled: true,
      maxAttempts: 5,
      delayMs: 2000,
    },
  },
}
```
**OAuth 2.0 Authentication:**
```typescript
{
  "oauth-protected-server": {
    url: "https://protected.example.com/mcp",
    authProvider: new MyOAuthProvider({
      redirectUrl: "https://myapp.com/oauth/callback",
      clientMetadata: {
        redirect_uris: ["https://myapp.com/oauth/callback"],
        client_name: "My MCP Client",
        scope: "mcp:read mcp:write",
      },
    }),
  },
}
```
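`MyOAuthProvider` above is user code implementing the MCP SDK's `OAuthClientProvider` interface. As a rough sketch, an in-memory implementation could look like the following. The method names reflect our reading of `@modelcontextprotocol/sdk/client/auth.js` (verify against your SDK version; the real interface has additional members such as client-information storage), and the token/metadata types here are simplified stand-ins for the SDK's types:

```typescript
// Simplified stand-in types; the real ones live in the MCP SDK's auth modules.
interface OAuthTokens {
  access_token: string;
  token_type: string;
  refresh_token?: string;
}
interface OAuthClientMetadata {
  redirect_uris: string[];
  client_name?: string;
  scope?: string;
}

// Sketch of an in-memory OAuth provider. The SDK invokes these methods during
// the authorization-code flow; persistence strategy is up to you.
class InMemoryOAuthProvider {
  private _tokens?: OAuthTokens;
  private _codeVerifier?: string;

  constructor(
    public readonly redirectUrl: string,
    public readonly clientMetadata: OAuthClientMetadata,
  ) {}

  tokens(): OAuthTokens | undefined {
    return this._tokens;
  }
  saveTokens(tokens: OAuthTokens): void {
    this._tokens = tokens;
  }

  codeVerifier(): string {
    if (!this._codeVerifier) throw new Error("No code verifier saved");
    return this._codeVerifier;
  }
  saveCodeVerifier(verifier: string): void {
    this._codeVerifier = verifier;
  }

  redirectToAuthorization(authUrl: URL): void {
    // In a web app you would redirect the user's browser; here we just log it.
    console.log(`Authorize at: ${authUrl.toString()}`);
  }
}
```

A production provider would persist tokens (e.g. per-user in a database) rather than holding them in memory.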
**Using MultiServerMCPClient:**
```typescript
import { createAgent } from "langchain";
import { ChatOpenAI } from "@langchain/openai";
import { MultiServerMCPClient } from "@langchain/mcp-adapters";

const client = new MultiServerMCPClient({
  throwOnLoadError: true,
  prefixToolNameWithServerName: false,
  useStandardContentBlocks: true,
  onConnectionError: "ignore", // or "throw"
  mcpServers: {
    math: {
      transport: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-math"],
    },
    filesystem: {
      transport: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem"],
    },
  },
});

const tools = await client.getTools();

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const agent = createAgent({
  llm: model,
  tools,
});

const response = await agent.invoke({
  messages: [{ role: "user", content: "what's (3 + 5) x 12?" }],
});
console.log(response);

await client.close();
```
**Using Manual Client Management:**
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { loadMcpTools } from "@langchain/mcp-adapters";
import { createAgent } from "langchain";
import { ChatOpenAI } from "@langchain/openai";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-math"],
});

const client = new Client({
  name: "math-client",
  version: "1.0.0",
});
await client.connect(transport);

const tools = await loadMcpTools("math", client, {
  throwOnLoadError: true,
  useStandardContentBlocks: true,
});

const model = new ChatOpenAI({ model: "gpt-4" });
const agent = createAgent({ llm: model, tools });

const response = await agent.invoke({
  messages: [{ role: "user", content: "what's (3 + 5) x 12?" }],
});
console.log(response);

await client.close();
```
Subscribe to server log messages, progress updates, and tool-list changes:
```typescript
const client = new MultiServerMCPClient({
  mcpServers: { /* ... */ },
  onMessage: (log, source) => {
    console.log(`[${source.server}] ${log.data}`);
  },
  onProgress: (progress, source) => {
    const pct =
      progress.percentage ??
      (progress.progress != null && progress.total
        ? Math.round((progress.progress / progress.total) * 100)
        : undefined);
    if (pct != null) {
      const origin =
        source.type === "tool" ? `${source.server}/${source.name}` : "unknown";
      console.log(`[progress:${origin}] ${pct}%`);
    }
  },
  onToolsListChanged: (evt, source) => {
    console.log(`[${source.server}] tools changed`);
  },
});
```
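The percentage fallback inside `onProgress` can be factored into a small pure helper, which also makes it easy to unit-test. `computePct` is a hypothetical helper, not part of the library:

```typescript
// Hypothetical helper: normalize an MCP progress notification to a 0-100
// percentage, preferring an explicit percentage over a progress/total ratio.
interface ProgressInfo {
  percentage?: number;
  progress?: number;
  total?: number;
}

function computePct(p: ProgressInfo): number | undefined {
  if (p.percentage != null) return p.percentage;
  if (p.progress != null && p.total) {
    return Math.round((p.progress / p.total) * 100);
  }
  return undefined; // not enough information to compute a percentage
}
```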
Use the `beforeToolCall` and `afterToolCall` hooks to modify tool arguments before execution and results afterward:
```typescript
const client = new MultiServerMCPClient({
  mcpServers: { /* ... */ },
  beforeToolCall: ({ serverName, name, args }) => {
    const nextArgs = {
      ...(args as Record<string, unknown>),
      injected: true,
    };
    return {
      args: nextArgs,
      headers: { "X-Request-ID": crypto.randomUUID() },
    };
  },
  afterToolCall: (res) => {
    if (res.name === "someTool") {
      return { result: ["modified-output", []] };
    }
    return { result: res.result };
  },
});
```
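A common use for `beforeToolCall` is scrubbing sensitive fields from arguments before they are forwarded to a remote server. A sketch (the key names and `redactArgs` helper are hypothetical):

```typescript
// Hypothetical helper: replace known sensitive argument fields with a
// placeholder before forwarding the call to the MCP server.
const SENSITIVE_KEYS = new Set(["apiKey", "password", "token"]);

function redactArgs(args: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(args)) {
    out[key] = SENSITIVE_KEYS.has(key) ? "[REDACTED]" : value;
  }
  return out;
}

// Usage inside the client configuration:
// beforeToolCall: ({ args }) => ({
//   args: redactArgs(args as Record<string, unknown>),
// }),
```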
Key configuration options:
| Option | Default | Description |
|--------|---------|-------------|
| `throwOnLoadError` | `true` | Throw if a tool fails to load |
| `prefixToolNameWithServerName` | `false` | Prefix tool names with their server name |
| `useStandardContentBlocks` | `false` | Use standardized content blocks (recommended: `true` for new apps) |
| `onConnectionError` | `"throw"` | Behavior on connection failure: `"throw"` or `"ignore"` |
| `defaultToolTimeout` | `0` | Timeout in ms for tool calls |
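These options can be combined in a single client configuration, for example (server entries elided; the timeout value is illustrative):

```typescript
const client = new MultiServerMCPClient({
  throwOnLoadError: true,
  prefixToolNameWithServerName: true, // avoids name collisions across servers
  useStandardContentBlocks: true,
  onConnectionError: "throw",
  defaultToolTimeout: 30_000, // 30-second cap per tool call
  mcpServers: { /* ... */ },
});
```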
Wrap agent calls in `try`/`catch`/`finally` so tool failures surface clearly and the client is always closed:
```typescript
try {
  const response = await agent.invoke({
    messages: [{ role: "user", content: "your query" }],
  });
  console.log(response);
} catch (error) {
  console.error("Error during agent execution:", error);
  if (error instanceof Error && error.name === "ToolException") {
    console.error("Tool execution failed:", error.message);
  }
} finally {
  await client.close();
}
```