Set up automatic OpenTelemetry tracing and instrumentation for LangChain.js applications to monitor chains, agents, and LLM calls in production.
This skill helps you instrument a LangChain.js application with OpenTelemetry so that chains, agents, and LLM calls are traced and monitored automatically.
When the user requests LangChain OpenTelemetry instrumentation, follow these steps:
First, check if this is a Node.js/TypeScript project and if LangChain is already installed:
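For example, from the project root (assuming npm as the package manager; adjust for yarn or pnpm):

```bash
# Confirm this is a Node.js/TypeScript project
ls package.json tsconfig.json

# See which LangChain packages (if any) are already installed
npm ls langchain @langchain/core @langchain/openai 2>/dev/null
```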
Install the LangChain instrumentation package:
```bash
npm install --save @traceloop/instrumentation-langchain
```
For a complete setup, also install OpenTelemetry core dependencies if not already present:
```bash
npm install --save @opentelemetry/sdk-trace-node @opentelemetry/instrumentation @opentelemetry/api @opentelemetry/exporter-trace-otlp-http
```
**Note:** For simpler setup with all instrumentations bundled, suggest installing `@traceloop/node-server-sdk` instead.
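If the user chooses the bundled SDK, initialization collapses to a single call. This is a minimal sketch; option names such as `appName` should be verified against the Traceloop SDK documentation:

```typescript
// Alternative bundled setup (sketch): @traceloop/node-server-sdk initializes
// OpenTelemetry and its bundled instrumentations, including LangChain
import * as traceloop from "@traceloop/node-server-sdk";

traceloop.initialize({
  appName: "my-langchain-app", // service name reported to the backend (assumed option name)
});
```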
Create or update the OpenTelemetry instrumentation file. Common locations are `instrumentation.ts` at the project root or `src/instrumentation.ts`; the examples below assume `src/instrumentation.ts`.
**Basic setup:**
```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { LangChainInstrumentation } from "@traceloop/instrumentation-langchain";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
// Register a global tracer provider, then hook the LangChain instrumentation into it
const provider = new NodeTracerProvider();
provider.register();

registerInstrumentations({
  instrumentations: [new LangChainInstrumentation()],
});
```
Add an exporter to send traces to an observability backend:
```typescript
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
// Send spans over OTLP/HTTP; defaults to a local collector if no endpoint is configured
const exporter = new OTLPTraceExporter({
  url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || "http://localhost:4318/v1/traces",
});
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
```
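Before a collector is available, it can be useful to print spans to stdout instead; the console exporter ships with the same SDK package used above:

```typescript
// Optional: print spans to stdout while developing (uses the 1.x provider API shown above)
import { ConsoleSpanExporter, SimpleSpanProcessor } from "@opentelemetry/sdk-trace-node";

provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
```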
Ensure the instrumentation is loaded **before** any LangChain imports:
**In your main application file (e.g., `src/index.ts`, `app.ts`):**
```typescript
// MUST be first import
import "./instrumentation";
// Now import LangChain and other dependencies
import { ChatOpenAI } from "@langchain/openai";
// ... rest of your application
```
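Alternatively, Node can preload the instrumentation file so it is guaranteed to run before any application module; the paths below are examples for a compiled build:

```bash
# Preload the compiled instrumentation file before the app entry point (example paths)
node --require ./dist/instrumentation.js ./dist/index.js
```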
Check that the installed LangChain version is compatible with the instrumentation package:
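For example (the supported range is whatever the instrumentation package declares; the second command prints it if peer dependencies are published):

```bash
# Versions installed in this project
npm ls langchain @langchain/core

# Version range expected by the instrumentation package, if declared
npm view @traceloop/instrumentation-langchain peerDependencies
```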
Suggest adding OpenTelemetry configuration to `.env`:
```
OTEL_SERVICE_NAME=my-langchain-app
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318/v1/traces
OTEL_LOG_LEVEL=info
```
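If no backend is configured yet, a local Jaeger all-in-one container can receive these OTLP traces for testing. This is a sketch using commonly documented defaults (UI on 16686, OTLP/HTTP on 4318); verify the ports and image tag against the current Jaeger docs:

```bash
# Local Jaeger with OTLP ingestion enabled
docker run --rm -d \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 -p 4318:4318 \
  jaegertracing/all-in-one:latest
```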
Provide a simple test script to verify instrumentation is working:
```typescript
// Load instrumentation before anything from LangChain
import "./instrumentation";

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

async function test() {
  const model = new ChatOpenAI();
  const response = await model.invoke([
    new HumanMessage("Hello, this is a test!"),
  ]);
  console.log("Response:", response);
  console.log("Check your trace backend for spans");

  // Give the BatchSpanProcessor time to export before the process exits
  await new Promise((resolve) => setTimeout(resolve, 5000));
}

test();
```
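Run the script with a TypeScript runner; the file name here is only an example:

```bash
# Example invocation (assumes the script was saved as src/test-tracing.ts)
npx ts-node src/test-tracing.ts
```

If no spans appear, switch temporarily to the console exporter shown earlier to confirm spans are being created.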
After completion, the project should have:

- `@traceloop/instrumentation-langchain` (plus the OpenTelemetry core packages) installed
- an instrumentation file that registers the tracer provider, exporter, and LangChain instrumentation
- the instrumentation file imported before any LangChain imports in the application entry point
- OpenTelemetry settings in `.env`
- a test script that produces spans in the trace backend
If configuration details are unclear, ask the user:
1. "Do you already have an OpenTelemetry backend configured, or would you like to set up a local Jaeger instance for testing?"
2. "Where is your application's main entry point file?"
3. "Would you prefer the complete `@traceloop/node-server-sdk` bundle or just the LangChain instrumentation?"