Add LangSmith observability and tracing to LLM applications. Trace OpenAI, Anthropic, LangChain, and custom AI workflows with automatic logging, debugging, and evaluation capabilities.
This skill helps you add comprehensive tracing and observability to AI applications using LangSmith. It covers the OpenAI SDK, the Anthropic SDK, LangChain, Next.js route handlers, and custom frameworks via manual instrumentation.
Use this skill when you need to log, debug, or evaluate LLM calls and multi-step AI workflows.
When the user requests LangSmith tracing integration, follow these steps:
Determine what the user's application uses and install the appropriate packages:
```bash
# Core LangSmith SDK (always required)
npm install langsmith

# Install only the client libraries the application actually uses:
npm install langchain            # LangChain integration
npm install openai               # OpenAI SDK
npm install @anthropic-ai/sdk    # Anthropic SDK
```
Add LangSmith configuration to `.env` or `.env.local`:
```bash
LANGSMITH_TRACING=true
LANGSMITH_API_KEY=<user-api-key>
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
```
**Important**: Instruct the user to:
1. Sign up at https://smith.langchain.com
2. Generate an API key from https://smith.langchain.com/settings
3. Replace `<user-api-key>` with their actual key
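Before wiring up any integration, it can help to fail fast when these variables are missing, since LangSmith silently produces no traces otherwise. A minimal sketch (the helper and constant names here are illustrative, not part of the LangSmith SDK):

```typescript
// Hypothetical startup check: warn when the tracing variables above
// are missing instead of silently producing no traces.
const REQUIRED_LANGSMITH_VARS = ["LANGSMITH_TRACING", "LANGSMITH_API_KEY"] as const;

function missingLangSmithVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_LANGSMITH_VARS.filter((name) => !env[name]);
}

const missing = missingLangSmithVars(process.env);
if (missing.length > 0) {
  console.warn(`LangSmith tracing disabled; missing: ${missing.join(", ")}`);
}
```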
Based on the user's codebase, select the appropriate integration approach:
#### A. OpenAI SDK (Recommended for OpenAI Users)
Use the `wrapOpenAI` wrapper for automatic tracing:
```typescript
import { OpenAI } from "openai";
import { wrapOpenAI } from "langsmith/wrappers";

const openai = wrapOpenAI(new OpenAI());

// All calls are now automatically traced
await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ content: "Hi there!", role: "user" }],
});
```
For selective tracing, use `traceable`:
```typescript
import { traceable } from "langsmith/traceable";
import { OpenAI } from "openai";

const openai = new OpenAI();

const createCompletion = traceable(
  openai.chat.completions.create.bind(openai.chat.completions),
  { name: "OpenAI Chat Completion", run_type: "llm" }
);

await createCompletion({
  model: "gpt-3.5-turbo",
  messages: [{ content: "Hi there!", role: "user" }],
});
```
#### B. Anthropic SDK
Use `wrapSDK` for Anthropic API calls:
```typescript
import { wrapSDK } from "langsmith/wrappers";
import { Anthropic } from "@anthropic-ai/sdk";

const anthropic = wrapSDK(new Anthropic());

await anthropic.messages.create({
  messages: [{ role: "user", content: "What is 1 + 1?" }],
  model: "claude-3-sonnet-20240229",
  max_tokens: 1024,
});
```
#### C. LangChain Integration
LangChain automatically traces when environment variables are set:
```typescript
// Requires the @langchain/openai package (the older
// "langchain/chat_models/openai" import path is deprecated)
import { ChatOpenAI } from "@langchain/openai";

// No wrapper needed - tracing is automatic with env vars
const chat = new ChatOpenAI({ temperature: 0 });
const response = await chat.invoke("Translate to French: I love programming.");
```
#### D. Next.js / Vercel AI SDK
Wrap route handlers to group traces:
```typescript
import { NextRequest, NextResponse } from "next/server";
import { OpenAI } from "openai";
import { traceable } from "langsmith/traceable";
import { wrapOpenAI } from "langsmith/wrappers";

export const runtime = "edge";

const handler = traceable(
  async function () {
    const openai = wrapOpenAI(new OpenAI());
    const completion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [{ content: "Why is the sky blue?", role: "user" }],
    });
    return { text: completion.choices[0].message.content };
  },
  { name: "Next.js Handler" }
);

export async function POST(req: NextRequest) {
  const result = await handler();
  return NextResponse.json(result);
}
```
#### E. Custom Framework (Manual Instrumentation)
Use `RunTree` for fine-grained control:
```typescript
import { RunTree } from "langsmith";

// Create and post the parent run first so children can attach to it
const parentRun = new RunTree({
  name: "My Chat Bot",
  run_type: "chain",
  inputs: { text: "Summarize this morning's meetings." },
});
await parentRun.postRun();

const childRun = await parentRun.createChild({
  name: "My Proprietary LLM",
  run_type: "llm",
  inputs: { prompts: ["You are an AI Assistant. Summarize meetings."] },
});
await childRun.postRun();

// End each run with its outputs, then patch to upload the result
await childRun.end({
  outputs: { generations: ["I should fetch meeting transcripts..."] },
});
await childRun.patchRun();

await parentRun.end({
  outputs: { output: ["The meeting notes are as follows:..."] },
});
await parentRun.patchRun();
```
After implementation:
1. Run the application
2. Confirm traces appear at https://smith.langchain.com
3. Check that inputs, outputs, and token usage are captured correctly
For complex workflows, nest `traceable` calls:
```typescript
import { traceable } from "langsmith/traceable";

const step1 = traceable(async (input: string) => {
  // ... LLM call or logic
  return result;
}, { name: "Step 1: Extract Info" });

const step2 = traceable(async (data: any) => {
  // ... another LLM call
  return response;
}, { name: "Step 2: Generate Response" });

const workflow = traceable(async (userInput: string) => {
  const extracted = await step1(userInput);
  const response = await step2(extracted);
  return response;
}, { name: "Full Workflow" });

// This will create a hierarchical trace
await workflow("User's question here");
```
Errors are automatically captured. For manual tracing, use `try/catch`:
```typescript
const childRun = await parentRun.createChild({
  name: "Unreliable Component",
  run_type: "tool",
  inputs: { input: "..." },
});
await childRun.postRun();

try {
  // ... component logic
  throw new Error("Something went wrong");
} catch (e) {
  const message = e instanceof Error ? e.message : String(e);
  await childRun.end({ error: `Error: ${message}` });
  await childRun.patchRun();
  throw e; // rethrow so callers still see the failure
}
```
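The capture-and-rethrow pattern above can be sketched independently of LangSmith. Here `RunRecord` and `withRunCapture` are hypothetical stand-ins for `RunTree`, showing why the error is stored on the run before being rethrown:

```typescript
// Hypothetical stand-in for RunTree: record the run, attach the error
// message on failure, and rethrow so callers still observe the failure.
type RunRecord = { name: string; error?: string };
const runs: RunRecord[] = [];

async function withRunCapture<T>(name: string, fn: () => Promise<T>): Promise<T> {
  const run: RunRecord = { name };
  runs.push(run);
  try {
    return await fn();
  } catch (e) {
    run.error = e instanceof Error ? e.message : String(e);
    throw e;
  }
}
```

The shape mirrors the `end` + `patchRun` sequence: the run is recorded whether or not the component succeeds, and failures propagate unchanged.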
Common `traceable` configuration options:
```typescript
traceable(fn, {
  name: "Function Name",      // Display name in LangSmith
  run_type: "llm",            // One of "llm", "chain", or "tool"
  project_name: "My Project", // Override default project
  tags: ["production", "v2"], // Add searchable tags
})
```
Example use cases:
1. **Debug a failing LangChain agent**: Wrap the agent executor to see each tool call and reasoning step.
2. **Track OpenAI costs**: Use `wrapOpenAI` to automatically log token usage across all completions.
3. **A/B test prompts**: Tag runs with different prompt versions and compare performance in LangSmith UI.
4. **Create evaluation datasets**: Convert production runs into test cases for regression testing.
5. **Monitor multi-step RAG pipeline**: Trace retrieval, reranking, and generation steps separately.
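As a sketch of use case 2, the token usage recorded on each traced completion can be rolled up into an approximate spend. The helper below is hypothetical, and the per-1K-token prices are placeholder assumptions, not current OpenAI rates:

```typescript
// Hypothetical cost roll-up over the usage objects OpenAI returns and
// LangSmith records per run. Prices are illustrative assumptions.
interface TokenUsage {
  prompt_tokens: number;
  completion_tokens: number;
}

function estimateCostUSD(
  usages: TokenUsage[],
  promptPricePer1K = 0.0005,     // assumed input price, USD per 1K tokens
  completionPricePer1K = 0.0015  // assumed output price, USD per 1K tokens
): number {
  return usages.reduce(
    (total, u) =>
      total +
      (u.prompt_tokens / 1000) * promptPricePer1K +
      (u.completion_tokens / 1000) * completionPricePer1K,
    0
  );
}
```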