Comprehensive guide for using OpenAI models, tools, and integrations with LangChain.js, including chat models, embeddings, web search, code interpreter, file search, image generation, and computer use.
You are an expert in using the `@langchain/openai` package to build AI applications with OpenAI models and tools through LangChain.js.
Always install both `@langchain/openai` and `@langchain/core`:
```bash
npm install @langchain/openai @langchain/core
```
Ensure all packages use the same `@langchain/core` version by adding resolution fields to `package.json`:
```json
{
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```
Set the OpenAI API key as an environment variable:
```bash
export OPENAI_API_KEY=your-api-key
```
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY, // read from OPENAI_API_KEY by default if omitted
  model: "gpt-4-1106-preview",
});

const response = await model.invoke([new HumanMessage("Hello world!")]);
```
```typescript
// Streaming returns an async iterable of message chunks
const stream = await model.stream("Hello world!");
for await (const chunk of stream) {
  console.log(chunk.content);
}
```
The package provides LangChain-compatible wrappers for OpenAI's built-in tools.
Enable models to search the web for up-to-date information.
**Basic usage:**
```typescript
import { ChatOpenAI, tools } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });

const response = await model.invoke(
  "What was a positive news story from today?",
  { tools: [tools.webSearch()] }
);
```
**Domain filtering (up to 100 domains):**
```typescript
const response = await model.invoke("Latest AI research news", {
  tools: [tools.webSearch({
    filters: {
      allowedDomains: ["arxiv.org", "nature.com", "science.org"],
    },
  })],
});
```
**User location for geo-refined results:**
```typescript
const response = await model.invoke("Best restaurants near me?", {
  tools: [tools.webSearch({
    userLocation: {
      type: "approximate",
      country: "US",
      city: "San Francisco",
      region: "California",
      timezone: "America/Los_Angeles",
    },
  })],
});
```
**Cache-only mode (disable live internet access):**
```typescript
const response = await model.invoke("Find information about OpenAI", {
  tools: [tools.webSearch({ externalWebAccess: false })],
});
```
Connect to remote MCP servers or OpenAI-maintained service connectors.
**Remote MCP server:**
```typescript
const response = await model.invoke("Roll 2d4+1", {
  tools: [tools.mcp({
    serverLabel: "dmcp",
    serverDescription: "A D&D MCP server for dice rolling",
    serverUrl: "https://dmcp-server.deno.dev/sse",
    requireApproval: "never",
  })],
});
```
**Service connectors (Google Calendar, Dropbox, etc.):**
```typescript
const response = await model.invoke("What's on my calendar today?", {
  tools: [tools.mcp({
    serverLabel: "google_calendar",
    connectorId: "connector_googlecalendar",
    authorization: "<oauth-access-token>",
    requireApproval: "never",
  })],
});
```
Run Python code in a sandboxed environment for data analysis, file generation, and iterative problem-solving.
**Basic usage (default 1GB memory):**
```typescript
const response = await model.invoke("Solve the equation 3x + 11 = 14", {
  tools: [tools.codeInterpreter()],
});
```
**Custom memory (1GB, 4GB, 16GB, or 64GB):**
```typescript
const response = await model.invoke(
  "Analyze this large dataset and create visualizations",
  { tools: [tools.codeInterpreter({ container: { memoryLimit: "4g" } })] }
);
```
**With uploaded files:**
```typescript
const response = await model.invoke("Process the uploaded CSV file", {
  tools: [tools.codeInterpreter({
    container: {
      memoryLimit: "4g",
      fileIds: ["file-abc123", "file-def456"],
    },
  })],
});
```
**Use existing container:**
```typescript
const response = await model.invoke("Continue working with the data", {
  tools: [tools.codeInterpreter({ container: "cntr_abc123" })],
});
```
**Important:** Containers expire after 20 minutes of inactivity. In prompts, refer to this as "the python tool".
Search uploaded files using semantic and keyword search.
**Prerequisites:** Upload files with `purpose: "assistants"`, create a vector store, and add files to it.
```typescript
const response = await model.invoke("What is deep research by OpenAI?", {
  tools: [tools.fileSearch({
    vectorStoreIds: ["vs_abc123"],
    maxNumResults: 5,
    filters: { type: "eq", key: "category", value: "blog" },
    rankingOptions: { scoreThreshold: 0.8, ranker: "auto" },
  })],
});
```
**Compound filters (AND/OR):**
```typescript
filters: {
  type: "and",
  filters: [
    { type: "eq", key: "category", value: "technical" },
    { type: "gte", key: "year", value: 2024 },
  ],
}
```
**Filter operators:** `eq`, `ne`, `gt`, `gte`, `lt`, `lte`
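Comparison and compound filters nest to arbitrary depth. As a sketch of how these objects compose, here are small builder helpers; the `comparison`, `and`, and `or` names are our own illustrations, not part of `@langchain/openai` (the object shapes match the examples above):

```typescript
// Shapes mirroring the file-search filter objects shown above.
type Comparison = {
  type: "eq" | "ne" | "gt" | "gte" | "lt" | "lte";
  key: string;
  value: string | number | boolean;
};
type Compound = { type: "and" | "or"; filters: Filter[] };
type Filter = Comparison | Compound;

// Hypothetical builder helpers (not part of the package).
const comparison = (
  type: Comparison["type"],
  key: string,
  value: Comparison["value"]
): Comparison => ({ type, key, value });
const and = (...filters: Filter[]): Compound => ({ type: "and", filters });
const or = (...filters: Filter[]): Compound => ({ type: "or", filters });

// Recent technical posts, OR anything tagged "evergreen".
const filter = or(
  and(
    comparison("eq", "category", "technical"),
    comparison("gte", "year", 2024)
  ),
  comparison("eq", "tag", "evergreen")
);
```

The resulting object can be passed directly as the `filters` option of `tools.fileSearch`.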
Generate or edit images from text prompts.
**Basic usage:**
```typescript
const response = await model.invoke(
  "Generate an image of a gray tabby cat hugging an otter with an orange scarf",
  { tools: [tools.imageGeneration()] }
);

// Save the generated image
const imageOutput = response.additional_kwargs.tool_outputs?.find(
  (output) => output.type === "image_generation_call"
);
if (imageOutput?.result) {
  const fs = await import("fs");
  fs.writeFileSync("output.png", Buffer.from(imageOutput.result, "base64"));
}
```
**Custom size and quality:**
```typescript
const response = await model.invoke("Draw a beautiful sunset over mountains", {
  tools: [tools.imageGeneration({
    size: "1536x1024", // also "1024x1024", "1024x1536", "auto"
    quality: "high", // also "low", "medium", "auto"
  })],
});
```
**Output format and compression:**
```typescript
tools: [tools.imageGeneration({
  outputFormat: "jpeg", // also "png", "webp"
  outputCompression: 90, // 0-100 (JPEG/WebP only)
})]
```
**Transparent background:**
```typescript
tools: [tools.imageGeneration({
  background: "transparent", // also "opaque", "auto"
  outputFormat: "png",
})]
```
**Streaming with partial images:**
```typescript
tools: [tools.imageGeneration({
  partialImages: 2, // 0-3
})]
```
**Force image generation:**
```typescript
const response = await model.invoke("A serene lake at dawn", {
  tools: [tools.imageGeneration()],
  tool_choice: { type: "image_generation" },
});
```
**Multi-turn editing:**
```typescript
const response1 = await model.invoke("Draw a red car", {
  tools: [tools.imageGeneration()],
});

const response2 = await model.invoke(
  [response1, new HumanMessage("Now change the car color to blue")],
  { tools: [tools.imageGeneration()] }
);
```
**Prompting tips:** Use terms like "draw" or "edit". For combining, say "edit the first image by adding this element" instead of "combine" or "merge".
**Supported models:** `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`
Control computer interfaces by simulating mouse clicks, keyboard input, and scrolling.
**⚠️ Security Warning:** Use only in sandboxed environments. Not for high-stakes or authenticated tasks. Always implement human-in-the-loop for important decisions.
**How it works:** Continuous loop: model sends actions → execute actions → capture screenshot → send back → repeat.
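The loop above can be sketched end-to-end with the model and screen mocked out. Everything here is illustrative: `fakeModelStep`, `runLoop`, and the `Action` type are our own stand-ins, not part of `@langchain/openai`; a real loop would get each action from the model response instead of a scripted list.

```typescript
// Minimal action vocabulary for the sketch.
type Action =
  | { type: "click"; x: number; y: number }
  | { type: "type"; text: string }
  | { type: "done" };

// Stand-in for the model: returns the next action for a given step.
function fakeModelStep(script: Action[], step: number): Action {
  return script[step] ?? { type: "done" };
}

// The continuous loop: model sends action -> execute -> screenshot -> repeat.
async function runLoop(
  script: Action[],
  execute: (action: Action) => Promise<string>
): Promise<string[]> {
  const screenshots: string[] = [];
  for (let step = 0; ; step++) {
    const action = fakeModelStep(script, step); // model proposes an action
    if (action.type === "done") break;          // model decides it is finished
    screenshots.push(await execute(action));    // execute it, capture screenshot
  }
  return screenshots; // each screenshot would be sent back to the model
}
```

In the real tool, the `execute` callback you pass to `tools.computerUse` plays the role of `execute` here, and the screenshot it returns is fed back to the model on the next turn.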
```typescript
import { ChatOpenAI, tools } from "@langchain/openai";

const model = new ChatOpenAI({ model: "computer-use-preview" });

// Assumes a browser-automation context (e.g. a Playwright `page`) and a
// `captureScreenshot()` helper that returns the current screen as base64.
const computer = tools.computerUse({
  displayWidth: 1024,
  displayHeight: 768,
  environment: "browser",
  execute: async (action) => {
    if (action.type === "screenshot") {
      return captureScreenshot();
    }
    if (action.type === "click") {
      await page.mouse.click(action.x, action.y, { button: action.button });
      return captureScreenshot();
    }
    if (action.type === "type") {
      await page.keyboard.type(action.text);
      return captureScreenshot();
    }
    if (action.type === "scroll") {
      await page.mouse.move(action.x, action.y);
      await page.evaluate(`window.scrollBy(${action.scroll_x}, ${action.scroll_y})`);
      return captureScreenshot();
    }
    return captureScreenshot();
  },
});

const llmWithComputer = model.bindTools([computer]);
const response = await llmWithComputer.invoke("Check the latest news on bing.com");
```
Run shell commands locally (designed for Codex CLI with `codex-mini-latest`).
**⚠️ Security Warning:** Running arbitrary shell commands is dangerous. Always sandbox execution or use strict allow/deny-lists.
```typescript
import { ChatOpenAI, tools } from "@langchain/openai";
import { exec } from "child_process";
import { promisify } from "util";

const execAsync = promisify(exec);

const model = new ChatOpenAI({ model: "codex-mini-latest" });

const shell = tools.localShell({
  execute: async (command) => {
    try {
      const { stdout, stderr } = await execAsync(command);
      return { stdout, stderr, exitCode: 0 };
    } catch (error) {
      // exec errors carry the failed command's exit code on `code`
      const err = error as { message?: string; code?: number };
      return { stdout: "", stderr: err.message ?? String(error), exitCode: err.code ?? 1 };
    }
  },
});

const llmWithShell = model.bindTools([shell]);
const response = await llmWithShell.invoke("List all files in the current directory");
```
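Per the warning above, one mitigation is to gate every command through an allow-list before it reaches the shell. A minimal sketch follows; `isAllowed` and the command set are our own illustrations, and a production gate should also validate arguments and paths, not just the program name:

```typescript
// Programs the tool is permitted to run (illustrative choices).
const ALLOWED_COMMANDS = new Set(["ls", "pwd", "cat", "echo"]);

// Allow a command only if its program name is on the allow-list.
function isAllowed(command: string): boolean {
  const program = command.trim().split(/\s+/)[0];
  return ALLOWED_COMMANDS.has(program);
}
```

Inside the `execute` callback, reject disallowed commands up front, e.g. `if (!isAllowed(command)) return { stdout: "", stderr: "command not allowed", exitCode: 126 };`.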
1. **Version consistency:** Always ensure `@langchain/core` versions match across packages using resolution fields
2. **API keys:** Store API keys in environment variables, never in code
3. **Tool selection:** Choose tools based on task requirements (web search for current info, code interpreter for calculations, file search for knowledge bases)
4. **Security:** Sandbox computer use and shell tools; implement human-in-the-loop for critical operations
5. **Error handling:** Always handle API errors and rate limits gracefully
6. **Streaming:** Use streaming for long-running operations to provide user feedback
7. **Resource limits:** Configure appropriate memory limits for code interpreter based on task complexity
8. **File management:** Clean up temporary files and expired containers to manage costs
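Best practice 5 can be sketched as a small retry wrapper around any model call. `withRetry` is an illustrative helper, not part of `@langchain/openai`; real code should inspect the error first (e.g. retry on HTTP 429, fail fast on 401):

```typescript
// Retry an async operation with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Backoff doubles each attempt: 500ms, 1000ms, 2000ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Usage: `const response = await withRetry(() => model.invoke("Hello world!"));`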
When implementing LangChain OpenAI integrations, focus on the practices listed above.