Build LLM-powered applications with the LangChain.js framework. Chain together interoperable components, integrate third-party services, and create AI applications with model interoperability, embeddings, vector stores, and real-time data augmentation.

This skill helps you integrate LangChain.js into your projects: connecting to multiple LLM providers, managing embeddings and vector stores, and composing components into chains, retrieval pipelines, and agents.
When a user requests LangChain.js integration, follow these steps:
1. **Verify Environment Compatibility**
- Check if the project uses a supported runtime:
- Node.js (20.x, 22.x, 24.x) with ESM or CommonJS
- Cloudflare Workers, Vercel/Next.js, Supabase Edge Functions
- Browser, Deno, or Bun
- For unsupported environments, inform the user of LangChain.js requirements
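The Node.js version check can be sketched as a small helper (`isSupportedNode` is a hypothetical name; the supported majors come from the list above):

```typescript
// Return true when the given Node.js version string has a supported major version
function isSupportedNode(version: string): boolean {
  const major = Number(version.replace(/^v/, "").split(".")[0]);
  return [20, 22, 24].includes(major);
}

// Check the currently running Node.js process
console.log(isSupportedNode(process.version));
```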
2. **Install LangChain.js**
- Detect the package manager in use (check for `package-lock.json`, `pnpm-lock.yaml`, `yarn.lock`, or `bun.lockb`)
- Install using the appropriate command:
```bash
npm install langchain
# or
pnpm add langchain
# or
yarn add langchain
# or
bun add langchain
```
3. **Set Up Basic Configuration**
- Create or update environment variables for API keys (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)
- If the user hasn't specified a provider, ask which LLM provider they plan to use
- Install provider-specific packages if needed (e.g., `@langchain/openai`, `@langchain/anthropic`)
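A minimal `.env` sketch (placeholder values; which keys are needed depends on the chosen provider):

```bash
# .env -- never commit real keys to version control
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
```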
4. **Create Initial Implementation**
- Based on the user's requirements, implement one of these common patterns:
- **Simple LLM call**: Basic chat model integration
- **Chain**: Sequential operations with multiple components
- **RAG (Retrieval-Augmented Generation)**: Vector store + retriever + LLM
- **Agent**: Autonomous decision-making with tools
- Follow LangChain.js best practices for the chosen pattern
5. **Add Documentation**
- Add inline comments explaining the LangChain components used
- If creating new files, include brief header comments about the LangChain pattern implemented
- Reference official docs: https://docs.langchain.com/oss/javascript/langchain/overview
**Simple LLM call**: basic chat model integration:

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Instantiate a chat model; `model` and `temperature` are configurable
const model = new ChatOpenAI({
  model: "gpt-4",
  temperature: 0.7,
});

const response = await model.invoke("Hello, world!");
```
**Chain**: piping a prompt template into a model:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

// Template variables in braces are filled in at invoke time
const prompt = PromptTemplate.fromTemplate("Tell me a joke about {topic}");
const model = new ChatOpenAI();

// `pipe` composes runnables into a sequential chain
const chain = prompt.pipe(model);
const result = await chain.invoke({ topic: "programming" });
```
**RAG (Retrieval-Augmented Generation)**: vector store + retriever + LLM:

```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// Embed a few sample texts into an in-memory vector store
const embeddings = new OpenAIEmbeddings();
const vectorStore = await MemoryVectorStore.fromTexts(
  ["LangChain helps build LLM apps", "Vector stores enable semantic search"],
  [{}, {}],
  embeddings
);

// Retrieve documents relevant to a query
const retriever = vectorStore.asRetriever();
const docs = await retriever.invoke("What enables semantic search?");

const model = new ChatOpenAI();
// Combine `docs` into the prompt context passed to the model for RAG
```
When users need more sophisticated capabilities (agents with multiple tools, conversation memory, streaming, structured output), point them to the official LangChain.js documentation referenced above.