Use deprecated LangChain v0.x functionality including legacy chains (LLMChain, ConversationalRetrievalQAChain, RetrievalQAChain), indexing APIs, and community integrations for backward compatibility with existing applications.
Use this skill when working with legacy chains, the legacy indexing API, or existing applications that have not yet migrated to LangChain v1.0.
**Important:** For new projects, use LangChain v1.0 with the `createAgent` API instead. This package is in maintenance mode and receives only bug fixes and security updates.
When helping users work with @langchain/classic:
1. **Install the package**:
```bash
npm install @langchain/classic @langchain/core
```
2. **Identify the use case**:
- Ask if this is for maintaining existing code or new development
- For new development, strongly recommend LangChain v1.0 instead
- Explain that @langchain/classic is in maintenance mode
When implementing or fixing legacy chain code:
1. **LLMChain** - Basic LLM calls with prompt templates:
```typescript
import { LLMChain } from "@langchain/classic/chains";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
const model = new ChatOpenAI({ model: "gpt-4" });
const prompt = PromptTemplate.fromTemplate(
"What is a good name for a company that makes {product}?"
);
const chain = new LLMChain({ llm: model, prompt });
const result = await chain.call({ product: "colorful socks" });
```
2. **ConversationalRetrievalQAChain** - QA over documents with chat history:
```typescript
import { ConversationalRetrievalQAChain } from "@langchain/classic/chains";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "@langchain/classic/vectorstores/memory";
const vectorStore = await MemoryVectorStore.fromTexts(
["Document 1 text...", "Document 2 text..."],
[{ id: 1 }, { id: 2 }],
new OpenAIEmbeddings()
);
const model = new ChatOpenAI({ model: "gpt-4" });
const chain = ConversationalRetrievalQAChain.fromLLM(
model,
vectorStore.asRetriever()
);
const result = await chain.call({
question: "What is in the documents?",
chat_history: [],
});
```
3. **Other legacy chains**:
- RetrievalQAChain - QA without conversation memory
- StuffDocumentsChain - Stuffing documents into prompts
- MapReduceDocumentsChain - Map-reduce over documents
- RefineDocumentsChain - Iterative refinement over documents
When users ask about migrating or you identify migration opportunities:
1. **From langchain v0.x to @langchain/classic**:
```typescript
// Before (v0.x)
import { LLMChain } from "langchain/chains";
// After (v1.0 + classic)
import { LLMChain } from "@langchain/classic/chains";
```
2. **From @langchain/classic to langchain v1.0** (recommended for new code):
```typescript
// Before (using LLMChain)
import { LLMChain } from "@langchain/classic/chains";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
const model = new ChatOpenAI({ model: "gpt-4" });
const prompt = PromptTemplate.fromTemplate(
"What is a good name for a company that makes {product}?"
);
const chain = new LLMChain({ llm: model, prompt });
const result = await chain.call({ product: "colorful socks" });
// After (using createAgent)
import { createAgent } from "langchain";
const agent = createAgent({
model: "openai:gpt-4",
systemPrompt: "You are a creative assistant that helps name companies.",
});
const result = await agent.invoke({
messages: [
{
role: "user",
content: "What is a good name for a company that makes colorful socks?",
},
],
});
```
3. **Always explain the trade-offs**:
- @langchain/classic: Backward compatible but maintenance mode only
- langchain v1.0: Better performance, cleaner API, active development
When working with document indexing:
1. Use RecordManager for managing document updates in vector stores
2. Import from @langchain/classic when maintaining existing indexing code
3. Recommend modern alternatives for new implementations
When troubleshooting @langchain/classic issues:
1. **Check import paths**:
- Ensure imports are from "@langchain/classic" not "langchain"
- Verify peer dependency @langchain/core is installed
2. **Version compatibility**:
- Confirm @langchain/classic version is compatible with @langchain/core
- Check for breaking changes in release notes
3. **Common issues**:
- Missing peer dependencies
- Incorrect import paths after migration
- Mixing v0.x and v1.0 APIs
Direct users to:
1. **Always recommend v1.0 for new projects** - Make it clear that @langchain/classic is for backward compatibility only
2. **Maintenance mode awareness** - Inform users that new features won't be added to this package
3. **Security first** - Prioritize security updates even in maintenance mode
4. **Migration path** - Always provide a path forward to modern APIs when appropriate
5. **No feature requests** - Explain that feature requests should target langchain v1.0 instead
User: "My LLMChain import is broken after upgrading to LangChain v1.0"
Response:
1. Identify that they're using legacy chain
2. Install @langchain/classic
3. Update import from "langchain/chains" to "@langchain/classic/chains"
4. Test the fix
5. Suggest migration path to createAgent for future work
User: "Help me add a Q&A feature over my documents with chat history"
Response:
1. Ask if maintaining existing code or new development
2. If new: Recommend langchain v1.0 createAgent with retrieval tool
3. If existing: Show ConversationalRetrievalQAChain from @langchain/classic
4. Explain trade-offs and migration benefits
User: "Getting error 'Cannot find module @langchain/classic'"
Response:
1. Verify @langchain/classic is installed
2. Check peer dependency @langchain/core is installed
3. Verify import path syntax
4. Test with simple example
5. Check node_modules and package.json
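Steps 1–2 and 5 can be checked from the project root with a quick diagnostic (assuming npm as the package manager):

```shell
# Confirm both packages are installed and show their resolved versions;
# a missing or "invalid" entry points to the problem
npm ls @langchain/classic @langchain/core

# If either is missing, install them together so peer ranges stay compatible
npm install @langchain/classic @langchain/core
```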