Expert guidance for working with LangChain 1.0+ and LangGraph to build AI applications, agents, and RAG systems using current APIs, migration patterns, and best practices.
**All code must follow LangChain >= 1.0.0 documentation and patterns.** The v1.0 release introduced breaking changes. Always reference the official documentation at https://docs.langchain.com/ for current patterns.
Before writing any LangChain code, verify that the development environment has `langchain >= 1.0.0` installed.
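One way to check, as a sketch using only the standard library (the `is_langchain_v1` helper is ours, not a LangChain API):

```python
from importlib import metadata

def is_langchain_v1(version: str) -> bool:
    """Return True when the version string is 1.0.0 or later."""
    return int(version.split(".")[0]) >= 1

try:
    installed = metadata.version("langchain")
    status = "OK" if is_langchain_v1(installed) else "upgrade with: pip install -U langchain"
    print(f"langchain {installed}: {status}")
except metadata.PackageNotFoundError:
    print("langchain is not installed: pip install -U langchain")
```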
When reviewing existing code or user requests, watch for these deprecated patterns and migrate them:
| Deprecated (Pre-v1) | Current (v1.0+) | Migration Action |
|---------------------|-----------------|------------------|
| `create_react_agent()` | `create_agent()` | Replace function call and update imports |
| `create_tool_calling_agent()` | `create_agent()` | Replace function call and update imports |
| `AgentExecutor` | `create_agent()` returns graph | Remove executor wrapper, use graph directly |
| `langchain.chat_models.ChatOpenAI` | `langchain_openai.ChatOpenAI` | Update import to provider-specific package |
| `langgraph.prebuilt.create_react_agent` | `langchain.agents.create_agent` | Update import path |
| `langchain_community.tools.tavily_search.TavilySearchResults` | `langchain_tavily.TavilySearch` | Update import and class name |
| `langchain.hub` | `langchain_classic.hub` | Only use for legacy prompts, avoid if possible |
| `ConversationBufferMemory` | LangGraph checkpointing | Implement stateful graphs with checkpointing |
| `LLMChain` | LCEL (`prompt \| model \| parser`) | Rewrite as LCEL chain |
| `SequentialChain` | LCEL with `RunnableSequence` | Rewrite as LCEL pipeline |
When creating agents, use the modern `create_agent()` pattern:
```python
from langchain.agents import create_agent
from langchain_core.tools import tool

@tool
def my_tool(param: str) -> str:
    """Tool docstring becomes the tool description."""
    return f"Processed: {param}"  # placeholder implementation

agent = create_agent(
    model="openai:gpt-4o-mini",  # String format for model specification
    tools=[my_tool],
    system_prompt="You are a helpful assistant...",
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "your query here"}]
})
```
**Key differences from pre-v1:**
- `create_agent()` returns a LangGraph graph you invoke directly; there is no `AgentExecutor` wrapper.
- The model can be specified as a provider-prefixed string such as `"openai:gpt-4o-mini"`.
- Input is a dict with a `"messages"` list rather than a bare prompt string.
Define tools using the `@tool` decorator with proper type hints:
```python
from typing import Annotated

from langchain_core.tools import tool

@tool
def search_tool(query: Annotated[str, "The search query"]) -> str:
    """Search the web for information.

    The docstring becomes the tool description shown to the LLM.
    """
    # Implementation goes here
    return "..."  # placeholder result
```
**Important:**
- The docstring becomes the tool description the LLM sees; write it for the model, not just for human readers.
- Type hints (optionally with `Annotated` descriptions) define the argument schema, so keep them accurate and specific.
Implement streaming for real-time agent responses:
```python
# stream_mode="messages": token-by-token LLM output
for token, metadata in agent.stream(
    {"messages": [{"role": "user", "content": "..."}]},
    stream_mode="messages",
):
    print(token.content, end="", flush=True)

# stream_mode="values": full graph state after each step
for step in agent.stream(
    {"messages": [{"role": "user", "content": "..."}]},
    stream_mode="values",
):
    step["messages"][-1].pretty_print()
```
For agents that need access to external resources (databases, APIs), use runtime context:
```python
from dataclasses import dataclass

from langchain.agents import create_agent
from langchain_core.tools import tool
from langgraph.runtime import get_runtime

@dataclass
class RuntimeContext:
    db: SQLDatabase        # your database wrapper
    api_client: APIClient  # your API client

@tool
def query_database(query: str) -> str:
    """Execute SQL query against the database."""
    runtime = get_runtime(RuntimeContext)
    return runtime.context.db.run(query)

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[query_database],
    context_schema=RuntimeContext,
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "..."}]},
    context=RuntimeContext(db=db, api_client=client),
)
```
Always use these import paths for LangChain 1.0+:
```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents import create_agent
from langchain_tavily import TavilySearch # NOT langchain_community
from langchain_core.runnables import RunnableSequence, RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
```
Replace old chains with LCEL pipelines:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
])
model = ChatOpenAI(model="gpt-4o-mini")

chain = prompt | model | StrOutputParser()

result = chain.invoke({"input": "What is LangChain?"})

for chunk in chain.stream({"input": "Explain LCEL"}):
    print(chunk, end="", flush=True)
```
When migrating pre-v1 code, apply the table above and produce the modern pattern end to end, as in this worked example:
**User request:** "Create an agent that can search the web and answer questions"
**Your response:**
```python
from langchain.agents import create_agent
from langchain_tavily import TavilySearch

search = TavilySearch(max_results=3)

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[search],
    system_prompt="You are a helpful research assistant. Use web search to find current information.",
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "What are the latest developments in AI?"}]
})
print(result["messages"][-1].content)
```