GenAI agent framework for building production-grade applications with LLMs using Pydantic validation, type safety, and dependency injection
A Python agent framework designed to help you build production-grade GenAI applications and workflows quickly, confidently, and painlessly.
Pydantic AI brings the FastAPI development experience to GenAI app and agent development. Built by the Pydantic team, it provides a type-safe, model-agnostic framework with seamless observability, powerful evaluation tools, and production-ready features like durable execution and human-in-the-loop tool approval.
When a user requests help with Pydantic AI, follow these steps:
First, determine what the user wants to build.
Install Pydantic AI with the appropriate provider:
```bash
pip install 'pydantic-ai[openai]'            # OpenAI models
pip install 'pydantic-ai[anthropic]'         # Anthropic models
pip install 'pydantic-ai[gemini]'            # Google Gemini models
pip install 'pydantic-ai[openai,anthropic]'  # multiple providers at once
pip install 'pydantic-ai-slim[all]'          # minimal core plus all optional extras
```
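Providers read their API keys from standard environment variables; set the ones for the providers you installed (the values below are placeholders, not real keys):

```shell
# placeholder values; substitute your real keys
export OPENAI_API_KEY='sk-your-openai-key'
export ANTHROPIC_API_KEY='sk-ant-your-anthropic-key'
```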
Create an agent with the appropriate configuration:
**Basic Agent:**
```python
from pydantic_ai import Agent

agent = Agent(
    'openai:gpt-4o',
    instructions='Your instructions here',
)
```
**Agent with Structured Output:**
```python
from pydantic import BaseModel, Field
from pydantic_ai import Agent

class OutputModel(BaseModel):
    field1: str = Field(description='Description')
    field2: int = Field(description='Description', ge=0, le=100)

agent = Agent(
    'anthropic:claude-sonnet-4-0',
    output_type=OutputModel,
    instructions='Your instructions here',
)
```
**Agent with Dependencies:**
```python
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext

@dataclass
class Dependencies:
    db: DatabaseConn  # your own database-connection type
    user_id: int

agent = Agent(
    'openai:gpt-4o',
    deps_type=Dependencies,
    output_type=OutputModel,  # the Pydantic model defined above
    instructions='Your instructions here',
)
```
Use the `@agent.instructions` decorator for context-aware instructions:
```python
@agent.instructions
async def dynamic_instructions(ctx: RunContext[Dependencies]) -> str:
    user_data = await ctx.deps.db.get_user(ctx.deps.user_id)
    return f"The user's name is {user_data.name}"
```
Add tools that the agent can call:
```python
@agent.tool
async def get_balance(ctx: RunContext[Dependencies], include_pending: bool) -> float:
    """Return the user's account balance.

    Args:
        include_pending: Whether to include pending transactions.
    """
    return await ctx.deps.db.get_balance(
        ctx.deps.user_id,
        include_pending=include_pending,
    )

@agent.tool(requires_approval=True)  # human-in-the-loop approval before the tool runs
async def transfer_money(
    ctx: RunContext[Dependencies],
    amount: float,
    recipient: str,
) -> str:
    """Transfer money to another account."""
    await ctx.deps.db.transfer(ctx.deps.user_id, recipient, amount)
    return f'Transferred ${amount} to {recipient}'
```
Execute the agent synchronously or asynchronously:
**Synchronous:**
```python
result = agent.run_sync('User query here')
print(result.output)
```
**Asynchronous:**
```python
result = await agent.run('User query here', deps=deps)
print(result.output)
```
**With streaming:**
```python
async with agent.run_stream('User query here') as stream:
    # delta=True yields only the newly generated text on each iteration
    async for chunk in stream.stream_text(delta=True):
        print(chunk, end='', flush=True)
```
Integrate with Pydantic Logfire for monitoring:
```python
import logfire

from pydantic_ai import Agent

logfire.configure()
logfire.instrument_pydantic_ai()  # instrument all agent runs

agent = Agent(
    'openai:gpt-4o',
    instructions='Your instructions here',
)

with logfire.span('agent-run'):
    result = await agent.run('User query')
```
Create evaluation tests:
```python
from pydantic_evals import Case, Dataset

# evaluation tooling lives in the separate `pydantic-evals` package
async def answer(question: str) -> str:
    result = await agent.run(question, deps=deps)
    return result.output

dataset = Dataset(
    cases=[
        Case(name='balance', inputs='What is my balance?'),
        Case(name='transfer', inputs='Transfer $100'),
    ],
)
report = dataset.evaluate_sync(answer)
report.print()
```
**Model Context Protocol (MCP):**
```python
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerSSE

server = MCPServerSSE('http://localhost:3001/sse')  # your MCP server's SSE endpoint
agent = Agent('openai:gpt-4o', toolsets=[server])
```
**Durable Execution:**
```python
from pydantic_ai.durable_exec.temporal import TemporalAgent

# wraps an existing `agent` so runs survive restarts via Temporal
# (DBOS and Prefect integrations follow the same pattern)
durable_agent = TemporalAgent(agent)
```
**Graph-Based Workflows:**
```python
from dataclasses import dataclass

from pydantic_graph import BaseNode, End, Graph, GraphRunContext

# graph support lives in the separate `pydantic-graph` package;
# each node's `run` method returns the next node, or End to finish
@dataclass
class Triage(BaseNode):
    async def run(self, ctx: GraphRunContext) -> 'Answer':
        return Answer()

@dataclass
class Answer(BaseNode):
    async def run(self, ctx: GraphRunContext) -> End[str]:
        return End('done')

graph = Graph(nodes=(Triage, Answer))
```
**Minimal Example:**
```python
from pydantic_ai import Agent
agent = Agent('openai:gpt-4o', instructions='Be concise.')
result = agent.run_sync('Explain Pydantic AI in one sentence.')
print(result.output)
```
**Structured Output Example:**
```python
from pydantic import BaseModel
from pydantic_ai import Agent

class Analysis(BaseModel):
    sentiment: str
    confidence: float

agent = Agent('anthropic:claude-sonnet-4-0', output_type=Analysis)
result = agent.run_sync('Analyze: I love this framework!')
print(f"Sentiment: {result.output.sentiment}, Confidence: {result.output.confidence}")
```
**Tool Example:**
```python
from pydantic_ai import Agent, RunContext

agent = Agent('openai:gpt-4o')

@agent.tool
async def get_weather(ctx: RunContext[None], city: str) -> str:
    """Get current weather for a city."""
    # In production, call a real weather API
    return f"Sunny, 72°F in {city}"

result = agent.run_sync("What's the weather in San Francisco?")
print(result.output)
```