Official Python library for OpenAI API with sync/async clients, streaming, vision, realtime audio, and webhook support
Official Python library providing convenient access to the OpenAI REST API from Python 3.9+ applications. Includes type definitions, synchronous and asynchronous clients, streaming support, vision capabilities, and realtime audio API.
Install the OpenAI Python library from PyPI:
```bash
pip install openai
```
For async HTTP performance with aiohttp:
```bash
pip install "openai[aiohttp]"
```
Set your API key as an environment variable (recommended):
```bash
export OPENAI_API_KEY="your-api-key-here"
```
Or create a `.env` file with:
```
OPENAI_API_KEY=your-api-key-here
```
Get your API key from: https://platform.openai.com/settings/organization/api-keys
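The SDK does not read `.env` files on its own; most projects use the third-party `python-dotenv` package for that. As a dependency-free sketch, a simple `KEY=value` loader (the `load_env` helper below is illustrative, not part of the SDK) can populate the environment before the client is created:

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Load simple KEY=value lines from a .env file into os.environ.

    Skips blank lines and comments; existing environment variables win.
    """
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

For anything beyond this (multiline values, interpolation), `pip install python-dotenv` and call its `load_dotenv()` instead.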
Generate text using the Responses API:
```python
import os
from openai import OpenAI
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
response = client.responses.create(
    model="gpt-5.2",
    instructions="You are a coding assistant that talks like a pirate.",
    input="How do I check if a Python object is an instance of a class?",
)
print(response.output_text)
```
Alternatively, use the Chat Completions API with a message list:
```python
from openai import OpenAI
client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "developer", "content": "Talk like a pirate."},
        {"role": "user", "content": "How do I check if a Python object is an instance of a class?"},
    ],
)
print(completion.choices[0].message.content)
```
Process an image from a URL:
```python
response = client.responses.create(
    model="gpt-5.2",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What is in this image?"},
            {"type": "input_image", "image_url": "https://example.com/image.jpg"},
        ],
    }],
)
```
Process images with base64 encoding:
```python
import base64
with open("path/to/image.png", "rb") as image_file:
    b64_image = base64.b64encode(image_file.read()).decode("utf-8")

response = client.responses.create(
    model="gpt-5.2",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What is in this image?"},
            {"type": "input_image", "image_url": f"data:image/png;base64,{b64_image}"},
        ],
    }],
)
```
Stream responses using Server-Sent Events (SSE):
```python
stream = client.responses.create(
    model="gpt-5.2",
    input="Write a one-sentence bedtime story about a unicorn.",
    stream=True,
)
for event in stream:
    print(event)
```
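Each event carries a `type` and a payload; in practice you usually want only the generated text. A minimal sketch of filtering the stream (the `collect_text` helper is illustrative, and assumes the `response.output_text.delta` event shape used by the Responses API):

```python
def collect_text(stream) -> str:
    """Concatenate text deltas from a Responses API event stream,
    ignoring lifecycle events such as response.created / response.done."""
    chunks = []
    for event in stream:
        if event.type == "response.output_text.delta":
            chunks.append(event.delta)
    return "".join(chunks)
```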
Use AsyncOpenAI for async operations:
```python
import asyncio
import os

from openai import AsyncOpenAI

client = AsyncOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

async def main():
    response = await client.responses.create(
        model="gpt-5.2",
        input="Explain quantum computing to a beginner.",
    )
    print(response.output_text)

asyncio.run(main())
```
Async streaming:
```python
async def stream_example():
    stream = await client.responses.create(
        model="gpt-5.2",
        input="Write a haiku about programming.",
        stream=True,
    )
    async for event in stream:
        print(event)

asyncio.run(stream_example())
```
Use the Realtime API for low-latency conversational experiences with text and audio:
```python
import asyncio

from openai import AsyncOpenAI

async def main():
    client = AsyncOpenAI()
    async with client.realtime.connect(model="gpt-realtime") as connection:
        await connection.session.update(
            session={"type": "realtime", "output_modalities": ["text"]}
        )
        await connection.conversation.item.create(
            item={
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello!"}],
            }
        )
        await connection.response.create()
        async for event in connection:
            if event.type == "response.output_text.delta":
                print(event.delta, flush=True, end="")
            elif event.type == "response.output_text.done":
                print()
            elif event.type == "response.done":
                break

asyncio.run(main())
```
Handle realtime errors:
```python
async for event in connection:
    if event.type == "error":
        print(f"Error: {event.error.message}")
        print(f"Code: {event.error.code}")
```
List endpoints return auto-paginating iterators that fetch additional pages on demand as you iterate:
```python
all_jobs = []
for job in client.fine_tuning.jobs.list(limit=20):
    all_jobs.append(job)
```
Manual pagination control (shown with the async client, hence `await`):
```python
first_page = await client.fine_tuning.jobs.list(limit=20)
if first_page.has_next_page():
    next_page = await first_page.get_next_page()
    print(f"Fetched {len(next_page.data)} items")
```
Upload files for fine-tuning or assistants:
```python
from pathlib import Path
client.files.create(
    file=Path("training_data.jsonl"),
    purpose="fine-tune",
)
```
The library uses TypedDicts for request parameters and Pydantic models for responses:
```python
response = client.responses.create(...)
json_str = response.to_json()
dict_data = response.to_dict()

completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello"}],
    model="gpt-5.2",
    response_format={"type": "json_object"},
)
```
Enable type checking in VS Code by setting `python.analysis.typeCheckingMode` to `basic`.
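In VS Code that is a one-line entry in `settings.json` (user or workspace level):

```json
{
  "python.analysis.typeCheckingMode": "basic"
}
```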
Use aiohttp for improved async concurrency:
```python
import asyncio
from openai import AsyncOpenAI, DefaultAioHttpClient

async def main():
    async with AsyncOpenAI(http_client=DefaultAioHttpClient()) as client:
        response = await client.chat.completions.create(
            messages=[{"role": "user", "content": "Hello"}],
            model="gpt-5.2",
        )
        print(response.choices[0].message.content)

asyncio.run(main())
```