Interface with 100+ LLM providers (OpenAI, Anthropic, Bedrock, Azure, VertexAI, Groq, and more) through a unified, OpenAI-compatible API. LiteLLM provides both a Python SDK and an AI Gateway (proxy server) for calling models from any supported provider with a consistent interface.
This skill helps you integrate LiteLLM into your projects. Start by installing the package; the `proxy` extra is required to run the AI Gateway:
```bash
pip install litellm              # Python SDK only
pip install 'litellm[proxy]'     # adds the AI Gateway (proxy server)
```
Use LiteLLM to call any supported provider with OpenAI-compatible syntax:
```python
from litellm import completion
import os

os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"

# OpenAI
response = completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Anthropic
response = completion(
    model="anthropic/claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Hello!"}]
)

# AWS Bedrock (reads AWS credentials from the environment)
response = completion(
    model="bedrock/anthropic.claude-v2",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
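The same `completion()` call supports streaming: passing `stream=True` returns an iterator of OpenAI-style chunks. A quick sketch, assuming the API keys set above:

```python
from litellm import completion

# stream=True yields OpenAI-style chunks as they arrive
response = completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about APIs."}],
    stream=True
)
for chunk in response:
    # delta.content can be None on some chunks, hence the "or ''"
    print(chunk.choices[0].delta.content or "", end="")
```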
Set up a centralized LLM gateway for your team or application:
```bash
# Quick start: proxy a single model
litellm --model gpt-4o

# Or run with a config file (see the sketch below)
litellm --config config.yaml
```
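The config file maps public model names to provider deployments. A minimal `config.yaml` sketch, assuming an OpenAI key in the environment (the `os.environ/` prefix is LiteLLM's syntax for reading environment variables at startup):

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
```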
Then call it from any OpenAI-compatible client:
```python
import openai

# Point any OpenAI client at the gateway
client = openai.OpenAI(
    api_key="anything",              # ignored unless the proxy has auth configured
    base_url="http://0.0.0.0:4000"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
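Any OpenAI-compatible tooling can hit the same endpoint. For a quick smoke test from the shell (assuming the gateway is running locally on port 4000 without auth):

```bash
curl http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'
```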
Connect MCP servers to any LLM provider:
```python
import asyncio

import litellm
from litellm import experimental_mcp_client
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="python",
    args=["mcp_server.py"]
)

async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Load MCP tools in OpenAI function-calling format
            tools = await experimental_mcp_client.load_mcp_tools(
                session=session,
                format="openai"
            )

            # Use the tools with any LiteLLM model
            response = await litellm.acompletion(
                model="gpt-4o",
                messages=[{"role": "user", "content": "What's 3 + 5?"}],
                tools=tools
            )

asyncio.run(main())
```
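If the model requests a tool, the same `experimental_mcp_client` module can execute it against the MCP session. A minimal sketch of that round trip, continuing inside `main()` above and assuming the response actually contains a tool call:

```python
# Continuing inside main(): execute the first tool call the model requested.
# Assumes response.choices[0].message.tool_calls is non-empty.
openai_tool_call = response.choices[0].message.tool_calls[0]
result = await experimental_mcp_client.call_openai_tool(
    session=session,
    openai_tool=openai_tool_call,
)
print(result.content)
```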
Invoke agents built on LangGraph, Vertex AI Agent Engine, Azure AI Foundry, and other platforms:
```python
import asyncio
from uuid import uuid4

from a2a.types import MessageSendParams, SendMessageRequest
from litellm.a2a_protocol import A2AClient

async def main():
    client = A2AClient(base_url="http://localhost:10001")
    request = SendMessageRequest(
        id=str(uuid4()),
        params=MessageSendParams(
            message={
                "role": "user",
                "parts": [{"kind": "text", "text": "Hello!"}],
                "messageId": uuid4().hex,
            }
        )
    )
    response = await client.send_message(request)

asyncio.run(main())
```
Implement retry/fallback logic across multiple deployments:
```python
import os

from litellm import Router

# Two deployments registered under the same model_name ("gpt-4");
# the Router load-balances between them.
router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {
                # Azure deployments typically also need api_base and api_version
                "model": "azure/gpt-4-deployment",
                "api_key": os.getenv("AZURE_API_KEY"),
            },
        },
        {
            "model_name": "gpt-4",
            "litellm_params": {
                "model": "openai/gpt-4",
                "api_key": os.getenv("OPENAI_API_KEY"),
            },
        },
    ]
)

response = router.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
Typical scenarios where this skill applies:

1. **Multi-Provider Applications**: Build apps that can switch between LLM providers without code changes
2. **Cost Optimization**: Route requests to the most cost-effective provider based on task requirements
3. **Reliability**: Implement automatic fallbacks when the primary provider is down or rate-limited (see the fallback sketch after this list)
4. **Team Enablement**: Deploy a central AI Gateway with access control and cost tracking
5. **MCP Integration**: Connect MCP tools to any LLM provider, not just Claude
6. **Agent Orchestration**: Work with agents from multiple platforms using unified A2A protocol
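For the reliability case, the Router exposes retry and fallback parameters directly. A minimal sketch, assuming a second, cheaper model group is registered as the fallback:

```python
import os

from litellm import Router

# If every "gpt-4" deployment fails, the Router retries, then falls back
# to the "gpt-3.5-turbo" model group.
router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {
                "model": "openai/gpt-4",
                "api_key": os.getenv("OPENAI_API_KEY"),
            },
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "openai/gpt-3.5-turbo",
                "api_key": os.getenv("OPENAI_API_KEY"),
            },
        },
    ],
    num_retries=2,                               # retries per failed call
    fallbacks=[{"gpt-4": ["gpt-3.5-turbo"]}],    # model-group fallback order
)

response = router.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
```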