Configure and use OpenAI-compatible AI providers with the Vercel AI SDK. Set up custom providers, manage authentication, and implement chat, completion, and embedding models with type-safe model IDs.
This skill helps you set up and configure OpenAI-compatible AI providers using the Vercel AI SDK's `@ai-sdk/openai-compatible` package. Use it to integrate any provider that exposes an OpenAI-compatible API; the package is a lighter-weight alternative to the full OpenAI provider.
First, install the required package:
```bash
npm i @ai-sdk/openai-compatible
```
Also ensure you have the core AI SDK installed:
```bash
npm i ai
```
Create a provider instance using `createOpenAICompatible`:
```typescript
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateText } from 'ai';
const provider = createOpenAICompatible({
  baseURL: 'https://api.example.com/v1',
  name: 'example',
  apiKey: process.env.MY_API_KEY,
});

const { text } = await generateText({
  model: provider.chatModel('meta-llama/Llama-3-70b-chat-hf'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
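If you want tokens as they arrive rather than a single response, the same provider instance works with `streamText` from the core `ai` package. A minimal sketch, reusing the placeholder endpoint and model ID from the example above:
```typescript
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { streamText } from 'ai';

const provider = createOpenAICompatible({
  baseURL: 'https://api.example.com/v1', // placeholder endpoint
  name: 'example',
  apiKey: process.env.MY_API_KEY,
});

const result = streamText({
  model: provider.chatModel('meta-llama/Llama-3-70b-chat-hf'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

// Consume the text stream incrementally
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```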
For providers requiring custom authentication headers:
```typescript
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
const provider = createOpenAICompatible({
  baseURL: 'https://api.example.com/v1',
  name: 'example',
  headers: {
    Authorization: `Bearer ${process.env.MY_API_KEY}`,
    'Custom-Header': 'value',
  },
});
```
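Some gateways also expect extra URL query parameters (for example an API version). Recent versions of `@ai-sdk/openai-compatible` accept a `queryParams` option for this; the endpoint and version string below are placeholders, so check your installed version before relying on it:
```typescript
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

const provider = createOpenAICompatible({
  baseURL: 'https://api.example.com/v1', // placeholder endpoint
  name: 'example',
  headers: {
    Authorization: `Bearer ${process.env.MY_API_KEY}`,
  },
  // Appended to every request URL, e.g. ?api-version=2024-06-01
  queryParams: {
    'api-version': '2024-06-01', // placeholder value
  },
});
```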
Add TypeScript types for model ID auto-completion:
```typescript
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateText } from 'ai';

type ExampleChatModelIds =
  | 'meta-llama/Llama-3-70b-chat-hf'
  | 'meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo'
  | (string & {});

type ExampleCompletionModelIds =
  | 'codellama/CodeLlama-34b-Instruct-hf'
  | 'Qwen/Qwen2.5-Coder-32B-Instruct'
  | (string & {});

type ExampleEmbeddingModelIds =
  | 'BAAI/bge-large-en-v1.5'
  | 'bert-base-uncased'
  | (string & {});

const provider = createOpenAICompatible<
  ExampleChatModelIds,
  ExampleCompletionModelIds,
  ExampleEmbeddingModelIds
>({
  baseURL: 'https://api.example.com/v1',
  name: 'example',
  apiKey: process.env.MY_API_KEY,
});

// Model IDs now auto-complete; (string & {}) still permits arbitrary strings
const { text } = await generateText({
  model: provider.chatModel('meta-llama/Llama-3-70b-chat-hf'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
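The typed provider also exposes completion and embedding models. A short sketch reusing the `provider` instance and the placeholder model IDs from the type definitions above (`embed` and `generateText` come from the core `ai` package):
```typescript
import { embed, generateText } from 'ai';

// Completion-style model, typed by ExampleCompletionModelIds
const { text: completion } = await generateText({
  model: provider.completionModel('codellama/CodeLlama-34b-Instruct-hf'),
  prompt: 'def fibonacci(n):',
});

// Embedding model, typed by ExampleEmbeddingModelIds
const { embedding } = await embed({
  model: provider.textEmbeddingModel('BAAI/bge-large-en-v1.5'),
  value: 'sunny day at the beach',
});
```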
Always store API keys securely in environment variables:
```bash
MY_API_KEY=your_api_key_here
```
Ensure `.env.local` is in your `.gitignore`.
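It can also help to fail fast when the variable is missing rather than sending unauthenticated requests. A minimal sketch using the `MY_API_KEY` variable from above:
```typescript
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

// Fail fast at startup if the key is missing
const apiKey = process.env.MY_API_KEY;
if (!apiKey) {
  throw new Error('MY_API_KEY is not set; add it to .env.local');
}

const provider = createOpenAICompatible({
  baseURL: 'https://api.example.com/v1', // placeholder endpoint
  name: 'example',
  apiKey,
});
```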
When implementing for a user's request:
1. **Identify the provider details**: Base URL, authentication method, available models
2. **Choose authentication approach**: `apiKey` parameter vs custom `headers`
3. **Define model IDs**: Add TypeScript types if the user wants auto-completion
4. **Test the configuration**: Generate a simple text completion to verify connectivity (see the provider examples and the connectivity check below)
```typescript
// Together AI exposes an OpenAI-compatible endpoint
const together = createOpenAICompatible({
  baseURL: 'https://api.together.xyz/v1',
  name: 'together',
  apiKey: process.env.TOGETHER_API_KEY,
});

const { text } = await generateText({
  model: together.chatModel('meta-llama/Llama-3-70b-chat-hf'),
  prompt: 'Explain quantum computing',
});
```
```typescript
// Perplexity exposes an OpenAI-compatible endpoint
const perplexity = createOpenAICompatible({
  baseURL: 'https://api.perplexity.ai',
  name: 'perplexity',
  apiKey: process.env.PERPLEXITY_API_KEY,
});

const { text } = await generateText({
  model: perplexity.chatModel('llama-3.1-sonar-small-128k-online'),
  prompt: 'What are the latest developments in AI?',
});
```
```typescript
// Groq's OpenAI-compatible endpoint lives under /openai/v1
const groq = createOpenAICompatible({
  baseURL: 'https://api.groq.com/openai/v1',
  name: 'groq',
  apiKey: process.env.GROQ_API_KEY,
});

const { text } = await generateText({
  model: groq.chatModel('llama3-70b-8192'),
  prompt: 'Write a Python function to calculate fibonacci',
});
```
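To cover step 4 above, wrap a small test call in a try/catch so a wrong base URL or key surfaces immediately. A sketch reusing the `groq` provider from the previous example:
```typescript
// Quick connectivity check: a tiny prompt keeps the test cheap
try {
  const { text } = await generateText({
    model: groq.chatModel('llama3-70b-8192'),
    prompt: 'Reply with the single word "ok".',
  });
  console.log('Provider reachable:', text);
} catch (error) {
  console.error('Provider check failed:', error);
}
```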