Integrate `@node-llm/core` (v1.9.0+), a production-grade, provider-agnostic LLM engine for Node.js that exposes a unified API for 540+ models across OpenAI, Anthropic, Gemini, DeepSeek, OpenRouter, Ollama, and AWS Bedrock, with support for chat, streaming, tools, structured output, and multimodal capabilities.
This guide sets up `@node-llm/core` in Node.js/TypeScript projects to provide standardized LLM interactions with automatic tool execution, structured output via Zod, streaming support, multimodal capabilities, and built-in security circuit breakers.
Install the core package:
```bash
npm install @node-llm/core
```
For persistence (optional), add the ORM:
```bash
npm install @node-llm/orm
```
Create or update `.env` with required API keys. The library automatically reads from environment variables:
```env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
DEEPSEEK_API_KEY=...
OPENROUTER_API_KEY=...
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1
```
Create a basic chat interface:
```typescript
import { createLLM } from "@node-llm/core";
// Initialize LLM (reads API key from env automatically)
const llm = createLLM({
  provider: "openai", // or "anthropic", "gemini", "deepseek", "openrouter", "ollama", "bedrock"
  requestTimeout: 15000, // 15s timeout
  maxTokens: 4096, // cost protection
  maxRetries: 3, // retry protection
  maxToolCalls: 5 // infinite-loop protection
});

// Standard request
const response = await llm.chat("gpt-4o").ask("What is the speed of light?");
console.log(response.content);

// Streaming
for await (const chunk of llm.chat("gpt-4o").stream("Tell me a story")) {
  process.stdout.write(chunk.content);
}
```
Implement type-safe structured responses:
```typescript
import { createLLM, z } from "@node-llm/core";
const llm = createLLM({ provider: "openai" });
// Define schema
const PlayerSchema = z.object({
  name: z.string(),
  powerLevel: z.number(),
  abilities: z.array(z.string()),
  class: z.enum(["warrior", "mage", "rogue"])
});
// Get typed response
const chat = llm.chat("gpt-4o-mini").withSchema(PlayerSchema);
const response = await chat.ask("Generate a level 10 warrior character");
// Fully typed response
console.log(response.parsed.name); // string
console.log(response.parsed.powerLevel); // number
console.log(response.parsed.abilities); // string[]
```
Implement automated tool execution:
```typescript
import { createLLM } from "@node-llm/core";
const llm = createLLM({ provider: "openai" });
const tools = [
  {
    name: "get_weather",
    description: "Get current weather for a location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string", description: "City name" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] }
      },
      required: ["location"]
    },
    execute: async ({ location, unit = "celsius" }) => {
      // Your weather API call here
      return { temperature: 22, unit, conditions: "sunny" };
    }
  }
];
// Automatic tool loop execution
const response = await llm
  .chat("gpt-4o")
  .withTools(tools)
  .ask("What's the weather in Tokyo?");
console.log(response.content); // "It's currently 22°C and sunny in Tokyo"
```
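The automatic tool loop above works by repeatedly feeding tool results back to the model until it produces a final answer, bounded by `maxToolCalls`. A simplified, library-agnostic sketch of that loop (the `mockModel` and `runToolLoop` functions are illustrative, not part of the library):

```typescript
interface ToolCall { name: string; args: Record<string, unknown>; }
interface ModelTurn { toolCall?: ToolCall; content?: string; }

// A stand-in for the model: first asks for the weather tool, then answers.
function mockModel(transcript: string[]): ModelTurn {
  if (!transcript.some((m) => m.startsWith("tool:"))) {
    return { toolCall: { name: "get_weather", args: { location: "Tokyo" } } };
  }
  return { content: "It's 22°C and sunny in Tokyo." };
}

// Simplified tool loop: execute tools until the model returns plain content,
// capped at maxToolCalls to prevent infinite loops.
function runToolLoop(
  tools: Record<string, (args: Record<string, unknown>) => string>,
  maxToolCalls = 3
): string {
  const transcript: string[] = ["user: What's the weather in Tokyo?"];
  for (let i = 0; i <= maxToolCalls; i++) {
    const turn = mockModel(transcript);
    if (turn.content) return turn.content;
    if (turn.toolCall) {
      const result = tools[turn.toolCall.name](turn.toolCall.args);
      transcript.push(`tool:${turn.toolCall.name} -> ${result}`);
    }
  }
  throw new Error("maxToolCalls exceeded");
}

const answer = runToolLoop({
  get_weather: () => JSON.stringify({ temperature: 22, conditions: "sunny" })
});
console.log(answer); // "It's 22°C and sunny in Tokyo."
```

This is the pattern the library automates: with `.withTools()` you never write the loop yourself.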
Implement vision and audio capabilities:
```typescript
import { createLLM } from "@node-llm/core";
import fs from "fs";
const llm = createLLM({ provider: "openai" });
// Vision
const imageBuffer = fs.readFileSync("./image.jpg");
const visionResponse = await llm.chat("gpt-4o").ask({
  text: "What's in this image?",
  images: [imageBuffer]
});

// Audio transcription (Whisper)
const audioBuffer = fs.readFileSync("./audio.mp3");
const transcription = await llm.transcribe(audioBuffer, {
  model: "whisper-1",
  language: "en"
});
console.log(transcription.text);
```
Demonstrate provider flexibility:
```typescript
import { createLLM } from "@node-llm/core";
// Same code, different providers
const providers = ["openai", "anthropic", "gemini", "deepseek"];
for (const provider of providers) {
  const llm = createLLM({ provider });
  const response = await llm.chat().ask("Hello!");
  console.log(`${provider}: ${response.content}`);
}
// OpenRouter for 540+ models
const router = createLLM({ provider: "openrouter" });
const response = await router
  .chat("anthropic/claude-3-opus")
  .ask("Complex reasoning task...");
```
Implement multi-turn conversations:
```typescript
import { createLLM } from "@node-llm/core";
const llm = createLLM({ provider: "anthropic" });
const messages = [
  { role: "user", content: "What is 2+2?" },
  { role: "assistant", content: "4" },
  { role: "user", content: "What about multiplied by 3?" }
];
const response = await llm.chat("claude-3-5-sonnet-20241022").ask(messages);
console.log(response.content); // "12"
```
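Multi-turn chat works by resending the full message history with each request. A minimal, library-agnostic sketch of a history accumulator you might wrap around the call above (the `History` class is illustrative, not part of the library):

```typescript
type Role = "user" | "assistant";
interface Message { role: Role; content: string; }

// Accumulates turns so each request carries the full conversation context.
class History {
  private messages: Message[] = [];

  add(role: Role, content: string): this {
    this.messages.push({ role, content });
    return this;
  }

  // Returns a copy suitable for passing to .ask(messages).
  toMessages(): Message[] {
    return [...this.messages];
  }
}

const history = new History()
  .add("user", "What is 2+2?")
  .add("assistant", "4")
  .add("user", "What about multiplied by 3?");

console.log(history.toMessages().length); // 3
```

After each response, append the assistant's reply with `history.add("assistant", response.content)` before the next turn.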
Integrate `@node-llm/orm` for automatic database tracking:
```typescript
import { createLLM } from "@node-llm/core";
import { OrmAdapter } from "@node-llm/orm";
import { PrismaClient } from "@prisma/client";
const prisma = new PrismaClient();
const orm = new OrmAdapter(prisma);
const llm = createLLM({
  provider: "openai",
  adapter: orm // Automatically saves history, metrics, and tool calls
});
// All interactions now tracked in database
const response = await llm.chat("gpt-4o").ask("Hello!");
```
Implement robust error handling:
```typescript
import { createLLM } from "@node-llm/core";
const llm = createLLM({
  provider: "openai",
  requestTimeout: 10000, // request-timeout protection
  maxTokens: 2048, // cost-overrun protection
  maxRetries: 2, // retry-storm protection
  maxToolCalls: 3 // infinite tool-loop protection
});

try {
  const response = await llm.chat("gpt-4o").ask("Your prompt");
  console.log(response.content);
} catch (error: any) {
  if (error.code === "TIMEOUT") {
    console.error("Request timed out");
  } else if (error.code === "MAX_TOKENS_EXCEEDED") {
    console.error("Token limit exceeded");
  } else {
    console.error("LLM error:", error.message);
  }
}
```
1. **Provider Selection**: Choose provider based on requirements (OpenAI for reasoning, Anthropic for extended thinking, Gemini for video, Ollama for local/offline, OpenRouter for model variety)
2. **Model Selection**: Specify model in `.chat(modelName)` or use provider defaults
3. **Security**: Always configure circuit breakers (timeout, maxTokens, maxRetries, maxToolCalls) in production
4. **Streaming**: Use `.stream()` for real-time responses, `.ask()` for complete responses
5. **Tool Execution**: Library handles tool loops automatically—no manual recursion needed
6. **Type Safety**: Use `.withSchema()` for structured output with full TypeScript inference
7. **Multimodal**: Pass buffers/URLs for images, audio, video (provider-dependent)
8. **Environment**: Never hardcode API keys—always use environment variables
9. **Testing**: Use Ollama provider for local development/testing without API costs
10. **Documentation**: Reference [node-llm.eshaiju.com](https://node-llm.eshaiju.com/) for advanced features (custom providers, RAG, embeddings)
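Tips 1 and 9 suggest switching providers by environment, e.g. Ollama locally to avoid API costs and a hosted provider in production. A hedged sketch of a selector helper (the `pickProvider` function is illustrative, not part of the library):

```typescript
// Provider names supported by createLLM, per the examples above.
type Provider =
  | "openai" | "anthropic" | "gemini" | "deepseek"
  | "openrouter" | "ollama" | "bedrock";

// Map the runtime environment to a provider: local Ollama for
// development and tests (no API key, no cost), OpenAI in production.
function pickProvider(nodeEnv: string | undefined): Provider {
  switch (nodeEnv) {
    case "production":
      return "openai";
    case "test":
    case "development":
    default:
      return "ollama";
  }
}

console.log(pickProvider("production")); // "openai"
console.log(pickProvider(undefined));    // "ollama"
```

The result feeds straight into `createLLM({ provider: pickProvider(process.env.NODE_ENV) })`, keeping the rest of the calling code provider-agnostic.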