Generate production-ready LangChain.js code for integrating Google's Gemini models (chat, vision, and embeddings), with proper installation commands, environment configuration, multimodal support, and best practices.
This skill helps you implement Google Gemini AI capabilities in LangChain.js applications. It generates proper installation commands, environment configuration, and TypeScript code for:
- Chat models (`ChatGoogleGenerativeAI`)
- Vision / multimodal input (`gemini-pro-vision`)
- Embeddings (`GoogleGenerativeAIEmbeddings`)
When a user requests Google Gemini integration with LangChain, follow these steps:
Determine what the user needs: chat, vision (image analysis), embeddings, or a combination of the three.
Always provide installation with dependency alignment:
```bash
npm install @langchain/google-genai @langchain/core
```
If the project uses yarn or pnpm, convert accordingly.
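For reference, the equivalent commands are:
```bash
yarn add @langchain/google-genai @langchain/core
pnpm add @langchain/google-genai @langchain/core
```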
For projects with multiple LangChain packages, include package.json overrides to ensure core version alignment:
```json
{
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```
Include environment setup:
```bash
export GOOGLE_API_KEY=your-api-key
```
Or for .env files:
```
GOOGLE_API_KEY=your-api-key
```
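If the project loads `.env` files at runtime, here is a minimal sketch using the `dotenv` package (an assumed dev dependency) together with the explicit `apiKey` option:
```typescript
import "dotenv/config"; // loads GOOGLE_API_KEY from .env
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  // Optional: pass the key explicitly; if omitted, the GOOGLE_API_KEY
  // environment variable is used
  apiKey: process.env.GOOGLE_API_KEY,
});
```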
#### For Chat Models:
```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  maxOutputTokens: 2048,
});

// invoke takes an array of messages
const response = await model.invoke([new HumanMessage("Hello world!")]);
```
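Chat models also expose the standard runnable streaming interface; a minimal sketch (the cast assumes text-only chunks):
```typescript
// Stream the reply token by token instead of waiting for the full response
const stream = await model.stream("Tell me a short story.");
for await (const chunk of stream) {
  process.stdout.write(chunk.content as string);
}
```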
#### For Vision Models:
```typescript
import fs from "fs";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage } from "@langchain/core/messages";

const vision = new ChatGoogleGenerativeAI({
  model: "gemini-pro-vision",
  maxOutputTokens: 2048,
});

// Read the image and base64-encode it for inline transport
const image = fs.readFileSync("./image.jpg").toString("base64");

const input = [
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "Describe the following image.",
      },
      {
        type: "image_url",
        // MIME type must match the file being read (JPEG here)
        image_url: `data:image/jpeg;base64,${image}`,
      },
    ],
  }),
];

const res = await vision.invoke(input);
```
#### For Embeddings:
```typescript
import { GoogleGenerativeAIEmbeddings } from "@langchain/google-genai";
import { TaskType } from "@google/generative-ai";

const embeddings = new GoogleGenerativeAIEmbeddings({
  modelName: "embedding-001", // 768 dimensions
  taskType: TaskType.RETRIEVAL_DOCUMENT,
  title: "Document title", // only used with the RETRIEVAL_DOCUMENT task type
});

const res = await embeddings.embedQuery("Your query text");
```
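For indexing multiple documents, `embedDocuments` batches texts in one call; a short sketch reusing the instance above:
```typescript
// One vector per input string (768 dimensions each for embedding-001)
const vectors = await embeddings.embedDocuments([
  "First document text",
  "Second document text",
]);
console.log(vectors.length, vectors[0].length); // 2 768
```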
When generating vision model code, explain that `image_url` supports:
- a base64 data URL passed as a plain string (as in the example above), e.g. `data:image/jpeg;base64,...`
- an object of the form `{ url: "..." }` in place of the plain string
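The object form looks like this (a sketch reusing the base64 `image` variable from the vision example above):
```typescript
import { HumanMessage } from "@langchain/core/messages";

// Equivalent to passing the data URL string directly
const message = new HumanMessage({
  content: [
    { type: "text", text: "Describe the following image." },
    { type: "image_url", image_url: { url: `data:image/jpeg;base64,${image}` } },
  ],
});
```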
Document key configuration parameters, as shown in the sketch below:
- `model` (or the legacy `modelName`): which Gemini model to call
- `maxOutputTokens`: cap on generated tokens
- `temperature`, `topP`, `topK`: sampling controls
- `apiKey`: overrides the `GOOGLE_API_KEY` environment variable
- `taskType` and `title`: embeddings-only options (see above)
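A configuration sketch with illustrative values (tune them for the user's workload):
```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const configured = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  maxOutputTokens: 2048,
  temperature: 0.7, // higher values produce more varied output
  topP: 0.95,
  topK: 40,
  apiKey: process.env.GOOGLE_API_KEY,
});
```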
Include basic error handling for production code:
```typescript
try {
  const response = await model.invoke(message);
  console.log(response.content);
} catch (error) {
  console.error("Error calling Gemini:", error);
  throw error;
}
```
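For transient failures, the model constructors also accept a `maxRetries` option (retries use LangChain's built-in exponential backoff); a minimal sketch:
```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

// Retry failed requests up to 3 times before surfacing the error
const resilient = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  maxRetries: 3,
});
```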
**Example 1: Basic chat integration**
User: "Add Google Gemini chat to my LangChain app"
→ Generate installation, environment setup, and basic ChatGoogleGenerativeAI usage
**Example 2: Image analysis**
User: "I need to analyze images with Gemini"
→ Generate vision model setup with multimodal input handling and base64 encoding
**Example 3: Semantic search**
User: "Create embeddings for my documents using Gemini"
→ Generate GoogleGenerativeAIEmbeddings with RETRIEVAL_DOCUMENT task type
**Example 4: Full stack integration**
User: "Set up Gemini with chat, vision, and embeddings"
→ Generate complete setup with all three capabilities and shared configuration
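For Example 4, a sketch of shared configuration across all three capabilities (the single-module layout is an illustrative assumption, not part of the skill):
```typescript
import {
  ChatGoogleGenerativeAI,
  GoogleGenerativeAIEmbeddings,
} from "@langchain/google-genai";

// Resolve the key once and share it across every Gemini client
const apiKey = process.env.GOOGLE_API_KEY;

export const chat = new ChatGoogleGenerativeAI({ model: "gemini-pro", apiKey });
export const vision = new ChatGoogleGenerativeAI({
  model: "gemini-pro-vision",
  apiKey,
});
export const embeddings = new GoogleGenerativeAIEmbeddings({
  modelName: "embedding-001",
  apiKey,
});
```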