Install and configure @langchain/aws for AWS Bedrock Converse chat models with streaming, service tiers, and application inference profiles
This skill helps you install and configure the `@langchain/aws` package for integrating AWS Bedrock Converse models into your LangChain.js application.
1. **Install the package**
```bash
npm install @langchain/aws
```
2. **Ensure dependency alignment** by adding these fields to `package.json`:
```json
{
  "dependencies": {
    "@langchain/aws": "^1.2.2",
    "@langchain/core": "^0.3.0"
  },
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```
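If you want to add these fields from a setup script rather than by hand, here is a minimal sketch. The `withCorePins` helper name is illustrative; the field names and default version string come from the snippet above.

```typescript
// Adds the @langchain/core pinning fields shown above to a parsed
// package.json object. Helper name and default version are illustrative.
type PackageJson = Record<string, any>;

function withCorePins(pkg: PackageJson, coreVersion = "^0.3.0"): PackageJson {
  return {
    ...pkg,
    resolutions: { ...pkg.resolutions, "@langchain/core": coreVersion },
    overrides: { ...pkg.overrides, "@langchain/core": coreVersion },
    pnpm: {
      ...pkg.pnpm,
      overrides: { ...pkg.pnpm?.overrides, "@langchain/core": coreVersion },
    },
  };
}
```

Existing entries in `resolutions`, `overrides`, and `pnpm.overrides` are preserved; only the `@langchain/core` pin is added or updated.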
3. **Configure AWS authentication** using one of these methods:
**Method A: Access Keys**
```bash
export BEDROCK_AWS_REGION=us-east-1
export BEDROCK_AWS_SECRET_ACCESS_KEY=your-secret-key
export BEDROCK_AWS_ACCESS_KEY_ID=your-access-key
```
**Method B: Bearer Token (API Key)**
```bash
export BEDROCK_AWS_REGION=us-east-1
export AWS_BEARER_TOKEN_BEDROCK=your-bearer-token
```
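Since only one of the two methods is needed, a small check of the environment can report which one is configured before the model is constructed. The `resolveAuthMethod` helper is hypothetical; the variable names are the ones from Methods A and B above.

```typescript
// Reports which Bedrock auth method the environment is configured for.
// Variable names match Methods A and B above; the helper itself is
// illustrative and not part of @langchain/aws.
type AuthMethod = "bearer-token" | "access-keys" | "none";

function resolveAuthMethod(env: Record<string, string | undefined>): AuthMethod {
  if (env.AWS_BEARER_TOKEN_BEDROCK) return "bearer-token";
  if (env.BEDROCK_AWS_ACCESS_KEY_ID && env.BEDROCK_AWS_SECRET_ACCESS_KEY) {
    return "access-keys";
  }
  return "none";
}
```

Call it as `resolveAuthMethod(process.env)` and fail fast with a clear message when it returns `"none"`.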
**Basic usage:**
```typescript
import { ChatBedrockConverse } from "@langchain/aws";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatBedrockConverse({
  region: process.env.BEDROCK_AWS_REGION ?? "us-east-1",
  credentials: {
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  },
});

// `invoke` accepts a string or an array of messages
const response = await model.invoke([new HumanMessage("Hello world!")]);
```
**Streaming:**
```typescript
const model = new ChatBedrockConverse({
  region: process.env.BEDROCK_AWS_REGION ?? "us-east-1",
  credentials: {
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  },
});

const stream = await model.stream([new HumanMessage("Hello world!")]);
for await (const chunk of stream) {
  console.log(chunk.content);
}
```
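If you need the streamed output as one string (for logging or a final response) rather than printed chunk by chunk, the chunks can be concatenated as they arrive. This sketch assumes the simple case where each chunk's `content` is a string, as in the text example above; multimodal content arrays would need extra handling.

```typescript
// Concatenates the text content of streamed chunks into one string.
// Assumes string `content` fields; non-string content is skipped.
async function collectText(
  stream: AsyncIterable<{ content: unknown }>
): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    if (typeof chunk.content === "string") text += chunk.content;
  }
  return text;
}
```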
Control latency, cost, and capacity with service tiers:
**Set at construction:**
```typescript
const model = new ChatBedrockConverse({
  region: "us-east-1",
  credentials: { /* ... */ },
  serviceTier: "priority", // priority | default | flex | reserved
});
```
**Override per invocation:**
```typescript
const response = await model.invoke("Translate this", {
  serviceTier: "flex",
});
```
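When the tier should depend on the request (user-facing vs. background vs. batch work), the per-invocation override can be driven by a small mapping. The `tierFor` helper and the workload categories are assumptions; the tier values are the ones listed above.

```typescript
// Maps a workload category to a Bedrock service tier.
// Categories are illustrative; tier names are from the list above.
type ServiceTier = "priority" | "default" | "flex" | "reserved";
type Workload = "interactive" | "background" | "batch";

function tierFor(workload: Workload): ServiceTier {
  switch (workload) {
    case "interactive":
      return "priority"; // lowest latency for user-facing calls
    case "background":
      return "default";
    case "batch":
      return "flex"; // cheaper, tolerates delays
  }
}
```

Usage, following the override pattern above: `await model.invoke("Translate this", { serviceTier: tierFor("batch") })`.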
Route requests across regions using application inference profiles:
```typescript
const model = new ChatBedrockConverse({
  region: "us-east-1",
  model: "anthropic.claude-3-haiku-20240307-v1:0",
  applicationInferenceProfile:
    "arn:aws:bedrock:eu-west-1:123456789102:application-inference-profile/fm16bt65tzgx",
  credentials: {
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  },
});

const response = await model.invoke([new HumanMessage("Hello!")]);
```
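Because the profile ARN encodes the target region and account, it can be sanity-checked before constructing the model. The `parseProfileArn` helper is illustrative; the ARN shape follows the example above.

```typescript
// Parses an application inference profile ARN of the form used above:
// arn:aws:bedrock:<region>:<account>:application-inference-profile/<id>
// Returns null when the string does not match. Helper is illustrative.
function parseProfileArn(
  arn: string
): { region: string; account: string; profileId: string } | null {
  const parts = arn.split(":");
  if (parts.length !== 6 || parts[0] !== "arn" || parts[2] !== "bedrock") {
    return null;
  }
  const [resourceType, profileId] = parts[5].split("/");
  if (resourceType !== "application-inference-profile" || !profileId) {
    return null;
  }
  return { region: parts[3], account: parts[4], profileId };
}
```

Rejecting a malformed ARN at startup gives a clearer error than a failed Bedrock call at request time.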
**Important:** Always provide the `model` parameter even when using an inference profile, as it ensures proper cost and latency tracking in LangSmith.
When applying this skill:
1. Check whether the project has a `package.json`
2. Install `@langchain/aws` using the appropriate package manager (npm, yarn, or pnpm)
3. Add dependency resolution fields to `package.json`
4. Check for existing AWS environment variables or create a `.env` file
5. Create or update the chat model initialization code
6. If streaming is requested, implement the streaming pattern
7. If service tiers are mentioned, add the `serviceTier` configuration
8. If cross-region routing is needed, configure `applicationInferenceProfile`
9. Add appropriate imports from `@langchain/core/messages`
10. Test the basic invocation to ensure connectivity
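Step 2's package-manager choice can be automated by checking which lockfile the project has. The `detectPackageManager` and `installCommand` helpers are assumptions; the lockfile names are the standard ones for each tool.

```typescript
// Picks a package manager based on which lockfile is present,
// falling back to npm when none is found. Helpers are illustrative.
type PackageManager = "npm" | "yarn" | "pnpm";

function detectPackageManager(files: string[]): PackageManager {
  if (files.includes("pnpm-lock.yaml")) return "pnpm";
  if (files.includes("yarn.lock")) return "yarn";
  return "npm"; // package-lock.json or no lockfile
}

function installCommand(pm: PackageManager): string {
  const commands: Record<PackageManager, string> = {
    npm: "npm install @langchain/aws",
    yarn: "yarn add @langchain/aws",
    pnpm: "pnpm add @langchain/aws",
  };
  return commands[pm];
}
```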