AI agent instructions for Bun-first development with LLM conversation systems. Prefers Bun over Node.js, npm, and Vite. Includes guidance for multi-participant AI conversations with structured responses.
This skill configures AI agents to work with Bun-first codebases, specifically for LLM conversation systems that enable multiple AI participants to have natural conversations.
When working with this codebase, first understand its two-tier architecture, then apply the Bun-first defaults below.
Always default to Bun instead of Node.js tooling:
1. **Execution**: Use `bun <file>` instead of `node <file>` or `ts-node <file>`
2. **Testing**: Use `bun test` instead of `jest` or `vitest`
3. **Building**: Use `bun build <file.html|file.ts|file.css>` instead of webpack or esbuild
4. **Package Management**: Use `bun install` instead of npm/yarn/pnpm
5. **Scripts**: Use `bun run <script>` instead of npm/yarn/pnpm run
6. **Environment**: Bun automatically loads `.env` files, so don't use the dotenv package
Prefer Bun's built-in APIs over Node.js equivalents: `Bun.serve()` instead of Express, `bun:sqlite` instead of better-sqlite3, `Bun.file()` instead of `node:fs` reads (sketched after the commands below). Typical CLI workflow:
```bash
bun install
bun run index.ts
bun --hot index.ts
bun test
```
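A minimal sketch of those built-in APIs in use; the file paths and table name are illustrative:

```typescript
import { Database } from "bun:sqlite"; // ships with Bun, no install needed

// Bun.file replaces fs.readFile for simple reads (path is illustrative)
const config = await Bun.file("./config.json").json();

// bun:sqlite replaces better-sqlite3
const db = new Database("app.sqlite");
db.run("CREATE TABLE IF NOT EXISTS messages (id INTEGER PRIMARY KEY, body TEXT)");

// Bun.serve replaces an Express server for simple cases
Bun.serve({
  port: 3000,
  fetch: () => Response.json(config),
});
```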
Tests use the built-in `bun:test` runner:

```typescript
import { test, expect } from "bun:test";

test("conversation response parsing", () => {
  // Minimal fixture standing in for a parsed LLM response
  const response = { message: "Hello", nextSpeaker: "ExpectedSpeaker" };
  expect(response.nextSpeaker).toBe("ExpectedSpeaker");
});
```
Use HTML imports with `Bun.serve()` instead of Vite. HTML files can directly import .tsx, .jsx, .js, and .css files:
**Server (index.ts):**
```typescript
import index from "./index.html";

Bun.serve({
  routes: {
    "/": index,
    "/api/users/:id": {
      GET: (req) => new Response(JSON.stringify({ id: req.params.id })),
    },
  },
  websocket: {
    open: (ws) => ws.send("Connected"),
    message: (ws, message) => ws.send(message),
  },
  development: {
    hmr: true,
    console: true,
  },
});
```
**HTML (index.html):**
```html
<html>
  <body>
    <h1>Hello, world!</h1>
    <script type="module" src="./frontend.tsx"></script>
  </body>
</html>
```
**Frontend (frontend.tsx):**
```tsx
import React from "react";
import { createRoot } from "react-dom/client";
import "./index.css"; // Direct CSS import

export default function Frontend() {
  return <h1>Hello, world!</h1>;
}

const root = createRoot(document.body);
root.render(<Frontend />);
```
Run with: `bun --hot ./index.ts`
Each LLM participant requires a `name`, a `model`, and a `persona` (detailed under "When adding participants" below). The conversation flow, sketched after this list, works as follows:
1. System sets initial topic and first speaker
2. Current speaker generates response using persona and conversation history
3. Speaker nominates next participant
4. Turn management transfers speaking rights
5. Loop continues until conversation ends
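A hypothetical sketch of that loop; the `Turn` shape and function names are illustrative, not this codebase's actual exports:

```typescript
interface Turn {
  speaker: string;
  message: string;
  nextSpeaker: string | null; // null ends the conversation
}

async function runConversation(
  firstSpeaker: string,
  generate: (speaker: string, history: Turn[]) => Promise<Turn>,
): Promise<Turn[]> {
  const history: Turn[] = [];
  let current: string | null = firstSpeaker;

  // Each iteration: current speaker responds, then nominates a successor
  while (current !== null) {
    const turn = await generate(current, history); // persona + history -> response
    history.push(turn);
    current = turn.nextSpeaker; // turn management transfers speaking rights
  }
  return history;
}
```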
When adding participants (example entry sketched after this list):
1. Ensure `name` matches a value in `SpeakerName` enum (schemas.ts)
2. Use valid OpenRouter model identifiers for `model`
3. Provide clear character instructions in `persona`
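Under those rules, a hypothetical participant entry might look like this; the name, model identifier, and field layout are assumptions, not the codebase's actual values:

```typescript
// Hypothetical participant entry; all values are illustrative
const participant = {
  name: "Historian",                    // must match a SpeakerName enum value in schemas.ts
  model: "anthropic/claude-3.5-sonnet", // a valid OpenRouter model identifier
  persona: "A skeptical historian who asks probing follow-up questions.",
};
```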
Two response-parsing approaches exist (see rule 4 under the guidelines below): **Structured** (index.ts) and **Manual** (SpeakingRightManager).
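A hedged sketch of the two approaches, assuming `schemas.ts` defines a zod schema and the manual path scans the reply text; the schema shape, enum values, and the `Next speaker:` convention are all illustrative:

```typescript
import { z } from "zod";

// Structured: validate raw LLM output against a schema (shape is illustrative)
const ConversationResponse = z.object({
  message: z.string(),
  nextSpeaker: z.enum(["Alice", "Bob", "Moderator"]), // stands in for SpeakerName
});

function parseStructured(raw: string) {
  return ConversationResponse.parse(JSON.parse(raw));
}

// Manual: extract a nominated speaker from free-form text
function parseManual(raw: string): string | null {
  const match = raw.match(/Next speaker:\s*(\w+)/i);
  return match ? match[1] : null;
}
```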
Required environment variable:
```
OPENROUTER_API_KEY=your_key_here
```
Bun automatically loads .env files - no need for dotenv package.
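In code, the key is available with no extra imports:

```typescript
// No dotenv needed; Bun loads .env automatically
const apiKey = Bun.env.OPENROUTER_API_KEY;
if (!apiKey) throw new Error("OPENROUTER_API_KEY is not set");
```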
1. **Always prefer Bun**: Don't suggest Node.js, npm, or traditional bundlers unless they are genuinely unavoidable
2. **Schema Synchronization**: When adding participants, update SpeakerName enum in schemas.ts
3. **Model Selection**: Use valid OpenRouter model identifiers
4. **Response Parsing**: Match parsing approach (structured vs manual) to the conversation system being used
5. **History Limits**: Respect configured message limits to manage token usage (sketched below)
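For rule 5, a minimal trimming sketch; the limit value and helper name are illustrative, not the project's configured ones:

```typescript
// Keep only the most recent turns to bound token usage
const MAX_HISTORY = 20; // illustrative; use the configured limit

function trimHistory<T>(history: T[]): T[] {
  return history.slice(-MAX_HISTORY);
}
```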
When working with files in this codebase, apply the Bun-first principles above.
For more information, consult Bun API documentation at `node_modules/bun-types/docs/**.md`.