Configure Aider to work with a local LiteLLM proxy that routes to multiple specialized AI agents: an architect/coder, a security reviewer, a CI assistant, a researcher, and a power-burst agent. The setup defaults to the local Qwen route, provides quick-switch commands for each agent, and uses sensible defaults tuned for M2 16GB machines.
Create a file named `.aider.conf.yml` in your project root with the following content:
```yaml
model: openai/local/llama
openai-api-base: http://127.0.0.1:4000/v1
openai-api-key: sk-anything
pretty: true
stream: true
auto-commits: false
git: true
edit-format: diff
show-diffs: true
max-chat-history-tokens: 8192
cache-prompts: true
restore-chat-history: false
encoding: utf-8
ignore-globs: |
  *.log
  *.tmp
  **/node_modules/**
  **/dist/**
  **/build/**
  **/.vite/**
  **/.next/**
  **/coverage/**
  rag/index/*.duckdb*
  **/*.pyc
  **/__pycache__/**
```
This configuration assumes a LiteLLM proxy running at `http://127.0.0.1:4000` with the routes listed in the agent table below. If you don't have LiteLLM set up yet, install it (for example with `pip install 'litellm[proxy]'`) and configure routes that match your agents.
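As a starting point, a LiteLLM proxy config along these lines would expose the route names Aider expects. The backing model names, ports, and local servers shown here are assumptions to adapt to your own deployment, not part of this skill:

```yaml
# litellm_config.yaml -- sketch only; backing models, ports, and paths
# are placeholders for whatever local servers you actually run.
model_list:
  - model_name: local/llama            # Guy: architect/coder
    litellm_params:
      model: openai/qwen2.5-coder-7b-instruct
      api_base: http://127.0.0.1:8080/v1   # e.g. a llama.cpp server
      api_key: sk-anything
  - model_name: local/llama-cpu        # Aegis: security review, CPU fallback
    litellm_params:
      model: openai/qwen2.5-coder-7b-instruct
      api_base: http://127.0.0.1:8081/v1
      api_key: sk-anything
  - model_name: local/llama-small      # Hermes: quick CI/PR tasks
    litellm_params:
      model: openai/qwen2.5-coder-1.5b-instruct
      api_base: http://127.0.0.1:8082/v1
      api_key: sk-anything
  - model_name: gemini/1.5-pro         # Elara: research (cloud)
    litellm_params:
      model: gemini/gemini-1.5-pro
      api_key: os.environ/GOOGLE_API_KEY
  - model_name: xai/grok-code-fast-1   # Power burst (cloud)
    litellm_params:
      model: xai/grok-code-fast-1
      api_key: os.environ/XAI_API_KEY
```

Start the proxy with `litellm --config litellm_config.yaml --port 4000`, then verify it answers with `curl http://127.0.0.1:4000/v1/models`.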
```bash
aider
```
This uses the default `openai/local/llama` route (fast coder).
Override the model with the `--model` flag to use specialized agents:
**Architect/Coder (default):**
```bash
aider --model openai/local/llama
```
**Security Review (Aegis):**
```bash
aider --model openai/local/llama-cpu
```
**Quick CI/PR Tasks (Hermes):**
```bash
aider --model openai/local/llama-small
```
**Research (Elara, requires GOOGLE_API_KEY):**
```bash
aider --model openai/gemini/1.5-pro
```
**Power Burst (requires XAI_API_KEY):**
```bash
aider --model openai/xai/grok-code-fast-1
```
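To make switching even quicker, you can wrap these invocations in shell functions. The function names below are just illustrative, mirroring the agent table; the cloud routes need their API keys exported first:

```shell
# Quick-switch helpers for ~/.bashrc or ~/.zshrc.
# Cloud routes need keys exported first (values are placeholders):
#   export GOOGLE_API_KEY=...   # Elara (Gemini)
#   export XAI_API_KEY=...      # Power burst (xAI)
guy()    { aider --model openai/local/llama "$@"; }           # architect/coder
aegis()  { aider --model openai/local/llama-cpu "$@"; }       # security review
hermes() { aider --model openai/local/llama-small "$@"; }     # quick CI/PR
elara()  { aider --model openai/gemini/1.5-pro "$@"; }        # research
power()  { aider --model openai/xai/grok-code-fast-1 "$@"; }  # heavy lifting
```

For example, `aegis src/auth/*.ts` opens a security-review session scoped to the auth code.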
Add project-specific patterns to the `ignore-globs` section. Common patterns (node_modules, build artifacts, cache directories) are already included.
| Agent | Model Route | Purpose | Hardware |
|-------|-------------|---------|----------|
| Guy | `openai/local/llama` | Architect/Coder | Local Qwen (low VRAM) |
| Aegis | `openai/local/llama-cpu` | Security review | CPU fallback |
| Hermes | `openai/local/llama-small` | Quick CI/PR tasks | Small/fast model |
| Elara | `openai/gemini/1.5-pro` | Research | Cloud (Google API) |
| Power | `openai/xai/grok-code-fast-1` | Heavy lifting | Cloud (xAI) |
```bash
aider                                                 # default architect/coder route
aider --model openai/local/llama-cpu src/auth/*.ts    # security pass on auth code
aider --model openai/gemini/1.5-pro README.md docs/   # research/docs session
```