Agent-centric Aider configuration that enables role-based model switching through a LiteLLM proxy. Routes between local (Qwen) and cloud models (Gemini, xAI) based on task requirements, and is tuned for M2 16GB systems.
This configuration sets up Aider with:

- A single OpenAI-compatible endpoint (the LiteLLM proxy) shared by every agent role
- Diff-based edits, prompt caching, and manual commits
- Ignore globs that keep build artifacts and caches out of the chat context

Ensure you have:

- Aider installed (`pip install aider-chat`)
- A LiteLLM proxy running at `http://127.0.0.1:4000`
- The `qwen2.5-coder` variants pulled in Ollama (see the setup sketch below)
- API keys for Gemini and xAI available to the proxy, if you use the cloud roles
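If the proxy is not already running, a minimal setup sketch looks like this. The config filename `litellm.config.yaml` is an assumption; use whatever path holds the `model_list` shown later in this document.

```bash
# Pull the local model variants referenced by the LiteLLM routes.
ollama pull qwen2.5-coder        # architect/coder (guy)
ollama pull qwen2.5-coder:7b     # security review (aegis)
ollama pull qwen2.5-coder:3b     # CI/PR tasks (hermes)

# Start the LiteLLM proxy on the port the Aider config expects.
litellm --config litellm.config.yaml --port 4000
```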
Create a `.aider.conf.yml` file in your project root with these settings:
```yaml
model: openai/local/llama
openai-api-base: http://127.0.0.1:4000/v1
openai-api-key: sk-anything
pretty: true
stream: true
auto-commits: false
git: true
edit-format: diff
show-diffs: true
max-chat-history-tokens: 8192
cache-prompts: true
restore-chat-history: false
encoding: utf-8
ignore-globs: |
  *.log
  *.tmp
  **/node_modules/**
  **/dist/**
  **/build/**
  **/.vite/**
  **/.next/**
  **/coverage/**
  rag/index/*.duckdb*
  **/*.pyc
  **/__pycache__/**
```
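Before starting a session, you can confirm the proxy is reachable at the configured base URL. LiteLLM exposes the standard OpenAI-compatible endpoints, and any placeholder key works as long as the proxy has no master key configured:

```bash
# List the model routes the proxy exposes.
curl -s http://127.0.0.1:4000/v1/models \
  -H "Authorization: Bearer sk-anything"
```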
Override the model for specific agent roles:
**Architect/Coder (guy)**
```bash
aider --model openai/local/llama
```
**Security Review (aegis)**
```bash
aider --model openai/local/llama-cpu
```
**CI/PR Tasks (hermes)**
```bash
aider --model openai/local/llama-small
```
**Research (elara)**
```bash
export GOOGLE_API_KEY=your_key
aider --model openai/gemini/1.5-pro
```
**Power Burst (high-capacity cloud)**
```bash
export XAI_API_KEY=your_key
aider --model openai/xai/grok-code-fast-1
```
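To avoid retyping these overrides, the roles can be wrapped in a small shell helper. This `agent` function is a hypothetical convenience, not part of the configuration above:

```bash
# Hypothetical helper: map an agent role to its model route, then
# forward the remaining arguments to aider.
agent() {
  local role="$1"; shift
  local model
  case "$role" in
    guy)    model=openai/local/llama ;;
    aegis)  model=openai/local/llama-cpu ;;
    hermes) model=openai/local/llama-small ;;
    elara)  model=openai/gemini/1.5-pro ;;
    burst)  model=openai/xai/grok-code-fast-1 ;;
    *)      echo "unknown role: $role" >&2; return 1 ;;
  esac
  aider --model "$model" "$@"
}

# Usage: agent aegis src/auth.py
```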
**Standard coding session:**
```bash
aider src/main.py src/utils.py
```
**Security review of changes:**
```bash
aider --model openai/local/llama-cpu src/auth.py
```
**Quick PR fixes:**
```bash
aider --model openai/local/llama-small --message "Fix lint errors"
```
**Research and architecture:**
```bash
aider --model openai/gemini/1.5-pro --architect
```
Excluded patterns: the `ignore-globs` list above keeps logs, temp files, dependency and build directories, test coverage output, caches, and the local RAG index out of Aider's context.
Your LiteLLM config should map the model routes:
```yaml
model_list:
  - model_name: local/llama
    litellm_params:
      model: ollama/qwen2.5-coder
  - model_name: local/llama-cpu
    litellm_params:
      model: ollama/qwen2.5-coder:7b
  - model_name: local/llama-small
    litellm_params:
      model: ollama/qwen2.5-coder:3b
```
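The list above covers only the local routes. The cloud roles used earlier (`gemini/1.5-pro`, `xai/grok-code-fast-1`) need entries as well; the following is a sketch assuming standard LiteLLM provider names. Note that the API keys must be visible to the proxy process (the `os.environ/` syntax makes LiteLLM read them from its own environment at request time), so exporting them in the Aider shell only helps if the proxy is launched from that same shell.

```yaml
# Assumed cloud routes -- append under the same model_list.
model_list:
  - model_name: gemini/1.5-pro
    litellm_params:
      model: gemini/gemini-1.5-pro
      api_key: os.environ/GOOGLE_API_KEY
  - model_name: xai/grok-code-fast-1
    litellm_params:
      model: xai/grok-code-fast-1
      api_key: os.environ/XAI_API_KEY
```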
**Multi-file refactoring with architect mode:**
```bash
aider --model openai/local/llama src/**/*.py --architect
```
**Security-focused code review:**
```bash
aider --model openai/local/llama-cpu --read src/auth.py --read src/middleware.py
```
**Bulk PR fixes with small model:**
```bash
aider --model openai/local/llama-small --yes --message "Apply formatting across codebase"
```
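For unattended hermes runs, such as a CI step, the same flags compose into a non-interactive invocation. This is a sketch; the target file and commit message are placeholders, and the trailing commit is needed because `auto-commits` is disabled in `.aider.conf.yml`:

```bash
# Hypothetical CI step: apply a scripted fix with the small model,
# auto-confirming every prompt.
aider --model openai/local/llama-small --yes \
  --message "Fix lint errors reported by CI" src/main.py

# auto-commits is off, so commit the result explicitly.
git add -A && git commit -m "chore: automated lint fixes"
```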