Multi-agent Aider configuration for local AI coding with role-based model routing, optimized for M2 16GB systems
Configure Aider with a multi-agent architecture using local LLM routing through LiteLLM proxy, optimized for resource-constrained systems.
Sets up Aider to work with locally hosted AI models through a LiteLLM proxy at `http://127.0.0.1:4000/v1`, enabling role-based agent switching (architect, security reviewer, CI tasks, research) with performance tuning for systems with limited VRAM.
1. **Create the configuration file** at `.aider.conf.yml` in your project root with the following content:
```yaml
model: openai/local/llama
openai-api-base: http://127.0.0.1:4000/v1
openai-api-key: sk-anything
pretty: true
stream: true
auto-commits: false
git: true
edit-format: diff
show-diffs: true
max-chat-history-tokens: 8192
cache-prompts: true
restore-chat-history: false
encoding: utf-8
ignore-globs: |
  *.log
  *.tmp
  **/node_modules/**
  **/dist/**
  **/build/**
  **/.vite/**
  **/.next/**
  **/coverage/**
  rag/index/*.duckdb*
  **/*.pyc
  **/__pycache__/**
```
2. **Ensure the LiteLLM proxy is running** on `http://127.0.0.1:4000` with your model routes configured (llama, llama-cpu, llama-small, etc.)
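A minimal proxy configuration for step 2 might look like the sketch below. The route names (`local/llama`, `local/llama-cpu`, `local/llama-small`) match the `--model` flags used later, but the backend ports and the assumption that each model is served by an OpenAI-compatible local server (e.g. llama.cpp or Ollama) are ours; adjust them to however you actually host the models:

```yaml
# config.yaml — start the proxy with: litellm --config config.yaml --port 4000
# (a sketch; api_base ports and backend model names are assumptions)
model_list:
  - model_name: local/llama          # default architect/coder route
    litellm_params:
      model: openai/llama
      api_base: http://127.0.0.1:8080/v1
      api_key: sk-anything
  - model_name: local/llama-cpu      # CPU-only route for security review
    litellm_params:
      model: openai/llama-cpu
      api_base: http://127.0.0.1:8081/v1
      api_key: sk-anything
  - model_name: local/llama-small    # lightweight route for quick CI/PR tasks
    litellm_params:
      model: openai/llama-small
      api_base: http://127.0.0.1:8082/v1
      api_key: sk-anything
```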
3. **Launch Aider** with default configuration:
```bash
aider
```
4. **Switch agents by overriding the model** for specific tasks:
- **Architect/Coder (default)**: `aider` or `aider --model openai/local/llama`
- **Security Review**: `aider --model openai/local/llama-cpu`
- **Quick CI/PR Tasks**: `aider --model openai/local/llama-small`
- **Research**: `aider --model openai/gemini/1.5-pro` (requires `GOOGLE_API_KEY`)
- **Power Burst**: `aider --model openai/xai/grok-code-fast-1` (requires `XAI_API_KEY`)
```bash
aider
aider --model openai/local/llama-cpu src/auth/*.py
aider --model openai/local/llama-small "Fix the linting errors in components/"
```
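The per-role invocations above can be wrapped in a small helper. The function below is a sketch: the `aider_as` name is ours, and the model routes assume the LiteLLM configuration described in step 2 (it echoes the command for illustration; swap `echo` for `exec` to actually launch Aider):

```shell
# aider_as ROLE [FILES...] — hypothetical wrapper mapping a role name
# to the corresponding model route, then building the aider command.
aider_as() {
  role="$1"; shift
  case "$role" in
    architect) model="openai/local/llama" ;;
    security)  model="openai/local/llama-cpu" ;;
    ci)        model="openai/local/llama-small" ;;
    research)  model="openai/gemini/1.5-pro" ;;       # requires GOOGLE_API_KEY
    burst)     model="openai/xai/grok-code-fast-1" ;; # requires XAI_API_KEY
    *) echo "unknown role: $role" >&2; return 1 ;;
  esac
  # Echo for illustration; replace with: exec aider --model "$model" "$@"
  echo aider --model "$model" "$@"
}
```

Usage: `aider_as security src/auth/service.py` prints the security-review invocation with the target files appended.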