Multi-agent Aider configuration for local LLM routing with role-based model switching (architect, security, CI/PR, research), optimized for M2 MacBook systems.
This skill configures Aider to work with a local LiteLLM proxy server, enabling fast model switching between different agent roles (architect/coder, security reviewer, CI/PR assistant, researcher). It is optimized for M2 MacBook systems with 16 GB of RAM, with prompt caching and performance tuning.
When a user requests this configuration:
1. **Create the `.aider.conf.yml` file** in the project root with the following content:
```yaml
model: openai/local/llama
openai-api-base: http://127.0.0.1:4000/v1
openai-api-key: sk-anything
pretty: true
stream: true
auto-commits: false
git: true
edit-format: diff
show-diffs: true
max-chat-history-tokens: 8192
cache-prompts: true
restore-chat-history: false
encoding: utf-8
ignore-globs: |
  *.log
  *.tmp
  **/node_modules/**
  **/dist/**
  **/build/**
  **/.vite/**
  **/.next/**
  **/coverage/**
  rag/index/*.duckdb*
  **/*.pyc
  **/__pycache__/**
```
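For the `openai-api-base` above to work, the LiteLLM proxy must expose routes matching the model names used in this skill. A hypothetical `config.yaml` sketch is shown below; the route names (`local/llama`, `local/llama-cpu`, `local/llama-small`, `gemini/1.5-pro`) come from the session commands in this skill, but the backends (Ollama model tags, ports) are assumptions to adapt to your setup:

```yaml
# Hypothetical LiteLLM proxy config.yaml (sketch).
# Route names match the --model flags used by this skill; backends are assumptions.
model_list:
  - model_name: local/llama          # default "Guy" architect/coder role
    litellm_params:
      model: ollama/llama3.1:8b
      api_base: http://127.0.0.1:11434
  - model_name: local/llama-cpu      # "Aegis" security review role
    litellm_params:
      model: ollama/llama3.1:8b
      api_base: http://127.0.0.1:11434
  - model_name: local/llama-small    # "Hermes" quick PR review role
    litellm_params:
      model: ollama/llama3.2:3b
      api_base: http://127.0.0.1:11434
  - model_name: gemini/1.5-pro       # "Elara" research role (cloud)
    litellm_params:
      model: gemini/gemini-1.5-pro
```

Start the proxy with `litellm --config config.yaml --port 4000` so it serves the address configured in `.aider.conf.yml`.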
2. **Explain the agent role system** to the user:
The configuration defaults to `openai/local/llama` (the "Guy" architect/coder role). Users switch roles by launching Aider with different `--model` flags, as shown in the session commands below.
3. **Verify prerequisites**: the LiteLLM proxy must be running and serving an OpenAI-compatible API at `http://127.0.0.1:4000/v1` (the `openai-api-base` in the config); the `openai-api-key` value is a placeholder, since authentication happens locally at the proxy.
4. **Key configuration features**: `edit-format: diff` keeps model output compact, `auto-commits: false` leaves git history under the user's control, and `ignore-globs` keeps logs, build artifacts, and dependency directories out of the repo map.
5. **Performance optimizations for M2 systems**: `max-chat-history-tokens: 8192` caps context growth, `cache-prompts: true` reuses prompt prefixes across turns, and `restore-chat-history: false` avoids reloading stale context at startup.
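The proxy prerequisite can be checked before launching Aider. A minimal reachability sketch, assuming the proxy address from `.aider.conf.yml`:

```shell
# Sketch: verify the LiteLLM proxy from .aider.conf.yml is reachable.
proxy_up() {
  # /v1/models is part of the OpenAI-compatible API the proxy exposes
  curl -sf --max-time 2 "http://127.0.0.1:4000/v1/models" >/dev/null
}

if proxy_up; then
  echo "LiteLLM proxy reachable"
else
  echo "proxy not reachable; start LiteLLM before launching Aider"
fi
```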
**Start default coding session (Guy role)**:
```bash
aider
```
**Security review session (Aegis role)**:
```bash
aider --model openai/local/llama-cpu src/auth/*.ts
```
**Quick PR review (Hermes role)**:
```bash
aider --model openai/local/llama-small --read-only
```
**Research mode with Gemini (Elara role)**:
```bash
export GOOGLE_API_KEY=your_key
aider --model openai/gemini/1.5-pro
```
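The four invocations above can be collected into a small helper so roles are selected by name. This is a sketch; the function name `aider_role` is hypothetical, and the flags are exactly those from the session commands above:

```shell
# Hypothetical helper mapping agent role names to Aider invocations.
# Prints the command for a role; eval or copy the output to run it.
aider_role() {
  case "$1" in
    guy)    echo "aider" ;;                                          # default architect/coder
    aegis)  echo "aider --model openai/local/llama-cpu" ;;           # security review
    hermes) echo "aider --model openai/local/llama-small --read-only" ;;  # quick PR review
    elara)  echo "aider --model openai/gemini/1.5-pro" ;;            # research (needs GOOGLE_API_KEY)
    *)      echo "unknown role: $1" >&2; return 1 ;;
  esac
}
```

Usage: `$(aider_role aegis) src/auth/*.ts` launches the security-review session on the given files.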