Multi-agent Aider configuration for local LLM routing with role-based model switching and optimized settings for M2 16GB systems
A sophisticated Aider configuration for agent-centric development workflows, with local LLM routing via a LiteLLM proxy. It is designed for efficient resource usage on M2 16GB systems while supporting multiple specialized agent roles.
Aider is pointed at a local LiteLLM proxy server that routes requests to different models based on each role's needs. The setup also includes performance optimizations, git integration, and file-ignore patterns suited to modern development workflows.
This `.aider.conf.yaml` file sets up the proxy endpoint, diff-based edits, git integration, a capped chat history, and the ignore patterns shown below.
The configuration supports multiple agent personas through model switching:
| Agent | Model Route | Purpose | Requirements |
|-------|-------------|---------|--------------|
| **guy** | `openai/local/llama` | Architect/coder (default) | Local LiteLLM |
| **aegis** | `openai/local/llama-cpu` | Security review | Local LiteLLM |
| **hermes** | `openai/local/llama-small` | Quick CI/PR tasks | Local LiteLLM |
| **elara** | `openai/gemini/1.5-pro` | Research | `GOOGLE_API_KEY` |
| **power** | `openai/xai/grok-code-fast-1` | Power burst coding | `XAI_API_KEY` |
First, install the LiteLLM proxy (quote the package extra so shells like zsh don't expand the brackets):
```bash
pip install 'litellm[proxy]'
```
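To sanity-check the install, the CLI's help output confirms the `litellm` entry point is on your PATH:
```bash
litellm --help
```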
Create a `litellm_config.yaml` file with your model routes:
```yaml
model_list:
  - model_name: local/llama
    litellm_params:
      model: ollama/qwen2.5-coder:14b
      api_base: http://localhost:11434
  - model_name: local/llama-cpu
    litellm_params:
      model: ollama/qwen2.5-coder:7b
      api_base: http://localhost:11434
  - model_name: local/llama-small
    litellm_params:
      model: ollama/qwen2.5-coder:3b
      api_base: http://localhost:11434

general_settings:
  master_key: sk-anything
```
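The table above also lists cloud routes for **elara** and **power**. A minimal sketch of how those could be added to the same `model_list`, assuming LiteLLM's `gemini/` and `xai/` providers and its `os.environ/` key syntax (verify the exact model identifiers against LiteLLM's docs):
```yaml
model_list:
  # ...local routes from above...
  - model_name: gemini/1.5-pro
    litellm_params:
      model: gemini/gemini-1.5-pro        # assumed provider/model string
      api_key: os.environ/GOOGLE_API_KEY  # read from the proxy's environment
  - model_name: xai/grok-code-fast-1
    litellm_params:
      model: xai/grok-code-fast-1         # assumed provider/model string
      api_key: os.environ/XAI_API_KEY
```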
Then start the proxy:
```bash
litellm --config litellm_config.yaml --port 4000
```
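To verify the proxy is up, query its OpenAI-compatible models endpoint with the master key from the config (standard LiteLLM proxy behavior):
```bash
curl http://127.0.0.1:4000/v1/models \
  -H "Authorization: Bearer sk-anything"
```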
Save the following as `.aider.conf.yaml` in your project root:
```yaml
model: openai/local/llama
openai-api-base: http://127.0.0.1:4000/v1
openai-api-key: sk-anything
pretty: true
stream: true
auto-commits: false
git: true
edit-format: diff
show-diffs: true
max-chat-history-tokens: 8192
cache-prompts: true
restore-chat-history: false
encoding: utf-8
ignore-globs: |
  *.log
  *.tmp
  **/node_modules/**
  **/dist/**
  **/build/**
  **/.vite/**
  **/.next/**
  **/coverage/**
  rag/index/*.duckdb*
  **/*.pyc
  **/__pycache__/**
```
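With the proxy running and this file in the project root, a one-shot smoke test confirms the routing end to end (`--message` sends a single prompt and exits; `--yes-always` skips confirmation prompts, and older aider versions call it `--yes`):
```bash
aider --message "say hello" --yes-always
```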
**Default agent (guy - architect/coder):**
```bash
aider
```
**Security review agent (aegis):**
```bash
aider --model openai/local/llama-cpu
```
**Quick CI/PR agent (hermes):**
```bash
aider --model openai/local/llama-small
```
**Research agent (elara) - requires `GOOGLE_API_KEY`, which must be visible to the LiteLLM proxy process (the proxy, not aider, makes the Gemini call):**
```bash
export GOOGLE_API_KEY=your_key_here
aider --model openai/gemini/1.5-pro
```
**Power burst agent (power) - requires `XAI_API_KEY`, likewise read by the proxy:**
```bash
export XAI_API_KEY=your_key_here
aider --model openai/xai/grok-code-fast-1
```
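Since each persona is just a `--model` flag, one optional convenience is a set of shell aliases wrapping the commands above:
```bash
alias guy='aider'                                        # architect/coder (default)
alias aegis='aider --model openai/local/llama-cpu'       # security review
alias hermes='aider --model openai/local/llama-small'    # quick CI/PR tasks
alias elara='aider --model openai/gemini/1.5-pro'        # research
alias power='aider --model openai/xai/grok-code-fast-1'  # power burst coding
```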
The `ignore-globs` block above automatically keeps logs, temp files, `node_modules`, build and coverage output, the RAG DuckDB index, and Python bytecode out of the chat context.
For systems with more or less RAM, adjust the chat-history cap:
```yaml
max-chat-history-tokens: 16384 # More context (needs more RAM)
max-chat-history-tokens: 4096 # Less context (saves RAM)
```
To add a custom Ollama model, extend `model_list` in your `litellm_config.yaml`:
```yaml
model_list:
  - model_name: local/custom-model
    litellm_params:
      model: ollama/your-model:tag
      api_base: http://localhost:11434
```
Then use with:
```bash
aider --model openai/local/custom-model
```
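Make sure the underlying model has actually been pulled into Ollama first (`your-model:tag` is the placeholder from the config above):
```bash
ollama pull your-model:tag
```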
To ignore additional paths, extend the `ignore-globs` section:
```yaml
ignore-globs: |
  # ... existing patterns ...
  **/your-custom-dir/**
  *.your-extension
```
**Connection error:** Make sure the LiteLLM proxy is running on port 4000:
```bash
litellm --config litellm_config.yaml --port 4000
```
**Model not found:** Pull the missing Ollama model referenced in `litellm_config.yaml`:
```bash
ollama pull qwen2.5-coder:14b
```
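To check which models are already available locally:
```bash
ollama list
```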
**Out of memory:** Switch to a smaller route (`openai/local/llama-small`) or lower `max-chat-history-tokens` as shown in the tuning section above.