# Local Ollama Aider Setup
Configure Aider to use local Ollama models for AI-assisted coding with optimized settings for performance, git integration, and developer experience.
## What This Does
Sets up Aider with:
- Qwen2.5-Coder models (7B for main tasks, 3B for simple operations)
- Automatic git commits and linting
- Optimized diff-based editing
- Enhanced chat history and context management
- Dark mode terminal output with streaming responses

## Instructions
1. **Verify Ollama Installation**
- Check if Ollama is running: `curl http://localhost:11434/api/tags`
- If not installed, visit https://ollama.ai to install Ollama first
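   A quick scripted check (a minimal sketch; `localhost:11434` is Ollama's default address):

   ```bash
   # Succeeds silently if Ollama's API answers on the default port
   if curl -sf http://localhost:11434/api/tags > /dev/null; then
       echo "Ollama is running"
   else
       echo "Ollama is not reachable on localhost:11434" >&2
   fi
   ```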
2. **Pull Required Models**
- Pull the main model: `ollama pull qwen2.5-coder:7b`
- Pull the weak model: `ollama pull qwen2.5-coder:3b`
- Verify models are available: `ollama list`
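   Both pulls can be scripted idempotently (a minimal sketch; skips models that `ollama list` already shows):

   ```bash
   # Pull each required model only if it is not already present
   for model in qwen2.5-coder:7b qwen2.5-coder:3b; do
       ollama list | grep -q "$model" || ollama pull "$model"
   done
   ```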
3. **Create Aider Configuration File**
- Location: `~/.aider.conf.yml` (Linux/macOS) or `%USERPROFILE%\.aider.conf.yml` (Windows)
- Create the file with the following content:
```yaml
# Aider Configuration for Local Ollama Models
# Documentation: https://aider.chat/docs/config.html

# Default model - Qwen2.5-Coder 7B (Fast & Smart!)
model: ollama/qwen2.5-coder:7b

# Weak model for simpler tasks (faster)
weak-model: ollama/qwen2.5-coder:3b

# Edit format - how aider makes changes
# Options: diff, whole, udiff, diff-fenced
edit-format: diff

# Git integration
auto-commits: true
dirty-commits: true
auto-lint: true

# Output preferences
pretty: true
stream: true
dark-mode: true

# Show git diffs when reviewing changes
show-diffs: true

# Map settings (code context)
map-tokens: 2048
map-refresh: auto

# Chat history
restore-chat-history: true

# Don't show model warnings for local models
show-model-warnings: false

# Ollama connection (default is localhost:11434)
# If your Ollama runs elsewhere, uncomment and modify:
# openai-api-base: http://localhost:11434/v1
```
4. **Test the Configuration**
- Navigate to a git repository: `cd /path/to/your/project`
- Start Aider: `aider`
- Verify it connects to Ollama and loads the correct model
- Try a simple request: "Create a hello.py file with a hello world function"
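   The same test can be scripted non-interactively (a sketch assuming the config above; `--yes` auto-confirms aider's prompts):

   ```bash
   cd /path/to/your/project
   aider --yes --message "Create a hello.py file with a hello world function"
   # With auto-commits enabled, the edit should appear as the latest commit
   git log --oneline -1
   ```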
5. **Customize Settings (Optional)**
- If Ollama runs on a different host/port, uncomment `openai-api-base` and update the URL
- Adjust `map-tokens` (default: 2048) based on your model's context window
- Change `edit-format` to `whole` if you prefer complete file rewrites instead of diffs
- Set `auto-commits: false` if you prefer manual git commits
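   Any of these can also be changed for a single session via command-line flags instead of editing the file (a sketch using aider's documented flag names):

   ```bash
   # One session with whole-file rewrites and manual commits
   aider --edit-format whole --no-auto-commits

   # One session with a smaller repo map for faster responses
   aider --map-tokens 1024
   ```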
## Configuration Options Explained
- **model**: Primary model for complex coding tasks (7B parameter version for better quality)
- **weak-model**: Lighter model for simple operations (3B version for faster responses)
- **edit-format**: `diff` mode shows only changes, reducing token usage and making edits clearer
- **auto-commits**: Automatically commits changes after each edit with descriptive messages
- **dirty-commits**: Allows commits even with uncommitted changes (useful for iterative development)
- **auto-lint**: Runs configured linters after edits to catch issues immediately
- **pretty/stream/dark-mode**: Enhanced terminal output for better readability
- **show-diffs**: Display git diffs for review before accepting changes
- **map-tokens**: Token budget for the repo map aider builds for code context (2048 is balanced for most projects)
- **map-refresh**: Auto-updates code context as files change
- **restore-chat-history**: Resumes previous conversations when restarting Aider

## Usage Examples
After setup, use Aider with these commands:
```bash
# Start Aider in the current directory
aider

# Add specific files to context
aider file1.py file2.js

# Use in read-only mode (no edits)
aider --read-only

# One-off request without interactive mode
aider --message "Add error handling to process_data function"
```
## Important Notes
- **Ollama must be running** before starting Aider (check with `ollama list`)
- The 7B model requires ~8GB RAM; the 3B model needs ~4GB
- First inference may be slow as models load into memory
- A git repository is required for auto-commit features to work
- Configuration applies globally; override with command-line flags per session
- For remote Ollama instances, update `openai-api-base` with the correct URL (see the sketch below)
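For the remote case, aider's documentation also describes an `OLLAMA_API_BASE` environment variable for `ollama/` models; a minimal sketch (the host address is a placeholder):

```bash
# Point aider at an Ollama instance running on another machine
export OLLAMA_API_BASE=http://192.168.1.50:11434
aider
```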
**"Model not found" error**: Run `ollama pull qwen2.5-coder:7b` and `ollama pull qwen2.5-coder:3b`**Connection refused**: Verify Ollama is running with `ollama serve`**Slow responses**: Consider using only the 3B model or reducing `map-tokens`**Auto-commits failing**: Ensure you're in a git repository with `git init`