# LM Studio Qwen3 Coder Configuration
Configure Aider to work with LM Studio running Qwen3-Coder-30B locally, using whole-file editing mode with a streamlined git workflow and optimized performance settings.
## What This Configuration Does
This `.aider.conf.yaml` file configures Aider to:
- Connect to LM Studio's local OpenAI-compatible API
- Use the Qwen3-Coder-30B model for code generation
- Enable whole-file editing mode for clearer diffs
- Auto-accept architectural changes without prompting
- Reduce noise with minimal warnings and streamlined output
- Maintain git integration without auto-commits

## Instructions
1. **Verify LM Studio is running** with Qwen3-Coder-30B loaded and the local server started on `http://localhost:1234`
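   Before involving Aider, you can confirm the server is up and the model is loaded by querying the OpenAI-compatible `/v1/models` endpoint. A minimal stdlib-only sketch (the URL assumes LM Studio's default port; `list_models` is an illustrative helper, not part of Aider):

   ```python
   import json
   import urllib.request

   API_BASE = "http://localhost:1234/v1"  # LM Studio's default local server address

   def list_models(api_base: str = API_BASE) -> list:
       """Return model IDs reported by the OpenAI-compatible /models endpoint."""
       try:
           with urllib.request.urlopen(f"{api_base}/models", timeout=5) as resp:
               payload = json.load(resp)
           return [m["id"] for m in payload.get("data", [])]
       except OSError:
           return []  # server not running or unreachable

   models = list_models()
   print("Loaded models:", models if models else "(server not reachable)")
   ```

   If `qwen3-coder-30b` does not appear in the list, load it in LM Studio before starting Aider.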
2. **Create the configuration file** in your project root as `.aider.conf.yaml`:
```yaml
############################
# LLM (LM Studio)
############################
model: openai/qwen3-coder-30b
openai-api-base: http://localhost:1234/v1
openai-api-key: lmstudio
verify-ssl: false
timeout: 300
############################
# Editing behavior
############################
edit-format: whole
architect: false
auto-accept-architect: true
show-diffs: false
yes-always: true
############################
# Repo-map
############################
map-tokens: 1024
map-refresh: auto
############################
# Git safety
############################
git: true
gitignore: false
auto-commits: false
dirty-commits: false
git-commit-verify: false
############################
# Output / UX
############################
pretty: true
stream: true
dark-mode: true
############################
# Noise reduction
############################
show-model-warnings: false
max-chat-history-tokens: 12000
```
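   Since every setting above is a flat `key: value` pair, you can sanity-check the file before launch without a YAML library. A minimal stdlib-only sketch (`load_flat_config` and the inline sample are illustrative, not part of Aider):

   ```python
   def load_flat_config(text: str) -> dict:
       """Parse the flat `key: value` YAML subset used above (skips comments and blanks)."""
       config = {}
       for line in text.splitlines():
           line = line.strip()
           if not line or line.startswith("#"):
               continue
           key, sep, value = line.partition(":")  # split at the first colon only
           if sep:  # ignore malformed lines without a colon
               config[key.strip()] = value.strip()
       return config

   sample = """\
   # LLM (LM Studio)
   model: openai/qwen3-coder-30b
   edit-format: whole
   timeout: 300
   """
   config = load_flat_config(sample)
   print(config["model"], config["edit-format"])  # → openai/qwen3-coder-30b whole
   ```

   Splitting at the first colon keeps URL values such as `http://localhost:1234/v1` intact.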
3. **Launch Aider** in your project directory - it will automatically load this configuration:
```bash
aider
```
4. **Start coding** - Aider will now use whole-file editing with LM Studio's local model
## Configuration Breakdown

### LLM Settings
- **model**: Points to Qwen3-Coder-30B via the OpenAI-compatible endpoint
- **openai-api-base**: LM Studio's local server address
- **timeout**: 300 seconds for longer generation tasks
- **verify-ssl**: Disabled for local connections

### Editing Behavior
- **edit-format: whole** - Replaces entire files instead of applying diff-based edits (clearer for review)
- **architect: false** - Disables architectural planning mode
- **auto-accept-architect: true** - Auto-accepts architectural changes when they occur
- **yes-always: true** - Skips confirmation prompts for a faster workflow

### Repo-map
- **map-tokens: 1024** - Moderate token budget for the repository map
- **map-refresh: auto** - Updates the repo map automatically as needed

### Git Safety
- **git: true** - Enables git integration
- **auto-commits: false** - You control when commits happen
- **dirty-commits: false** - Prevents Aider from committing when the repo already has uncommitted changes
- **git-commit-verify: false** - Skips pre-commit verification hooks

### Output Settings
- **pretty: true** - Formatted output with syntax highlighting
- **stream: true** - Real-time streaming of model responses
- **dark-mode: true** - Dark theme for terminal output
- **show-model-warnings: false** - Hides warnings about unrecognized model settings
- **max-chat-history-tokens: 12000** - Keeps a reasonable conversation context

## Important Notes
- **LM Studio must be running** before launching Aider - verify the model is loaded and the server is active
- **Whole-file editing** means Aider replaces entire files rather than applying patches - easier to review but uses more tokens
- **No auto-commits** means you control the git workflow - review changes and commit manually
- **Local model** means no API costs and full privacy - all processing happens on your machine
- Adjust the `timeout` value if you experience model response timeouts on complex tasks

## Customization Tips
- Change `map-tokens` to 2048 or 4096 for larger codebases
- Set `auto-commits: true` if you prefer automatic git commits after each change
- Switch `edit-format` to `diff` for token efficiency on large files
- Increase `max-chat-history-tokens` to 24000 for longer conversations
- Change the port in `openai-api-base` if LM Studio runs on a different port
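The tips above can be combined; for example, a variant tuned for a larger codebase with LM Studio on a non-default port might look like this (illustrative values; adjust to your setup):

```yaml
model: openai/qwen3-coder-30b
openai-api-base: http://localhost:5678/v1  # LM Studio on a custom port
edit-format: diff                          # more token-efficient on large files
map-tokens: 4096                           # bigger repo map for large codebases
max-chat-history-tokens: 24000             # longer conversation history
```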